
AWS S3 Log Delivery

RTLD may automatically deliver compressed log data to an AWS S3 bucket by submitting HTTPS PUT requests to it. Each request adds an object to the bucket. This object contains a compressed JSON or CSV document that uniquely identifies a set of log data and describes one or more log entries.
Key information:
  • The set of available log fields varies by RTLD module: RTLD CDN | RTLD WAF | RTLD Rate Limiting | RTLD Bot | RTLD Cloud Functions
  • RTLD applies gzip compression to log data. AWS S3 stores compressed log data as an object with a gz file extension.
    Learn more.
  • AWS S3 may automatically decompress files downloaded via the S3 Management Console into JSON or CSV files; no additional decompression is required to process files retrieved this way. A sketch of retrieving and decompressing log objects programmatically appears after this list.
  • RTLD requires a bucket policy that authorizes our service to upload content to your bucket.
  • If you have enabled server-side encryption on the desired AWS S3 bucket, then you must also enable default bucket encryption. Otherwise, RTLD will be unable to post log data to that bucket.
    RTLD does not include Amazon-specific encryption headers when posting log data to your bucket.
    View AWS documentation on default bucket encryption.
  • You may define a prefix when setting up a log delivery profile. This prefix defines a virtual log file storage location and/or a prefix that will be prepended to the name of each object added to your bucket. Use the following guidelines when setting this prefix:
    • A prefix should not start with a forward slash.
    • A forward slash within the specified prefix is interpreted as a delimiter for a virtual directory.
    • A trailing forward slash means that the specified value only defines a virtual directory path within your bucket where logs will be stored. If the specified value ends in a character other than a forward slash, then the characters after the last forward slash will be prepended to the file name of each log file uploaded to your destination.
    Sample prefix: logs/CDN/siteA_
    The above prefix will store log files in the following virtual directory: /logs/CDN
    The file name for each log file uploaded to your destination will start with siteA_.
    Sample log file name: siteA_wpc_0001_123_20220111_50550000F98AB95B_1.json
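The following is a minimal sketch, assuming Python with boto3 and the sample prefix shown above, of how delivered log objects can be retrieved and decompressed programmatically. The bucket name, prefix, and the top-level "logs" key are assumptions; adjust them to your own profile and log format.

  import gzip
  import json
  import boto3

  s3 = boto3.client("s3")
  bucket = "my-log-bucket"    # placeholder: your AWS S3 bucket name
  prefix = "logs/CDN/siteA_"  # matches the sample prefix above

  # List the gzip-compressed log objects that RTLD has delivered under the prefix.
  response = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
  for entry in response.get("Contents", []):
      # Each object is a compressed JSON (or CSV) document with a gz file extension.
      body = s3.get_object(Bucket=bucket, Key=entry["Key"])["Body"].read()
      document = json.loads(gzip.decompress(body))
      # The "logs" key is assumed here for the standard JSON format; JSON array,
      # JSON lines, and CSV formats would be parsed differently.
      print(entry["Key"], len(document.get("logs", [])), "log entries")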
To prepare for log delivery
  1. Create or identify an AWS S3 bucket to which log data will be posted.
    View AWS documentation on how to create a bucket.
  2. Apply the following bucket policy to the AWS S3 bucket identified in step 1. This bucket policy authorizes our service to upload content to your bucket.
    View AWS documentation on how to add a bucket policy.
    AWS S3 bucket policy:
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "CDNRealTimeLogDelivery",
        "Effect": "Allow",
        "Principal": {
          "AWS": "arn:aws:iam::638349102478:user/real-time-log-delivery"
        },
        "Action": [
          "s3:PutObject",
          "s3:GetBucketLocation",
          "s3:PutObjectTagging",
          "s3:PutObjectACL"
        ],
        "Resource": [
          "arn:aws:s3:::BUCKET-NAME",
          "arn:aws:s3:::BUCKET-NAME/*"
        ]
      }]
    }
    Replace the term BUCKET-NAME in both Resource entries with the name of the AWS S3 bucket to which this policy is being applied.
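    If you prefer to apply this policy programmatically instead of through the S3 Management Console, the following is a minimal sketch assuming Python with boto3; my-log-bucket is a placeholder for the bucket identified in step 1.

    import json
    import boto3

    bucket = "my-log-bucket"  # placeholder: the bucket identified in step 1

    # The same bucket policy shown above, built as a Python dictionary.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "CDNRealTimeLogDelivery",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::638349102478:user/real-time-log-delivery"},
            "Action": [
                "s3:PutObject",
                "s3:GetBucketLocation",
                "s3:PutObjectTagging",
                "s3:PutObjectACL"
            ],
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"]
        }]
    }

    # Apply the policy so that RTLD can upload log objects to the bucket.
    boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))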
  3. If you have enabled server-side encryption on the AWS S3 bucket identified in step 1, then you must also enable default bucket encryption.
    View AWS documentation on default bucket encryption.
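    Default bucket encryption can also be enabled programmatically. The following is a minimal sketch assuming Python with boto3 and SSE-S3 (AES256) as the default algorithm; substitute your own encryption configuration as needed.

    import boto3

    # Enable default bucket encryption so that objects uploaded by RTLD, which
    # carry no Amazon-specific encryption headers, are still encrypted at rest.
    boto3.client("s3").put_bucket_encryption(
        Bucket="my-log-bucket",  # placeholder: the bucket identified in step 1
        ServerSideEncryptionConfiguration={
            "Rules": [
                {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
            ]
        },
    )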
  4. Optional. Set up AWS to process the log data that will be posted to your bucket.
    Example:
    Leverage AWS Lambda to mine specific data from log entries.
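    A minimal sketch of such a Lambda function, assuming Python, a handler triggered by an S3 ObjectCreated event notification on your bucket, and the standard JSON log format, might look like this. The "logs" and "status_code" names are illustrative; actual field names depend on the RTLD module and the log fields enabled in your profile.

    import gzip
    import json
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # Invoked once per S3 ObjectCreated notification, i.e. per log object
        # that RTLD uploads to the bucket.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            document = json.loads(gzip.decompress(body))
            # Mine specific data from each log entry (field names are illustrative).
            for entry in document.get("logs", []):
                if int(entry.get("status_code", 0)) >= 500:
                    print(key, entry)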
  5. Upon completing the above steps, you should create a log delivery profile for AWS S3.
To set up a log delivery profile
  1. From the Real-Time Log Delivery page, click + New Log Delivery Profile and then select the desired type of log data.
    1. Open the desired property.
      1. Select either your private space or a team space.
      2. Click on the desired property.
    2. From the left pane, click on the desired environment.
    3. From the left pane, click Realtime Log Delivery.
    4. Click + New Log Delivery Profile and then select either CDN, WAF, Rate Limiting, Bot, or Cloud Functions.
  2. From the Profile Name option, assign a name to this log delivery profile.
  3. From the Log Delivery Method option, select AWS S3.
  4. Define how RTLD will communicate with AWS S3.
    1. Set the Bucket option to the name of the AWS S3 bucket to which log data will be posted.
    2. Optional. Set the Prefix option to the desired prefix that defines a virtual log file storage location and/or a prefix that will be prepended to the name of each object added to your bucket.
      Learn more about prefixes.
    3. From the AWS Region option, select the region assigned to the AWS S3 bucket.
  5. From the Log Format option, select whether to format log data using our standard JSON format, as a JSON array, as JSON lines, or as a CSV (RTLD CDN only).
    Learn more about these formats: RTLD CDN | RTLD WAF | RTLD Rate Limiting | RTLD Bot | RTLD Cloud Functions
  6. From the Downsample the Logs option, determine whether to reduce the amount of log data that will be delivered. For example, you may choose to only deliver 1% of your log data.
    • All Log Data: Verify that the Downsample the Logs option is cleared.
    • Downsampled Log Data: Downsample logs to 0.1%, 1%, 25%, 50%, or 75% of total log data by marking the Downsample the Logs option and then selecting the desired rate from the Downsampling Rate option.
      Use this capability to reduce the amount of log data that must be stored in your bucket and processed downstream.
      RTLD CDN Only: Downsampling log data also reduces usage charges for this service.
  7. Determine whether log data will be filtered.
  8. By default, all log fields are enabled on a new log delivery profile. From within the Fields section, clear each field for which log data should not be reported.
  9. Click Create Log Delivery Profile.