Log Delivery Service (LDS) collects log data from CDN services and delivers it to a destination of your choice. It seamlessly integrates with platforms like Amazon S3, Google Cloud Storage, Hydrolix, Edgio Origin Storage, and others. LDS offers valuable insights into CDN performance and your traffic patterns.
Log field names, delimiters, date and time format, file name, and directory structure adhere to W3C/ISO standards.
Log Delivery List Page
Navigate to Configure > Log Delivery Service in the navigation pane. The Log Delivery Service page is displayed and initially shows configurations for the account selected in the drop-down menu on the right above the list.
Each configuration in the list includes this information:
Field | Description/Instructions |
---|---|
CONFIGURATION NAME | Customer-assigned configuration name. |
SHORTNAME | Currently selected account name. |
SERVICE TYPE | Delivery service for which logs will be created. (HTTP or MMD Live Ingest). |
DESTINATION | Log file location: Amazon S3, Custom HTTPS endpoint, Datadog, Google Cloud Storage, Hydrolix, or Origin Storage. |
DATA COMPRESSION | File compression method. |
STATE | Identifies whether the configuration is actively used, suspended, etc. |
STATUS | Configuration status. When you create and save a configuration, it goes through a validation process. |
LAST UPDATED | Configuration’s creation or last modified date. |
Choose an Account
Each account has its own set of configurations. You can choose an account to work with from the drop-down menu in the top right corner above the list.
This selector is narrower in scope than the company/account selector at the top of the page: it is limited to accounts that your user can access and that have the product enabled.
Create a Log Delivery Configuration
You can create a single configuration for any combination of shortname, destination, and service type.
1. Click the + button at the top of the Log Delivery List Page. The Add Configuration page is displayed. If you choose to store logs in Origin Storage, you will see a message warning you about extra fees.
2. Fill out the fields at the top of the page, noting that required fields are marked with an asterisk in the user interface. See Log Delivery Configuration Fields for details.
3. Select fields to include in log files. See Configuring Log Fields.
4. Save the configuration by clicking the Save button.
It can take 15 to 40 minutes for a new configuration to take effect.
Edit a Log Delivery Configuration
1. Click the configuration’s row on the Log Delivery List page. The configuration is displayed in edit mode. If the configuration’s destination is Origin Storage, you will see a message warning you about extra fees for storing logs in Origin Storage.
   - Existing configurations include DIRECTORY LAYOUT and FILE NAME TEMPLATE fields.
   - If your user does not have ‘Manage’ permissions, all fields are disabled and you cannot modify the configuration.
2. Modify fields as needed. See Log Delivery Configuration Fields and Configuring Log Fields for details.
3. Save the configuration by clicking the Save button.
It can take 15 to 40 minutes for changes to take effect. Depending on your permissions, you may not be able to edit a configuration.
Configure Log Fields
You can add, remove, and reorder active log fields. You can also add static fields.
Move Fields between Lists
- Drag and drop individual fields from one set to another.
- Move all fields using the button beneath the Selected log fields set.
- Click SELECT ALL to move all fields from the Available log fields set to the Selected log fields set. The button’s text changes to ‘DESELECT ALL’.
- Click DESELECT ALL to move all fields from the Selected set to the Available set. The button’s text changes to ‘SELECT ALL’.
Reorder Selected Fields
Drag and drop individual fields to reorder them.
Work with Static Fields
Static fields are user-defined fields with a value that does not change.
To add a static field:
1. Click the ADD STATIC FIELD button; then enter a field name and value in the subsequent dialog.
2. Click ADD ACTIVE FIELD.
The field is added to the Available log fields set. From there you can move it to the Selected log fields set.
To edit or delete a static field:
1. Click the field.
2. In the subsequent dialog, enter a new value and click SAVE, or click the DELETE button.
Delete a Log Delivery Configuration
1. Click the configuration’s row in the Log Delivery List page. The configuration is displayed.
2. Click the DELETE button at the bottom of the page.
3. Agree to the deletion in the subsequent confirmation dialog. Control deletes the configuration.
It can take 15 to 40 minutes for the deletion to take effect.
Deactivate/Activate a Log Delivery Service Configuration
Deactivate
You can deactivate a configuration, for example to stop it from gathering log data.
1. Click the configuration’s row in the Log Delivery List page. The configuration is displayed.
2. Click the DEACTIVATE button at the bottom of the page. A confirmation message is displayed at the top right of the page, and the button’s label changes to ACTIVATE. The configuration’s status on the Log Delivery List page changes to Deactivated.
It can take 5 to 10 minutes for a deactivation to take effect.
Activate
You can reactivate a deactivated configuration.
1. Click the configuration’s row in the Log Delivery List page. The configuration is displayed.
2. Click the ACTIVATE button at the bottom of the page. A confirmation message is displayed at the top right of the page, and the button’s label changes to DEACTIVATE. The configuration’s status on the Log Delivery List page changes to the state it was in before it was deactivated.
It can take 5 to 10 minutes for an activation to take effect.
Enable Log Delivery to a Destination
Amazon S3
You can store your log files on the Amazon S3 platform. Amazon S3 is a cloud object storage service built to store and retrieve data.
Prerequisites
Before configuring Amazon S3 as a destination, you must do the following:
1. Create an S3 Identity and Access Management (IAM) user in Amazon’s configuration screens.
2. Give the IAM user the following permissions for the bucket where you want to store logs (a sample policy follows this list):
   - ListBucket
   - GetObject
   - PutObject
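For reference, a minimal IAM policy granting these three permissions might look like the following sketch; the bucket name my-log-bucket is a placeholder.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-log-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-log-bucket/*"
    }
  ]
}
```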
Configuration Fields
These are visible only when you select Amazon S3 as the destination.
Field | Description |
---|---|
REGION | S3 bucket geographic area. |
BUCKET NAME | S3 bucket title. |
PATH | Path within the bucket where logs are stored. Do not add a leading slash to the path. If you do, Amazon creates an object URL with a double slash. Example: https://bucket.s3.region.amazonaws.com//cdn_logs... |
ACCESS KEY | Bucket access key provided by Amazon. |
SECRET KEY | Bucket secret key provided by Amazon. After you set the secret key and save the configuration, the key is not visible, but you can enter a new key if needed and save the configuration. |
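If you want to sanity-check the credentials before saving the configuration, a short boto3 sketch along these lines can confirm the key pair can write and list under the configured path. This is an illustration only; the region, bucket, path, and keys shown are placeholders.

```python
# Hypothetical smoke test for the S3 values used in an LDS configuration.
# pip install boto3
import boto3

s3 = boto3.client(
    "s3",
    region_name="us-east-1",            # REGION
    aws_access_key_id="AKIA...",        # ACCESS KEY
    aws_secret_access_key="...",        # SECRET KEY
)

# PutObject: write a test object under the configured path (no leading slash).
s3.put_object(Bucket="my-log-bucket", Key="cdn_logs/lds_test.txt", Body=b"test")

# ListBucket: confirm the object is visible under the prefix.
resp = s3.list_objects_v2(Bucket="my-log-bucket", Prefix="cdn_logs/")
print([obj["Key"] for obj in resp.get("Contents", [])])
```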
Custom HTTPS Endpoint
LDS supports log data streaming to a custom HTTPS endpoint using POST requests.
Configure a Custom HTTPS endpoint as LDS destination
1. Select Custom HTTPS endpoint in the DESTINATION drop-down menu.
2. Configure the fields as described in HTTPS Configuration Fields.
3. Click SAVE.
HTTPS Configuration Fields
Field | Description |
---|---|
URL | HTTPS URL that accepts POST requests. |
AUTHORIZATION HEADER VALUE | (optional) Authorization header value to use when sending logs (e.g., Basic <Base64-encoded username and password>, Bearer <Your API key>). |
CUSTOM HEADER NAME | (optional) Custom HTTP header name to use when sending logs. (Content-Type, Encoding, Authorization, Host are not supported). |
CUSTOM HEADER VALUE | (optional) Custom HTTP header value to use when sending logs. |
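The body of each POST depends on your DATA FORMAT setting. As a rough sketch of the receiving side, the stdlib server below accepts POSTed log batches and rejects requests whose Authorization header does not match the configured value; the header value and port are placeholders, and a production endpoint would sit behind TLS.

```python
# Minimal sketch of a custom HTTPS endpoint consumer (illustrative only).
from http.server import BaseHTTPRequestHandler, HTTPServer

EXPECTED_AUTH = "Bearer my-api-key"  # must match AUTHORIZATION HEADER VALUE

class LogReceiver(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.headers.get("Authorization") != EXPECTED_AUTH:
            self.send_response(401)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print("Received %d bytes of log data" % len(body))  # persist/process here
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8443), LogReceiver).serve_forever()
```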
Datadog
Prerequisites
- A Datadog account: Use an existing account or create a new one.
- A Datadog API key: Generate via Datadog. (See Datadog’s documentation on API and Application Keys.)
Configure the Datadog Location
1. Select Datadog in the DESTINATION drop-down menu.
2. Configure the fields as described in Datadog Configuration Fields.
3. Click SAVE.
Datadog Configuration Fields
Field | Description |
---|---|
Site | Datadog site region that matches your Datadog environment. |
API Key | API key associated with your Datadog account. |
Service | (optional) The value to use as the ‘service’ property in Datadog. |
Tags | (optional) Comma-separated list of tags to send with logs (e.g., cdn:edgio). |
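Before saving, you can verify that the API key is valid for the chosen site by calling Datadog’s key-validation endpoint, as in this sketch; the site and key values are placeholders.

```python
# Hypothetical check that an API key is valid for a given Datadog site.
import requests

site = "datadoghq.com"        # Site (e.g., datadoghq.com, datadoghq.eu)
api_key = "<your-api-key>"    # API Key

resp = requests.get(f"https://api.{site}/api/v1/validate",
                    headers={"DD-API-KEY": api_key})
print(resp.status_code, resp.json())  # expect 200 and {"valid": true}
```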
Google Cloud Storage
You can store your log files on the Google Cloud Storage platform. Google Cloud Storage is a service for storing and accessing your data on Google Cloud Platform infrastructure.
Prerequisites
Before configuring Google Cloud Storage as a destination, you must do the following:
1. Create a Google Cloud Project (GCP) or use an existing project. See Google’s Creating and managing projects guide for instructions.
2. Set up a GCP bucket to store your logs. You can create a new bucket or use an existing one. See Google’s Create Storage Buckets guide for instructions.
3. Create a Google service account that LDS will use to access your bucket. See Google’s Service accounts guide for instructions.
4. Using Google’s IAM roles for Cloud Storage guide, grant the following roles on the bucket:
   - Storage Object Creator (storage.objectCreator)
   - Storage Object Viewer (storage.objectViewer)
5. Add the service account as a member of the bucket you created in step 2.
6. Generate JSON access keys for the service account. See Google’s Creating service account keys guide for instructions.
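To confirm that the service account and its JSON key have the roles granted above, a quick sketch with the google-cloud-storage client library might look like the following; the key file path and bucket name are placeholders.

```python
# Hypothetical check that the service account key can create and read objects.
# pip install google-cloud-storage
from google.cloud import storage

client = storage.Client.from_service_account_json("service-account-key.json")
bucket = client.bucket("my-log-bucket")        # BUCKET NAME

# Storage Object Creator: write a test object (no leading slash in the path).
blob = bucket.blob("cdn_logs/lds_test.txt")
blob.upload_from_string("test")

# Storage Object Viewer: read it back.
print(blob.download_as_text())
```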
Configure a Google Cloud Destination
1. Select Google Cloud Storage in the DESTINATION drop-down menu.
2. Configure the fields described in Configuration Fields.
3. Click SAVE.
Configuration Fields
These are visible only when you select Google Cloud Storage as the destination. Required fields are marked with an asterisk in the Control user interface.
Field | Description |
---|---|
CLIENT EMAIL | Value of the client_email field in the JSON file associated with the Google service account you created. |
SECRET KEY | Value of the private_key field in the JSON file associated with the Google service account you created. After you set the secret key and save the configuration, the key is not visible, but you can enter a new key if needed and save the configuration. |
BUCKET NAME | Title of the Google Cloud Storage bucket you created. |
PATH | Path within the bucket where logs are stored. Defaults to an empty value. Do not add a leading slash to the path. If you do, Google Cloud Storage creates an object URL with a double slash. Example: gs://bucket_name//cdn_logs/... |
Hydrolix
You can configure LDS to stream log data to the Hydrolix platform.
Token-based authentication is not currently supported.
Prerequisites
Before configuring Hydrolix as a destination, you will need to do the following on your target Hydrolix environment:
1. Create a Project/Table.
2. Create a Transform.
Configure Hydrolix as LDS Destination
1. Select Hydrolix in the DESTINATION drop-down menu.
2. Configure the fields as described in Configuration Fields.
3. Click SAVE.
Configuration Fields
Field | Description |
---|---|
STREAMING API HOSTNAME | Hostname of your Hydrolix Streaming API. This value will be used in the URL https://<hydrolix-streaming-api-hostname>/ingest/event for log ingestion. |
PROJECT NAME | Hydrolix project name to include in the x-hdx-table HTTP header. |
TABLE NAME | Hydrolix table name to include in the x-hdx-table HTTP header. |
TRANSFORM SCHEMA NAME | (optional) Hydrolix transform schema name to include in the x-hdx-transform HTTP header. |
AUTHORIZATION HEADER VALUE | Authorization header value to use when sending logs (e.g., Basic <Base64-encoded username and password>, Bearer <Your API key>). |
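Putting the fields together, the ingest call LDS makes is conceptually equivalent to the sketch below. The hostname, project/table, transform, credentials, and event payload are all placeholders, and the combined project.table form of the x-hdx-table header is an assumption based on Hydrolix’s convention.

```python
# Hypothetical reproduction of an LDS-style ingest request to the Hydrolix Streaming API.
import requests

host = "<hydrolix-streaming-api-hostname>"     # STREAMING API HOSTNAME
headers = {
    "x-hdx-table": "my_project.my_table",      # PROJECT NAME and TABLE NAME (assumed <project>.<table> form)
    "x-hdx-transform": "my_transform",         # TRANSFORM SCHEMA NAME (optional)
    "Authorization": "Bearer <your-api-key>",  # AUTHORIZATION HEADER VALUE
    "Content-Type": "application/json",
}
events = [{"timestamp": "2024-03-24T15:19:31Z", "sc_status": 200}]  # placeholder event

resp = requests.post(f"https://{host}/ingest/event", headers=headers, json=events)
print(resp.status_code, resp.text)
```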
Origin Storage
You can store your log files on the Origin Storage platform. Origin Storage is a distributed storage service operated by Edgio.
Standard fees apply for using Origin Storage.
Prerequisites
Origin Storage must be enabled for the account selected in the SHORTNAME field.
Configure the Location
1. Select Origin Storage in the DESTINATION drop-down menu.
2. Configure the fields described in Origin Storage Configuration Fields.
3. Click SAVE. If Origin Storage is not enabled for the selected shortname, you will see a message when you attempt to save the configuration. Contact your Account Manager to enable Origin Storage for the shortname.
Origin Storage Configuration Fields
Field | Description |
---|---|
STORAGE ACCOUNTS | The Origin Storage account where you want to store logs. By default, logs are stored under the same account that owns the LDS configuration. |
Data Sampling
Data Sampling allows you to control the volume of delivered log data by specifying the percentage of log lines to be delivered for each status code group (e.g., 1xx, 2xx, 3xx).
Slide the circle to select the percentage of log volume to deliver for each status code group. The specified percentage is displayed above the circle.
The valid range for sampling rates is 0 to 100, where:
- 0: all data is filtered out.
- 100: no filtering is applied (all data is delivered).
- Any value in between represents the percentage of log lines to be delivered.
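Conceptually, each group’s rate acts as an independent per-line delivery probability, as in this small illustration (not Edgio’s implementation):

```python
# Illustration of per-status-group sampling: a rate of 50 delivers roughly half
# of the lines in that group; 0 drops everything, 100 delivers everything.
import random

sampling_rates = {"2xx": 100, "3xx": 50, "4xx": 100, "5xx": 100}  # example setup

def is_delivered(status_code: int) -> bool:
    group = f"{status_code // 100}xx"
    rate = sampling_rates.get(group, 100)
    return random.uniform(0, 100) < rate
```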
Personally Identifiable Information
Edgio’s Log Delivery Service conforms to General Data Protection Regulations (GDPR) requirements.
You can configure logs to include the following fields, which contain Personally Identifiable Information (PII):
- cs-cookie
- cs-uri
- so-src-uri
Sign PII Agreements
Per GDPR, you must explicitly indicate that you understand the risks associated with the PII fields.
When you access Log Delivery Service, you will see a message that describes the risks involved.
Click the Agree button to indicate you agree.
- If you do not agree to the terms and conditions, you cannot view any configurations.
- Non-Company Admin users can sign agreements only for the company to which they belong.
- Company Admin users can sign agreements for child companies as well.
Fields
Log Delivery Service Configuration
Log Delivery Service configuration fields are attributes of a Log Delivery Service configuration and are not to be confused with log fields (see Log File Fields), which appear in log files.
Field or Section | Description |
---|---|
CONFIGURATION NAME | Customer-assigned configuration name. |
SHORTNAME | The shortname to which the configuration applies. |
SERVICE TYPE | Delivery service for which logs will be produced. (HTTP or MMD_Live_Ingest). |
Delivery Destination | See Delivery Destination Fields. |
Delivery Options | See Delivery Options Fields. |
Delivery Destination
Field or Section | Description |
---|---|
STORAGE TYPE | Log file location. Possible values: Amazon S3, Custom HTTPS endpoint, Datadog, Google Cloud Storage, Hydrolix, Origin Storage. If you change the location from Amazon S3 to Origin Storage, you will see a message about applicable fees. |
STORAGE ACCOUNT | The Origin Storage account where you want to store logs. By default logs are stored under the same account that owns the Log Delivery Service configuration. |
Delivery Options
Field | Description |
---|---|
DIRECTORY LAYOUT | The DIRECTORY LAYOUT property specifies the folder path within the destination storage where log files will be uploaded. It supports dynamic placeholders that are replaced with relevant information during file upload. Supported Dynamic Placeholders: {service_type}: Type of service for which logs are collected. {config_uuid}: UUID of the LDS configuration. {yyyy}, {MM}, {dd}: Resolve to year, month, and day respectively, based on the start of the time period of the log entries covered in the file, all in the UTC timezone. {yyyy_proc}, {MM_proc}, {dd_proc}: Resolve to year, month, and day respectively, using the timestamp that represents the time when the file was prepared by LDS for delivery, all in the UTC timezone. Default Value: {service_type}/{config_uuid}/{yyyy}/{MM}/{dd} It is not possible to combine the {yyyy_proc}, {MM_proc}, {dd_proc} and {yyyy}, {MM}, {dd} variables in the directory layout. |
FILE NAME TEMPLATE | The FILE NAME TEMPLATE property determines the naming convention for log files uploaded to your destination storage. It supports dynamic placeholders that are resolved during file creation. Supported Dynamic Placeholders: {shortname}: The account name for which the log entries have been collected. {request_end_date_time_from}: This timestamp represents the start of the time period covered by log entries in the file, formatted as {year}{month}{day}{hour}{minute}{second} in the UTC timezone. {request_end_date_time_to}: This timestamp represents the end of the time period covered by log entries in the file, formatted as {year}{month}{day}{hour}{minute}{second} in the UTC timezone. The time period covered by log entries in the file is not fixed and may vary based on LDS setup and processing requirements. While currently supporting 10-minute and hourly time periods, LDS may add support for new time periods in the future. {process_window_date_time}: The timestamp when the file was prepared for delivery, formatted as {year}{month}{day}{hour}{minute}{second} in the UTC timezone. {split_id}: ID assigned to the file, used for splitting large log files. When a file needs to be split to avoid exceeding the 1GB size limit, each part is given a unique split_id. The first split file is labeled 000, and subsequent splits are numbered sequentially (001, 002, and so on). If a file does not require splitting, the split_id remains 000. Log file size is measured before compression, so a log file may be split even though its compressed size is smaller than 1GB. {format}: Log file format, which can be either w3c or json_lines. {compression}: File compression format. Default Value: {shortname}_{request_end_date_time_from}-{request_end_date_time_to}.{process_window_date_time}_{split_id}.{format}.{compression} |
DATA FORMAT | Log data format: W3C (tab separated), JSON lines, TSV. |
DATA COMPRESSION | File compression method. Edgio encourages you to investigate available compression methods before deciding on a method. |
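To make the placeholders concrete, the sketch below resolves the default DIRECTORY LAYOUT and FILE NAME TEMPLATE values; the shortname, UUID, timestamps, and compression suffix are invented for illustration.

```python
# Resolving the default templates with hypothetical values.
directory_layout = "{service_type}/{config_uuid}/{yyyy}/{MM}/{dd}"
file_name_template = ("{shortname}_{request_end_date_time_from}-{request_end_date_time_to}"
                      ".{process_window_date_time}_{split_id}.{format}.{compression}")

values = {
    "service_type": "http",
    "config_uuid": "0f8e6b1c-1111-2222-3333-444455556666",  # placeholder UUID
    "yyyy": "2024", "MM": "03", "dd": "24",                 # start of covered period (UTC)
    "shortname": "myaccount",
    "request_end_date_time_from": "20240324151000",
    "request_end_date_time_to": "20240324152000",
    "process_window_date_time": "20240324152500",
    "split_id": "000",
    "format": "w3c",
    "compression": "gz",                                    # placeholder compression suffix
}

print(directory_layout.format(**values))
# http/0f8e6b1c-1111-2222-3333-444455556666/2024/03/24
print(file_name_template.format(**values))
# myaccount_20240324151000-20240324152000.20240324152500_000.w3c.gz
```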
Log File
HTTP
The following fields are available for you to include when you select HTTP as the SERVICE TYPE.
Field | Details | Sample Data |
---|---|---|
c-asn | (int64) The autonomous system number calculated based on client IP address. | 22822 |
c-city | (string) The City name derived from the client IP address using the IPGeo DB. | phoenix |
c-country | (string) The Country name derived from the client IP address using the IPGeo DB. | united states |
c-country-code | (string) The two-letter ISO 3166-1 alpha-2 country code derived from client IP address. | UK |
c-ip | (string) The Client IP Address (end-user). | 66.249.69.88, 2001:0db8:85a3:0000:0000:8a2e:0370:7334 |
c-port | (string) The client remote port number used for a connection. | 80832 |
c-state | (string) The State name derived from the client IP address using the IPGeo DB. | arizona |
cs-accept-language | (string) The value of the Accept-Language request header. | en-us * de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4 |
cs-cmcd | (string) The CMCD metric sent by a compatible chunk streaming media player as specified by CTA-5004, saved in query term URL-encoded format, regardless of the method used to ingest by the player. | bl%3D11700%2Cbr%3D1254%2Ccid%3D%22BBBTest%22%2Cd%3D4000%2Cdl%3D11700%2Cmtp%3D33200%2Cnor%3D%22bbb_30fps_640x360_1000k_10.m4v%22%2Cot%3Dv%2Crtp%3D2200%2Csf%3Dd%2Csid%3D%227bf27586-2389-4c78-9c3e-401d7d23e0ef%22%2Cst%3Dv%2Ctb%3D14932 |
cs-cookie | (string) The URL-encoded cookie HTTP request header. GDPR Personally Identifiable information is included. | InfoSG=2080446124.14348.0000 |
cs-custom-header1 | (string) The value of the request header specified in the log_request_header rewrite option. You can include the value of up to five custom headers as defined as log_request_header* fields in Caching and Delivery. | 989c57423fbb |
cs-custom-header2 | (string) The value of the request header specified in the log_request_header rewrite option. You can include the value of up to five custom headers as defined as log_request_header* fields in Caching and Delivery. | 989c57423fbb |
cs-custom-header3 | (string) The value of the request header specified in the log_request_header3 rewrite option. You can include the value of up to five custom headers as defined as log_request_header* fields in Caching and Delivery. | 342912cc5c96 |
cs-custom-header4 | (string) The value of the request header specified in the log_request_header4 rewrite option. You can include the value of up to five custom headers as defined as log_request_header* fields in Caching and Delivery. | 11064983-fa8a-4e06-87c5-60124b964b33 |
cs-custom-header5 | (string) The value of the request header specified in the log_request_header5 rewrite option. You can include the value of up to five custom headers as defined as log_request_header* fields in Caching and Delivery. | Fri,%2010%20Oct%202014%2000:51:51%20GMT |
cs-headers | (string) The value of the HTTP request headers specified in the log_req_header rewrite option. These headers are logged as key-value pairs in this field. If multiple headers are specified to be logged, each key-value pair is separated by a comma. The maximum size of this field is 2048 bytes. If the maximum size is exceeded, error=toolarge is logged. | hdr1=val_1,hdr2=val%20_2 |
cs-http-proto | (string) The version of the HTTP protocol sent from the client to the server. | HTTP/1.1, HTTP/2.0 |
cs-method | (string) The HTTP request method (GET, POST, and so on) sent from the client to the server. | GET, POST, HEAD |
cs-range | (string) The value of the Range header sent from the client to the server. URL-encoded. | bytes%20567312626-1030737749/4121700402 |
cs-referer | (string) The value of the Referrer header sent from the client to the server. URL-encoded. | https://support.apple.com/en-us/HT204283 |
cs-ssl-cipher | (string) The SSL/TLS cipher suite that the client supports, sent from the client to the server. | ECDHE-RSA-AES256-GCM-SHA384 |
cs-ssl-proto | (string) The SSL/TLS protocol version that the client supports, sent from the client to the server. | TLSv1.2 |
cs-uri | (string) The URL-encoded published URL that includes query strings. Includes GDPR Personally identifiable information. | http://dtvcdn11.dtvcdn.com/B003109030M3.ats?cid=003261089464&ct=1588467344 |
cs-uri-host | (string) The domain part of the Published URL. | dtvcdn11.dtvcdn.com |
cs-uri-noquery | (string) The URL-encoded published URL (query part excluded). | http://dtvcdn11.dtvcdn.com/B003109030M3.ats |
cs-user-agent | (string) The value of the User-Agent header in the request from the client to the server. URL-encoded. | DTV_VDM_0.01, appstored/1%20CFNetwork/1107.1%20Darwin/19.0.0 |
date | (string) The request end time (date part) in yyyy-MM-dd format (UTC time zone). | 2017-10-01 |
datetime | (int64) The request end time in yyyyMMddHHmmss format (UTC time zone). | 20210324151931 |
duration | (int64) The request duration in milliseconds. | 29298749 |
o-ip | (string) The IP address of the origin server that supplied the first byte of the response. Enable via the log_origin_ip_address option. | 69.179.9.82 |
s-dest-addr | (string) The IP address that the end user connects to. It is most often a virtual IP associated with a request router. In rare cases, when alternative request routing is configured, this IP address corresponds directly to a caching server. | 69.164.9.82 |
s-host | (string) The hostname of the server that received the request. | cds103.man.llnw.net |
s-ip | (string) The IP address of the edge-most server that received the request. | 69.164.9.82 |
s-pop | (string) The Edgio PoP name of the server that received the request. | eabc |
s-ttfb | (int32) The number of milliseconds between the CDN receiving the end-user request and writing the first byte of the response, as measured on the server. A value of 0 (zero) means the time was less than 1ms. | 56 |
sc-bytes | (int64) The number of response bytes, modified to include the packet and retransmit overhead. | 52431246 |
sc-content-length | (int64) The value of the Content-Length header in the response from the server to the client. | 4881818612 |
sc-content-type | (string) The value of the Content-Type header in the response from the server to the client. | application/octet-stream, video/x-m4v |
sc-headers | (string) The value of HTTP response headers specified in the log_resp_header rewrite option. These headers are logged as key-value pairs in this field. If multiple headers are specified to be logged, each key-value pair is separated by a comma. The maximum size of this field is 2048 bytes. If the maximum size is exceeded, error=toolarge is logged. | hdr1=val_1,hdr2=val%20_2 |
sc-request-id | (string) The unique ID that identifies a request (generated by the server and sent to the client in the X-LLNW-Dbg-Request-Id response debug header). | 49ae542085bb1d5b0c62a9b30c25cb7d |
sc-rexb | (int64) The number of bytes retransmitted in the response from the server to the client. | |
sc-rtt | (int64) The client socket smoothed round-trip time in microseconds. | 11812 |
sc-rttv | (int64) The client socket smoothed round-trip time variance in microseconds. | 250000 |
sc-status | (string) The HTTP status code in the response from the server to the client. In addition to standard Content Delivery status codes, the sc-status field may contain non-standard status codes: - 000 - An Edgio-specific status code returned when the origin sends no response, so there is no status code to log (for example when the client disconnects before the origin delivers the response). - 600 - An Edgio-specific status code indicating the origin returned a non-HTTP-compliant response, so a status code could not be obtained. For a list of standard status codes, see Response Codes in the Content Delivery User Guide. | 200, 206, 400 |
so-src-uri-noquery | (string) The URL-encoded source/origin URL that the published URL has been mapped to (query part excluded). | http://cmdist.dtvce.com/content/B003109030M3.ats |
so-src-uri | (string) The URL-encoded source/origin URL that the published URL has been mapped to. | http://cmdist.dtvce.com/content/B003109030M3.ats?cid=003261089464&ct=1588467344 |
time | (string) The request end time (time part) in HH:mm:ss.SSS format (UTC time zone). | |
x-firstnode-cached | (int32) Integer value indicating whether a cache hit occurred on the server that received the request. Possible values: 0 - a cache miss occurred; 1 - a cache hit occurred. Customers can use this field to calculate cache efficiency in terms of requests. This field reflects a hit or miss on only the first cache node involved; it does not reflect cache hits and misses for the entire CDN. | 0 |
x-log-key-value | (string) The string representation of the key-value pairs configured via the log_keyval rewrite option, the Arc Light llnw.log_keyval() builtin, and the log_keyval_header global option. This column is limited to 1024 bytes. Edgio configures the EdgePrism key-value pairs on behalf of customers. Please contact your Account Manager if you are interested in this feature. | dscp=34,partner=eg,proto=ssl,arclight=arc2,policyid=724 |
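As one example of putting these fields to use, x-firstnode-cached can be aggregated into a first-node cache hit ratio. The sketch below assumes a tab-separated (W3C) log file whose selected fields include x-firstnode-cached at a known column position; the file name and column index are placeholders.

```python
# Hypothetical cache-efficiency calculation over a W3C (tab-separated) log file.
def first_node_hit_ratio(path: str, cached_col: int) -> float:
    """cached_col is the zero-based position of x-firstnode-cached in your field list."""
    hits = total = 0
    with open(path) as f:
        for line in f:
            if line.startswith("#"):       # skip W3C header/directive lines
                continue
            fields = line.rstrip("\n").split("\t")
            total += 1
            hits += int(fields[cached_col] == "1")
    return hits / total if total else 0.0

print(first_node_hit_ratio("cdn_log.w3c", cached_col=5))  # placeholder file and column
```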
MMD Live Ingest
The following fields are available for you to include when you select MMD_LIVE_INGEST as the SERVICE TYPE.
Field | Details |
---|---|
audio-bytes | (int64) The number of ingested audio bytes. |
egress-bytes | (int64) The number of total possible egress bytes for all output formats. |
end-time-ms | (int64) The request end time (in milliseconds). |
frames | (int32) The number of transcoded frames. |
ingest-bytes | (int64) The number of ingested bytes. If is-transcode == 0 then total-bytes else 0. |
is-transcode | (int32) Indicates whether or not the stream is transcoded (1 - transcoding, 0 - transmuxing). |
num-output-formats | (int32) The number of output formats configured for the stream. |
slot-name | (string) The base name of the stream. |
slot-profile | (string) The name of the stream profile. |
start-time-ms | (int64) The request start time (in milliseconds). |
total-bytes | (int64) The total number of ingested bytes. |
transcode-bytes | (int64) The number of transcoded bytes. |
transcode-pixels | (int64) The number of transcoded pixels. |
Retrieve Log Files from Origin Storage
You can retrieve your files from Edgio Origin Storage using Origin Storage API calls in conjunction with an HTTP GET request or via the Origin Storage Management Console.
API
All methods in this section are in the Origin Storage JSON-RPC API interface. We present essential information here; for detailed information about each method, see the Origin Storage API Reference Guide.
This section describes the methods you need to download files.
Log In
Use the login method available in the Origin Storage JSON-RPC interface. It returns a token string that allows you to make authenticated calls in the JSON-RPC interface. There are several ways to log in, but we will use the simplest.
login Signature
login( username, password, detail )
Parameters
- username: Your API user name.
- password: Your API password.
- detail: A boolean indicating whether you want simple data or more extensive data returned.
List Log Files
To list log files, call the listFile method available in the Origin Storage JSON-RPC interface.
listFile Signature
listFile( token, dir, pageSize, cookie, stat )
Parameters
- token: The token returned from the login call.
- dir: A string representing the directory for which you want a list of files.
- pageSize: A number indicating the number of results (files) to return.
- cookie: A number used for making multiple listFile calls for paginated results.
- stat: A boolean indicating whether to include file details.
Obtain a Protected Download URL
To eliminate security risks, you can obtain a time-based URL to download your log files, using the mediaVaultUrl method available in the Origin Storage JSON-RPC interface. First, use the mediaVaultUrl method to obtain a secure download URL, and then use an HTTP GET request to download the file.
mediaVaultUrl Signature
mediaVaultUrl( token, path, expiry )
Parameters
- token: The token returned from the login call.
- path: The file for which to generate a MediaVault URL.
- expiry: Download URL expiry for an object, in seconds.
The method returns this object:
```python
{
    "code": 0,
    "download_url": "http://cs-download.limelight.com/<path to file>",
    "message": "success",
    "preview_url": "http://cs-download.limelight.com/<path to file>",
}
```
Do not attempt to directly download content from Origin Storage using FTP, SFTP, FTPS, SCP, or rsync because doing so can negatively impact other system processes. To download content, use an HTTP GET request.
API End-to-End Example
For simplicity, we’ve omitted error checking. The code sample uses Python.
```python
import jsonrpclib
import requests

url = 'http://{Account name}.upload.llnw.net/jsonrpc'
api = jsonrpclib.Server(url)
res = api.login(yourUser, yourPassword, True)  # your API credentials
token = res[0]

'''
User-defined variables
'''
storage_log_dir = '/{account name}/_livelogs/'
pageSize = 10000           # page size for listing log files
files_to_download = []     # log files to download
media_vault_expiry = 60    # expiry time for mediaVaultUrl request
mv_errors = {-1: "Internal error", -2: "Path exists and is a directory", -8: "Invalid path",
             -34: "Invalid expiry", -60: "Service is disabled or unavailable", -10001: "Invalid token"}

'''
Function to examine files returned from calls to listFile.
Based on a condition that you determine, you write file names to a list
of files that will later be downloaded.
This simple example looks for file names that contain the number 2.
'''
def parse_list(file_list):
    for log_file in file_list:
        name = log_file['name']
        if name.find('2') > -1:
            files_to_download.append(name)
            print(log_file['name'])

'''
List log files. This is a simplistic approach for demonstration purposes.
Customers might want to try a multi-threaded approach because the number
of files can be quite large.
'''
results = api.listFile(token, storage_log_dir, pageSize, 0, True)
file_list = results['list']
if len(file_list) > 0:
    parse_list(file_list)
    cookie = results['cookie']
    while cookie > 0:
        results = api.listFile(token, storage_log_dir, pageSize, cookie, True)
        file_list = results['list']
        parse_list(file_list)
        cookie = results['cookie']

'''
Download files. This is a simplistic approach for demonstration purposes.
Customers might want to try a multi-threaded approach for a large number
of files to download.
'''
for file_name in files_to_download:
    log_path = storage_log_dir + '/' + file_name
    mvu = api.mediaVaultUrl(token, log_path, media_vault_expiry)
    if mvu['code'] != 0:
        print("Error attempting to call 'mediaVaultUrl'.\nCode: " + str(mvu['code']) + ": " + mv_errors[mvu['code']])
        continue
    mv_download_url = mvu['download_url']
    # Use the requests library to make the download
    response = requests.get(mv_download_url)
    # Upon non-success write a line to your errors file
    if response.status_code != 200:
        print("Unable to download " + file_name + ". Status code: " + str(response.status_code))
```
Manual Download
You can download a log file using the Origin Storage Management Console.
Begin by logging into the Edgio Control Portal, then follow these steps:
- Select “Manage”, followed by “Origin Storage Console.”
- Navigate to the folder that contains the file you want to download.
- Click the download icon. Your browser downloads the file.
Download via Python
This Python script deletes each file after downloading it and deletes a directory that becomes empty. The max_files variable limits the number of files downloaded per session; set it to 0 for unlimited downloads.
```python
#!/usr/bin/env python
import logging, sys, csv, itertools
from multiprocessing import Pool
import requests
import time
import json
import threading

'''
Author: spandey
Unofficial Sample. Provided AS IS. WITHOUT ANY WARRANTY OR CONDITIONS.
Uses Python 3
'''
LOG_FILENAME = 'LDSDownloadSession.log'
FileList = 'DLFiles.log'
logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG, format='%(asctime)s %(levelname)s-%(filename)s:%(message)s')
logger = logging.getLogger(__name__)

class StatError(RuntimeError):
    def __init__(self, arg):
        self.args = arg

class ListPathError(RuntimeError):
    def __init__(self, arg):
        self.args = arg

jsonRPC_Endpoint = ''
token = ''
cookie = ''
numFiles = 0
numDirs = 0
totalBytes = 0
oldFileList = []
theFileList = []
dirList = []
threads = []
# max files to download per session; set to 0 for unlimited download
max_files = 5

'''
User-defined variables
'''
storage_log_dir = '/_livelogs/http/<change with the base path>'
pageSize = 10000           # page size for listing log files
files_to_download = []     # log files to download
media_vault_expiry = 60    # expiry time for mediaVaultUrl request
mv_errors = {-1: "Internal error", -2: "Path exists and is a directory", -8: "Invalid path",
             -34: "Invalid expiry", -60: "Service is disabled or unavailable", -10001: "Invalid token"}

'''
Function to examine files returned from calls to listFile.
Based on a condition that you determine, you write file names to a list
of files that will later be downloaded.
This simple example looks for file names that contain the number 2.
'''
def parse_list(file_list):
    for log_file in file_list:
        name = log_file['name']
        if name.find('2') > -1:
            files_to_download.append(name)
            print(log_file['name'])

def getFileListing(token, _dirname_, res):
    numDirs = len(res['dirs'])
    numFiles = len(res['files'])
    _directories_ = res['dirs']
    print("Total directory count: " + str(numDirs))
    print("Total file count: " + str(numFiles))
    # Delete the dir in case it is empty and is not the base path
    if numDirs == 0 and numFiles == 0 and _dirname_.count('/') > 3:
        delp = '{"method":"deleteDir","id":1,"jsonrpc":"2.0","params":{"token":"' + token + '","path":"' + _dirname_ + '"}}'
        print("\nDeleting : " + delp)
        delpRes = requests.post(jsonRPC_Endpoint, data=delp)
        delpRes = json.loads(delpRes.text)
        delpCode = delpRes['result']
        if delpCode != 0:
            print("Error attempting to call deleteDir.\nCode: " + str(delpCode))

    for _dir_ in _directories_:
        dirName = _dirname_ + '/' + _dir_['name']
        listPath(token, dirName)

    # Listing files
    file_listing = res['files']
    conteggio = 0
    for file in file_listing:
        '''
        Download file. This is a single-threaded approach for simple use and
        demonstration purposes. Customers might want to try a multi-threaded
        approach for a large number of files to download.
        '''
        conteggio += 1
        log_path = _dirname_ + '/' + file['name']
        mvu = '{"method": "mediaVaultUrl", "id": 1, "params": {"token":"' + token + '", "path": "' + log_path + '", "expiry": ' + str(media_vault_expiry) + '}, "jsonrpc": "2.0"}'
        mvuRes = requests.post(jsonRPC_Endpoint, data=mvu)
        mvuRes = json.loads(mvuRes.text)
        code = mvuRes['result']['code']
        if code != 0:
            print("Error attempting to call 'mediaVaultUrl'.\nCode: " + str(code) + ": " + mv_errors[code])
        else:
            mv_download_url = mvuRes['result']['download_url']
            # grab the name of the file to write from the MediaVault URL
            lds_file_name = mv_download_url.rsplit("/", 1)[1].split("?")[0]
            print(mv_download_url, '\nFilename:' + lds_file_name)
            with open(lds_file_name, "wb") as f:
                # Use the requests library to make the download
                response = requests.get(mv_download_url, stream=True)
                # check & show download progress
                total_length = response.headers.get('content-length')
                if total_length is None:  # no content-length header
                    print("no content-length header found")
                    f.write(response.content)
                else:
                    dl = 0
                    total_length = int(total_length)
                    for data in response.iter_content(chunk_size=4096):
                        dl += len(data)
                        f.write(data)
                        done = int(50 * dl / total_length)
                        sys.stdout.write("\r[%s%s]" % ('|' * done, ' ' * (50 - done)))
                        sys.stdout.flush()
            # Delete the file just downloaded
            delu = '{"method":"deleteFile","id":1,"jsonrpc":"2.0","params":{"token":"' + token + '","path":"' + log_path + '"}}'
            print("\nDeleting : " + delu)
            deluRes = requests.post(jsonRPC_Endpoint, data=delu)
            deluRes = json.loads(deluRes.text)
            # Upon non-success write a line to your errors file
            if response.status_code != 200:
                print("Unable to download " + file['name'] + ". Status code: " + str(response.status_code))
        if conteggio == max_files:
            break

def listPath(token, _dirname_):
    '''
    List a path recursively.
    '''
    try:
        # Scan through the parent directory for files and sub-dirs
        listpathdata = '{"method": "listPath","id": 1,"params": {"token": "' + token + '","path": "' + _dirname_ + '","pageSize": ' + str(pageSize) + ',"cookie": "' + cookie + '","stat": true},"jsonrpc": "2.0"}'
        res = requests.post(jsonRPC_Endpoint, data=listpathdata)
        res = json.loads(res.text)
        print('======Listing Path for: ' + _dirname_)
        code = res['result']['code']

        if code != 0:
            msg = 'Error issuing listPath command on directory: ' + _dirname_
            msg += '\n Return code: ' + str(code)
            msg += '\n See API documentation for details.'
            logger.error('ListPathError' + msg)
            raise ListPathError(msg)

        theFileList = getFileListing(token, _dirname_, res['result'])

    except ListPathError as e:
        print(''.join(e.args))

    except StatError as e:
        print(''.join(e.args))

def main(host, username, password):
    global jsonRPC_Endpoint
    jsonRPC_Endpoint = host
    token = ''  # ensure token exists for the finally block even if login fails
    try:
        # Obtain token
        loginData = '{"method": "login","id": 0,"params": {"username": "' + username + '","password":"' + password + '","detail": true},"jsonrpc": "2.0"}'
        login = requests.post(jsonRPC_Endpoint, data=loginData)
        print(login.reason, login.headers)
        resp = json.loads(login.text)
        token = resp['result'][0]
        print('=======Token & User======\n', token)
        logger.debug('Logged In. Token Is: ' + token)

        # Persist the token for the session until logout or the time defined by 'expire'
        persistData = '{"method": "updateSession","id": 4,"params": {"token": "' + token + '", "expire": 0},"jsonrpc": "2.0"}'
        persist = requests.post(jsonRPC_Endpoint, data=persistData)
        persist = json.loads(persist.text)
        if persist['result'] == 0:
            logger.debug('Token Persisted! Token Is: ' + token)

        # Call the listPath method on storage
        listPath(token, storage_log_dir)
    except Exception as e:
        print(''.join(e.args))
        logger.error('Error Occurred While Logging In')
    finally:
        logoutData = '{"method": "logout","id": 1,"params": {"token": "' + token + '"},"jsonrpc": "2.0"}'
        logoutRes = requests.post(jsonRPC_Endpoint, data=logoutData)
        logoutRes = json.loads(logoutRes.text)
        if logoutRes['result'] == 0:
            logger.debug('Logged Out!')

if __name__ == "__main__":
    main('https://<shortname>-l.upload.llnw.net/jsonrpc2', '<user-vs>', "<password-vs>")
```