Records that can’t be successfully written to S3 are written to a second Kinesis stream with the error message.
There are two file formats supported:
LZO: Records are treated as raw byte arrays. Elephant Bird’s BinaryBlockWriter class is used to serialize them as a Protocol Buffers array (so it is clear where one record ends and the next begins) before compressing them. The compression process generates both compressed .lzo files and small .lzo.index files (splittable LZO). Each index file contains the byte offsets of the LZO blocks in the corresponding compressed file, meaning that the blocks can be processed in parallel.
GZip: Records are treated as byte arrays containing UTF-8 encoded strings (whether CSV, JSON or TSV). Newlines are used to separate records written to a file. This format can be used with the Snowplow Kinesis Enriched stream, among other streams.
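The output format is chosen via the s3.format field of the HOCON configuration described further down this page. As a minimal sketch (the value shown is simply one of the two options):

# Excerpt from config.hocon: choose "lzo" or "gzip"
s3 {
  format = "gzip"
}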
The S3 Loader requires a single configuration file; an example can be found here, and its fields are described below on this page.
2.1. Docker image
docker run \
  -d \
  --name snowplow-s3-loader \
  --restart always \
  --log-driver awslogs \
  --log-opt awslogs-group=snowplow-s3-loader \
  --log-opt awslogs-stream=`ec2metadata --instance-id` \
  --network host \
  -v $(pwd):/snowplow/config \
  -e 'JAVA_OPTS=-Xms512M -Xmx1024M -Dorg.slf4j.simpleLogger.defaultLogLevel=WARN' \
  snowplow/snowplow-s3-loader:1.0.0 \
  --config /snowplow/config/config.hocon
2.2. Local JAR
java -jar snowplow-s3-loader-1.0.0.jar --config config.hocon
The JAR file can be found attached to the GitHub release.
Running the JAR requires the native LZO binaries to be installed. On Debian, for example, this can be done with:
sudo apt-get install lzop liblzo2-dev
The sink is configured using a HOCON file. These are the fields:
source: Choose kinesis or nsq as a source stream
sink: Choose between kinesis and nsq as a sink stream for failed events
aws.accessKey and aws.secretKey: Change these to your AWS credentials. You can alternatively leave them as “default”, in which case the DefaultAWSCredentialsProviderChain will be used.
kinesis.initialPosition: Where to start reading from the stream the first time the app is run. “TRIM_HORIZON” for as far back as possible, “LATEST” for as recent as possible, “AT_TIMESTAMP” for after the specified timestamp.
kinesis.initialTimestamp: Timestamp for “AT_TIMESTAMP” initial position
kinesis.maxRecords: Maximum number of records to read per GetRecords call
kinesis.region: The Kinesis region name to use.
kinesis.appName: Unique identifier for the app which ensures that if it is stopped and restarted, it will restart at the correct location.
kinesis.customEndpoint: Optional endpoint URL configuration to override the AWS Kinesis endpoints. This can be used to specify local endpoints when using tools such as Localstack.
kinesis.disableCloudWatch: Optional override to disable CloudWatch metrics for KCL
nsq.channelName: Channel name for the NSQ source stream. If more than one application is reading from the same NSQ topic at the same time, each must have a unique channel name in order to receive all the data from that topic.
nsq.host: Hostname for NSQ tools
nsq.port: HTTP port number for nsqd
nsq.lookupPort: HTTP port number for nsqlookupd
streams.inStreamName: The name of the input stream of the tool which you choose as a source. This should be the stream to which you are writing records with the Scala Stream Collector.
streams.outStreamName: The name of the output stream of the tool which you choose as a sink. This is the stream to which records are sent if the compression process fails.
streams.buffer.byteLimit: Whenever the total size of the buffered records exceeds this number, they will all be sent to S3.
streams.buffer.recordLimit: Whenever the total number of buffered records exceeds this number, they will all be sent to S3.
streams.buffer.timeLimit: If this length of time passes without the buffer being flushed, the buffer will be flushed. Note: with NSQ streams, only the record limit is taken into account; the other two options are ignored.
s3.region: The AWS region for the S3 bucket
s3.bucket: The name of the S3 bucket in which files are to be stored
s3.format: The format the app should write to S3 in (lzo or gzip)
s3.maxTimeout: The maximum amount of time the app attempts to PUT to S3 before it will kill itself
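Putting the fields above together, a config file might look something like the following sketch. All values are illustrative placeholders, the layout simply mirrors the field names listed above, and the sample config linked earlier remains the authoritative reference:

# Sketch of a config.hocon for a Kinesis-to-S3 setup (values are placeholders)
source = "kinesis"
sink = "kinesis"

aws {
  accessKey = "default"
  secretKey = "default"
}

kinesis {
  initialPosition = "TRIM_HORIZON"
  maxRecords = 10000
  region = "eu-west-1"
  appName = "snowplow-s3-loader"
}

streams {
  inStreamName = "enriched-good"
  outStreamName = "s3-loader-bad"
  buffer {
    byteLimit = 1048576   # flush once ~1 MB of records is buffered
    recordLimit = 200     # or once 200 records are buffered
    timeLimit = 60000     # or after this much time (milliseconds assumed)
  }
}

s3 {
  region = "eu-west-1"
  bucket = "my-archive-bucket"
  format = "gzip"
  maxTimeout = 300000     # max time (milliseconds assumed) to attempt the S3 PUT
}

If NSQ is chosen as the source or sink, the nsq fields described above would be filled in the same way.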
It’s possible to include Snowplow monitoring in the application. This is set up through the monitoring section at the bottom of the config file:
monitoring.snowplow.collectorUri: your Snowplow collector URI
monitoring.snowplow.appId: the app-id used in decorating the events sent
To disable Snowplow monitoring, just remove the entire monitoring section from the config.
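For illustration, the monitoring block might look like this in the HOCON file; the collector host and app-id are placeholders:

# Hypothetical monitoring section; replace the host and app-id with your own
monitoring {
  snowplow {
    collectorUri = "collector.example.com"
    appId = "s3-loader"
  }
}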