Full Job Description
- Design, develop, and test streaming pipelines for the Freddie Data Platform using Confluent Kafka (a minimal pipeline sketch follows this list)
- Develop data integration/engineering workflows on the Confluent Cloud and AWS platforms
- Develop best practices for data integration and streaming
- Leverage microservices to aggregate data streams and pipelines where needed
- Hands-on experience with AWS services such as Glue, EMR, Lambda, Step Functions, CloudTrail, CloudWatch, SNS, SQS, S3, VPC, EC2, RDS, and IAM
- Knowledge of application development lifecycles and continuous integration/deployment practices
- Experience developing on AWS and building cloud-native applications
- Experience with S3, DynamoDB, Kinesis, and Snowflake
- Experience developing data platforms on AWS
- Proficient in Agile software development methodology, processes, and practices
- Bachelor’s degree in a related field
- 3+ years of Kafka streaming experience required
- Strong, analytical, focused, and self-driven, with excellent problem-solving skills
- Minimum of 5 years of hands-on, in-depth experience developing and implementing AWS data integration/engineering workflows is required
- AWS Associate- or Professional-level certifications are a plus
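For context on the streaming pipeline work described above, the following is a minimal Kafka Streams sketch in Java. The application id, broker address, and topic names ("raw-events", "clean-events") are illustrative assumptions, not details from this posting.

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class LoanEventPipeline {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Application id and broker address are placeholders for illustration.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "loan-event-pipeline");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read raw events, drop empty records, and forward the rest to a cleaned topic.
        KStream<String, String> raw = builder.stream("raw-events");
        raw.filter((key, value) -> value != null && !value.isEmpty())
           .to("clean-events");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        // Close the topology cleanly on JVM shutdown.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}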
H1B candidates are fine.
Must-haves:
- Hands-on experience with Kafka, not just on the consumption side but on both the producer and consumer side; in addition to Kafka streaming, hands-on experience with the Kafka APIs (see the producer/consumer sketch after this list)
- 5 years of hands-on experience with AWS, not just leveraging or working in an AWS environment
- Java or Scala experience: 8-10 years of combined coding experience, with 8/10-level proficiency in one of those skills
Experience with Databricks would be a big plus.
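To illustrate the producer-and-consumer expectation above, here is a minimal sketch using the plain Kafka client APIs in Java. The broker address, topic, and group id are illustrative assumptions.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        // Broker address and topic are placeholders for illustration.
        String brokers = "localhost:9092";
        String topic = "demo-events";

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", brokers);
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Producer side: send one record, then flush so it is actually delivered.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>(topic, "key-1", "hello"));
            producer.flush();
        }

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", brokers);
        consumerProps.put("group.id", "demo-group");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Consumer side: read whatever is available within one poll interval.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of(topic));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d key=%s value=%s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}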
Very occasional additional hours may be needed, but this is a standard 40-hour week.