Velocity Staff

Senior Data Engineer - Data Streaming

Location US-Kansas City
Posted Date 4 months ago (5/24/2023 2:18 PM)

Velocity Staff, Inc. is currently working with our client in Overland Park, Kansas to identify a Data Engineer to join their team in a full-time, permanent role. The Data Engineer will apply expertise in data warehousing, data streaming, data pipeline creation and support, and analytical reporting to gather and analyze data from several internal and external sources; design a cloud-focused data platform for analytics and business intelligence; and reliably deliver data to analysts.


  • Work with Data Architects to understand current data models and build pipelines for data ingestion and transformation.
  • Design, build, and maintain a framework for pipeline observation and monitoring, with a focus on job reliability and performance.
  • Surface data integration errors to the appropriate teams, ensuring timely processing of new data.
  • Provide technical consultation for other team members on best practices for automation, monitoring, and deployments.
  • Provide technical consultation for the team on “infrastructure as code” best practices, building deployment processes with technologies such as Terraform or AWS CloudFormation.


  • This role requires a significant understanding of data mining, data streaming, and analytical techniques. The ideal candidate has strong technical capabilities, business acumen, and the ability to work effectively with cross-functional teams.
  • Bachelor’s degree in computer science, data science, or a related technical field, or equivalent practical experience.
  • Proven experience with relational and NoSQL databases (e.g., Postgres, Redshift, MongoDB, Elasticsearch).
  • Experience building and maintaining AWS-based data pipelines; technologies currently in use include AWS Lambda, Docker/ECS, and MSK.
  • Mid/senior-level development in Python (Pandas/NumPy, Boto3, SimpleSalesforce).
  • Experience with version control (git) and peer code reviews.
  • Enthusiasm for working directly with customer teams (business units and internal IT).
  • Preferred but not required qualifications include:
    • Experience with data processing and analytics using AWS Glue or Apache Spark.
    • Hands-on experience building data-lake-style infrastructure on streaming datasets (particularly with Apache Kafka).
    • Experience processing data with Parquet and Avro.
    • Experience developing, maintaining, and deploying Python packages.
    • Experience with Databricks.
    • Familiarity with data visualization using tools such as Grafana, Power BI, Amazon QuickSight, and Excel.
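To give candidates a feel for the day-to-day work, the pipeline duties above boil down to small, testable transformation steps that normalize raw events for the warehouse. The sketch below is purely illustrative (the field names and conversion logic are hypothetical, not taken from this posting) and uses only the standard library:

```python
import json
from datetime import datetime, timezone

def transform_record(raw: str) -> dict:
    """Normalize one raw JSON event for downstream analytics.

    Illustrative only: "orderId" and "amountCents" are hypothetical
    fields, not part of the client's actual schema.
    """
    event = json.loads(raw)
    return {
        "order_id": event["orderId"],              # snake_case for the warehouse
        "amount_usd": event["amountCents"] / 100,  # cents -> dollars
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

raw_event = '{"orderId": "A-100", "amountCents": 2599}'
clean = transform_record(raw_event)
print(clean["order_id"], clean["amount_usd"])  # A-100 25.99
```

In a production pipeline, a step like this would typically run inside an AWS Lambda or ECS task, with errors surfaced to monitoring rather than silently dropped.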




Not ready to apply? Connect with us to learn about future opportunities.