Velocity Staff

Senior Data Engineer

Location: US-KS-Overland Park
Posted Date: 11/30/2025 10:33 AM
Positions: 1

Overview

Velocity Staff, Inc. is working with our client in the Overland Park, KS area to identify a senior-level Data Engineer to join their Data Services Team. The right candidate will apply expertise in data warehousing, data pipeline creation and support, and analytical reporting. They will be responsible for gathering and analyzing data from several internal and external sources and for designing a cloud-focused data platform for analytics and business intelligence that reliably provides data to our analysts. This role requires a significant understanding of data mining and analytical techniques. The ideal candidate will have strong technical capabilities, business acumen, and the ability to work effectively with cross-functional teams.

Responsibilities

  • Work with data architects to understand current data models and build pipelines for data ingestion and transformation.
  • Design, build, and maintain a framework for pipeline observation and monitoring, focusing on the reliability and performance of jobs (see the sketch after this list).
  • Surface data integration errors to the proper teams, ensuring timely processing of new data.
  • Provide technical consultation for other team members on best practices for automation, monitoring, and deployments. 
  • Guide the team on infrastructure-as-code best practices, building deployment processes with technologies such as Terraform or AWS CloudFormation.
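As a rough illustration of the monitoring and error-surfacing responsibilities above, the sketch below wraps a pipeline step so that failures are published to an alerting topic and status/duration metrics go to CloudWatch. The metric namespace, topic ARN, and pipeline_step callable are hypothetical placeholders, not details of the client's actual stack.

    import time

    import boto3

    cloudwatch = boto3.client("cloudwatch")
    sns = boto3.client("sns")

    # Hypothetical values for illustration only.
    METRIC_NAMESPACE = "DataServices/Pipelines"
    ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:pipeline-alerts"


    def run_with_monitoring(job_name, pipeline_step):
        """Run one pipeline step, publish status/duration metrics,
        and surface errors to the owning team via SNS."""
        start = time.monotonic()
        try:
            pipeline_step()
            status_metric = "Success"
        except Exception as exc:
            status_metric = "Failure"
            # Surface the data integration error to the proper team.
            sns.publish(
                TopicArn=ALERT_TOPIC_ARN,
                Subject=f"Pipeline failure: {job_name}",
                Message=str(exc),
            )
            raise
        finally:
            cloudwatch.put_metric_data(
                Namespace=METRIC_NAMESPACE,
                MetricData=[
                    {
                        "MetricName": status_metric,
                        "Dimensions": [{"Name": "Job", "Value": job_name}],
                        "Value": 1,
                        "Unit": "Count",
                    },
                    {
                        "MetricName": "DurationSeconds",
                        "Dimensions": [{"Name": "Job", "Value": job_name}],
                        "Value": time.monotonic() - start,
                        "Unit": "Seconds",
                    },
                ],
            )

In practice, metrics published this way could feed the kind of dashboarding tools named under preferred qualifications below (e.g., Grafana).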

Qualifications

  • Bachelor’s degree in computer science, data science, or a related technical field, or equivalent practical experience
  • Proven experience with relational and NoSQL databases (e.g., Postgres, Redshift, MongoDB, Elasticsearch)
  • Experience building and maintaining AWS-based data pipelines; technologies currently utilized include AWS Lambda, Docker/ECS, and Amazon MSK
  • Mid/senior-level development in Python (Pandas/NumPy, Boto3, SimpleSalesforce); a minimal pipeline sketch follows this list
  • Experience with version control (Git) and peer code reviews
  • Enthusiasm for working directly with customer teams (business units and internal IT)
  • Preferred but not required qualifications include:
    • Experience with data processing and analytics using AWS Glue or Apache Spark
    • Hands-on experience building data-lake-style infrastructures with streaming data technologies, particularly Apache Kafka and the Kafka Connect ecosystem
    • Experience with data processing using Parquet and Avro
    • Experience developing, maintaining, and deploying Python packages
    • Familiarity with data visualization using tools such as Grafana, Power BI, Amazon QuickSight, and Excel
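For a concrete sense of the Python work named above (Pandas, Boto3, Parquet), here is a minimal Lambda-style handler that converts a raw CSV landing in S3 into Parquet in a curated bucket. The bucket name, event wiring, and column normalization are illustrative assumptions, not the client's actual pipeline.

    import io
    import urllib.parse

    import boto3
    import pandas as pd

    s3 = boto3.client("s3")

    # Hypothetical bucket for illustration only.
    CURATED_BUCKET = "example-curated-data"


    def handler(event, context):
        """Triggered by an S3 upload: read a raw CSV, normalize it
        with pandas, and land it as Parquet in a curated bucket."""
        record = event["Records"][0]["s3"]
        bucket = record["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["object"]["key"])

        obj = s3.get_object(Bucket=bucket, Key=key)
        df = pd.read_csv(obj["Body"])

        # Example transformation: normalize column names.
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]

        # to_parquet needs pyarrow (or fastparquet) available to the Lambda.
        buf = io.BytesIO()
        df.to_parquet(buf, index=False)

        out_key = key.rsplit(".", 1)[0] + ".parquet"
        s3.put_object(Bucket=CURATED_BUCKET, Key=out_key, Body=buf.getvalue())
        return {"rows": len(df), "output_key": out_key}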

