Data Engineer, Specialist
Malvern
Saturday, 02 May 2026
Provides expert-level data solutions by using software to process, store, and serve data to others. Tests data quality and optimizes data availability. Ensures that data pipelines are scalable, repeatable, and secure. Applies deep analytical skills to a variety of internal and external data. Leads, instructs, and mentors newer Data Engineering crew.

Role Summary

The Data Engineer, Specialist is responsible for designing, developing, and optimizing data pipelines and architectures to support scalable and efficient data processing. This role involves working with complex datasets, integrating multiple data sources, and ensuring high data quality and reliability.

Responsibilities

1. Develop and maintain efficient ETL (Extract, Transform, Load) pipelines that ingest, process, and transform large datasets from diverse sources.
2. Build and manage data storage solutions using relational and non-relational databases, ensuring high availability, performance, and scalability.
3. Work with APIs, third-party data sources, and internal databases to create unified datasets that provide actionable insights for business stakeholders.
4. Partner with business intelligence analysts, data scientists, and software engineers to design and implement data-driven solutions that meet business requirements.
5. Design optimized data schemas and structures that support analytical workloads and improve query performance.
6. Leverage orchestration tools such as Apache Airflow or Prefect to automate ETL workflows and improve operational efficiency.
7. Provide technical guidance, review code, and promote best practices within the data engineering team.

Qualifications and Skills

- Minimum of eight years of data analytics, programming, database administration, or data management experience.
- Undergraduate degree or equivalent combination of training and experience. Graduate degree preferred.
- Data modeling experience required.
- Proficiency in SQL and database technologies such as PostgreSQL, MySQL, Oracle, or SQL Server.
- Hands-on experience with big data frameworks such as Apache Spark, Hadoop, or Databricks.
- Strong programming skills in Python, Scala, or Java for data processing and automation.
- Experience with cloud platforms such as AWS, Azure, or GCP, particularly data services like AWS Glue, Redshift, BigQuery, or Snowflake.