208 DataStage ETL Jobs in Ireland
Data Engineer
Posted today
Job Description
The Company:
From our roots in Ireland, CarTrawler has grown into the leading B2B technology provider of car rental and mobility solutions to the global travel industry. If you've ever booked a flight and seen the option to rent a car, that was probably us – but it's our people that make everything we do possible, and we're growing.
At CarTrawler, you'll find more than just a job. You'll find flexibility, meaningful impact, and a culture built by the people who live it every day. Our culture is built on high performance, genuine connection, and a shared commitment to making an impact, without sacrificing personal wellbeing. With flexible working models, meaningful time off, and dedicated growth opportunities, we enable people to do great work and feel good doing it.
We have a hybrid working policy with two mandatory days a week in our Dublin office, and you have the freedom to design a routine that supports your productivity and personal life. The office offers ample car parking, a heavily subsidized (KC Peaches) canteen, convenient proximity to the Luas, and access to EV charging stations.
Role Purpose:
We are seeking a Data Engineer on a 6-month fixed-term contract to develop and maintain our Snowflake data warehouse and data marts, designing and optimizing ETL processes and data models to ensure accuracy and scalability. The role requires strong skills in SQL, Python, and stored procedures, with hands-on use of Snowflake, Airflow (MWAA), Soda, and dbt. Working closely with Data Engineering and the wider P&T teams, you will build secure, high-performing data solutions that support business-critical initiatives.
Responsibilities & Accountabilities
- Build & Optimize Data Pipelines: Design, construct, and maintain robust, scalable ETL/ELT pipelines using dbt and Airflow to integrate new data sources and manage changes to SQL jobs.
- Troubleshoot issues with existing SQL/ETL processes and data loads, driving them through to resolution.
- Design and build extensible data models and integration solutions using tools such as native Snowflake functionality, Airflow, Soda, dbt, and AWS S3.
- Implement and enforce best practices for data quality, testing, and documentation to ensure data is accurate, consistent, and trustworthy.
- Continuously optimize our Snowflake data warehouse, refining data models and architecture to improve query performance, scalability, and cost-efficiency.
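To illustrate the data-quality enforcement described above, here is a minimal, hypothetical Python sketch of the row-level checks (not-null fields, uniqueness on a business key) that tools like Soda and dbt tests automate. The `booking_id` field and sample records are invented for illustration only:

```python
# Hypothetical sketch: the kinds of not-null and uniqueness checks that
# tools such as Soda or dbt tests automate over warehouse tables.
from collections import Counter

def quality_report(rows, key, required_fields):
    """Return a dict of data-quality failures for a batch of records."""
    failures = {"missing_key": 0, "null_fields": 0, "duplicate_keys": []}
    keys = []
    for row in rows:
        if row.get(key) is None:          # business key must be present
            failures["missing_key"] += 1
            continue
        keys.append(row[key])
        if any(row.get(f) is None for f in required_fields):
            failures["null_fields"] += 1  # required column is null
    # keys that appear more than once violate uniqueness
    failures["duplicate_keys"] = [k for k, n in Counter(keys).items() if n > 1]
    return failures

bookings = [
    {"booking_id": 1,    "supplier": "A",  "amount": 10.0},
    {"booking_id": 2,    "supplier": None, "amount": 5.0},   # null field
    {"booking_id": 2,    "supplier": "B",  "amount": 7.5},   # duplicate key
    {"booking_id": None, "supplier": "C",  "amount": 3.0},   # missing key
]
report = quality_report(bookings, "booking_id", ["supplier", "amount"])
```

In a real pipeline these assertions would live as declarative Soda checks or dbt tests and gate each load, rather than run as ad-hoc Python.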
Skills & Experience Required
- 3+ years of experience in a Data Engineering or similar role.
- Hands-on experience with building data pipelines, data models and implementing data quality.
- Experience accessing and manipulating structured and semi-structured data (JSON) in various cloud data environments, using any of the following technologies: Snowflake, Redshift, Hadoop, Spark, cloud storage, AWS, or consuming data from APIs.
- Expert proficiency in Python (or Scala) for data manipulation and scripting, plus advanced SQL and database stored procedures for complex querying and data modelling.
- Experience with orchestration and monitoring tools such as Airflow.
- Solid understanding of ETL/ELT design patterns, data modelling principles and database architecture.
- Experience with full-stack development and implementing continuous integration and automated tests (e.g. GitHub, Jenkins).
- Proven ability to work creatively and analytically in a fast-paced, problem-solving environment.
- Excellent communication (verbal and written) and interpersonal skills.
- Proven ability to communicate complex analysis in a clear, precise, and actionable manner.
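As a concrete illustration of the semi-structured (JSON) manipulation listed above, the sketch below is a pure-Python analogue of the flattening that Snowflake's LATERAL FLATTEN performs on nested arrays. The field names (`user_id`, `events`) and sample payload are hypothetical:

```python
# Pure-Python analogue of flattening semi-structured JSON into tabular
# rows -- the transformation Snowflake's LATERAL FLATTEN performs.
import json

def flatten_events(payload: str):
    """Yield one flat row per item in each record's nested 'events' array."""
    rows = []
    for record in json.loads(payload):
        for event in record.get("events", []):
            rows.append({
                "user_id": record["user_id"],   # repeated parent column
                "event_type": event["type"],    # exploded child columns
                "ts": event["ts"],
            })
    return rows

raw = '''[
  {"user_id": 1, "events": [{"type": "search", "ts": "2024-01-01"},
                            {"type": "book",   "ts": "2024-01-02"}]},
  {"user_id": 2, "events": [{"type": "search", "ts": "2024-01-03"}]}
]'''
table = flatten_events(raw)
```

In Snowflake itself the equivalent would be a `SELECT ... , value:type FROM t, LATERAL FLATTEN(input => t.payload:events)` over a VARIANT column.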
Research shows that individuals from underrepresented backgrounds often hesitate to apply for roles unless they meet every single qualification, while others may apply when they meet only a portion of the criteria. If you believe you have the skills and potential to succeed in this role, even if you don't meet every listed requirement, we encourage you to apply. We'd love to hear from you and explore whether you could be a great fit.
Data Engineer
Posted today
Job Description
Data Engineer – Dublin OR London (Hybrid)
Permanent, Full-time Role
€90,000 / £78,000 (approx.) + Benefits
Overview
We are seeking a skilled Data Engineer to join our client's product engineering team, supporting Business Intelligence and Data Science initiatives primarily using AWS technologies. You will collaborate closely with their corporate technology team to build, automate, and maintain AWS infrastructure.
This role requires a highly competent, detail-oriented individual who stays current with evolving data engineering technologies.
Key Responsibilities
- Collaborate with data consumers, producers, and compliance teams to define requirements for data solutions with executive-level impact.
- Design, build, and maintain solutions for Business Intelligence and Data Science on AWS, including:
- Data ingestion pipelines
- ETL/ELT processes (batch and streaming)
- Curated data products
- Integrations with third-party tools
- Support and enhance the data lake, enterprise data catalog, cloud data warehouse, and data processing infrastructure.
- Provision and manage AWS services and infrastructure as code using Amazon CDK.
- Provide input on product/vendor selection, technology strategies, and architectural design.
- Identify and implement improvements to reduce waste, complexity, and redundancy.
- Manage workload efficiently to meet service levels and KPIs.
- Execute incident, problem, and change management processes as required.
Qualifications
- Degree in Information Systems, Computer Science, Statistics, or a related quantitative field.
- 3+ years experience with Spark in production environments, handling batch and stream processing jobs. Exposure to Ray is advantageous.
- 3+ years experience with cloud data warehousing tools such as Snowflake, Redshift, BigQuery, or ClickHouse.
- Expert SQL skills, with exposure to HiveQL.
- Proficiency in Java, Scala, Python, and TypeScript programming.
- Strong understanding of AWS security mechanisms, particularly relating to S3, Kinesis, EMR, Glue, and LakeFormation.
- Experience with GitHub, DataDog, and AWS.
- Proven ability to learn and apply open-source tools independently.
- Strong ownership mindset and proactive approach.
Data Engineer
Posted today
Job Description
We are seeking a highly skilled Azure Data Engineer with strong experience in Dynamics 365 to join our team and support the delivery of our Dynamics 365 implementation. The ideal candidate will have extensive experience in Azure and Azure Data Factory and a strong background in integrating systems using various technologies and patterns. This role involves creating API interfaces using Azure Integration Services and integrating with Dynamics 365, leveraging Dataflows, OData, and other patterns.
Key Responsibilities:
- Design and implement scalable ETL/ELT pipelines using Azure Data Factory (ADF), including mapping data flows and parameterized pipelines.
- Integrate with Microsoft Dataverse and Dynamics 365 using ADF's native connectors, OData endpoints, and REST APIs.
- Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions.
- Develop and optimise dataflows for transformation logic, including column reduction, lookups, and upsert strategies.
- Implement delta load strategies and batch endpoints to manage large-scale data synchronisation efficiently.
- Ensure data consistency and integrity across systems by leveraging unique identifiers and business keys.
- Collaborate with business analysts and architects to translate business requirements into technical solutions.
- Monitor, troubleshoot, and optimise pipeline performance using ADF's built-in diagnostics and logging tools.
- Contribute to data governance, security, and compliance by implementing best practices in access control and data handling.
- Collaborate on the development of new standards and practices to improve the quality, capability, and velocity of the team.
- Mentor and coach junior team members new to Azure, ensuring the team builds up a strong capability.
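A minimal sketch of the upsert-by-business-key pattern referenced in the responsibilities above, using a hypothetical `account_no` business key. A real implementation would use ADF mapping dataflows or Dataverse batch endpoints rather than in-memory dicts; this only illustrates the merge semantics:

```python
# Hypothetical sketch of an upsert (merge-by-business-key) strategy of
# the kind used for delta loads between a source system and Dataverse.
def upsert(target: dict, delta: list, key: str) -> dict:
    """Merge a delta batch into target, keyed on a unique business key.

    Rows with a known key are updated field-by-field; unseen keys are
    inserted as new records.
    """
    for row in delta:
        # merge the delta row over any existing record with the same key
        target[row[key]] = {**target.get(row[key], {}), **row}
    return target

accounts = {"A-1": {"account_no": "A-1", "name": "Acme", "city": "Dublin"}}
delta = [
    {"account_no": "A-1", "city": "Cork"},      # update existing record
    {"account_no": "A-2", "name": "Beta Ltd"},  # insert new record
]
accounts = upsert(accounts, delta, "account_no")
```

Keying the merge on a stable business key (rather than a surrogate ID) is what keeps repeated delta loads idempotent, which is why the responsibilities above stress unique identifiers and business keys.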
Requirements:
- ADF Expertise. Proficient in building pipelines, triggers, and mapping dataflows. Experience with parameterisation, error handling, and pipeline orchestration
- OData & API Integration. Strong understanding of the OData protocol. Experience integrating with REST APIs
- Dataverse & Dynamics 365. Hands-on experience with Dataverse schema, connector configuration, and data model mapping. Familiarity with D365 entity relationships
- Data Modelling. Understanding of conceptual, logical, and physical data models.
- SQL & Scripting. Strong SQL skills for data extraction and transformation
- Testing & Automation. Experience with end-to-end testing of data pipelines and automation of data validation processes
- Documentation & Collaboration. Ability to document data flows, transformation logic, and integration patterns. Strong communication skills for cross-functional collaboration.
- Azure Developer certification is highly desirable.
- Excellent problem-solving skills and the ability to work collaboratively in a team environment.
- Dynamics 365 experience is a must.
Data Engineer
Posted today
Job Description
Role Description
The Data Engineer is responsible for designing, building, and maintaining scalable data pipelines and architectures that enable efficient data collection, processing, and analysis. This role ensures that high-quality, reliable data is available to support business intelligence, analytics, and machine learning initiatives. The ideal candidate is technically strong, detail-oriented, and passionate about building robust data systems that transform raw data into actionable insights.
Key Responsibilities
- Design, develop, and optimize data pipelines, ETL/ELT processes, and workflows for structured and unstructured data.
- Build and maintain scalable data architectures that support data warehousing, analytics, and reporting needs.
- Integrate data from multiple sources such as APIs, databases, and third-party systems into centralized data platforms.
- Collaborate with data analysts, data scientists, and business teams to understand data requirements and ensure data accuracy and availability.
- Develop and enforce best practices for data governance, security, and quality assurance.
- Monitor, troubleshoot, and optimize data processes for performance and cost efficiency.
- Implement data validation, cleansing, and transformation procedures to maintain data integrity.
- Work with cloud platforms (e.g., AWS, Azure, GCP) to manage data storage, orchestration, and automation tools.
- Create and maintain documentation for data models, data flow diagrams, and pipeline configurations.
- Support the development of analytics and machine learning pipelines by providing clean and well-structured datasets.
- Collaborate with DevOps teams to deploy, scale, and maintain data infrastructure in production environments.
- Continuously improve data engineering practices through automation, monitoring, and innovation.
- Stay updated on emerging technologies and trends in data architecture, big data, and cloud computing.
Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field.
- 2–5 years of experience in data engineering, data warehousing, or database development.
- Strong proficiency in SQL and at least one programming language (Python, Java, or Scala preferred).
- Hands-on experience with ETL tools and frameworks (e.g., Apache Airflow, dbt, or Talend).
- Experience with big data technologies such as Spark, Hadoop, or Kafka.
- Familiarity with cloud-based data services (AWS Redshift, Google BigQuery, Azure Synapse, or Snowflake).
- Solid understanding of data modeling, schema design, and database management (relational and NoSQL).
- Knowledge of APIs, data integration, and data streaming methodologies.
- Strong problem-solving, analytical, and debugging skills.
- Excellent collaboration and communication abilities to work cross-functionally.
- Experience with containerization tools (Docker, Kubernetes) and CI/CD pipelines is a plus.
- Commitment to building efficient, scalable, and reliable data systems that support business growth.
Data Engineer
Posted today
Job Description
Data Engineer - Flexible Working
Permanent | Hybrid | Dublin
€50,000 - €0,000 DOE
TechHeads is excited to bring you a new opportunity for a Data Engineer to join a growing team within a forward-thinking organisation that values collaboration, innovation, and continuous improvement.
In this position, you will be responsible for gathering requirements from stakeholders and building complex and scalable reports that will be used by users throughout the organisation. You will work with modern tools such as, Power BI, DAX, Azure and more, allowing you to develop your technical skillset with industry relevant tech.
This full-time role, based in Dublin, will give you the opportunity to join an employee-focused organisation. They support a flexible working model of two days onsite, as well as a culture of internal progression, offering you excellent work-life balance and growth potential.
If you're looking for an impactful role where you can work with flexibility and modern technologies, this role is for you.
Required:
- 2+ years' experience developing interactive dashboards and reports in Power BI.
- 2+ years' experience working with SQL.
- Experience working with DAX for creating measures, calculated columns, and complex business logic.
- Experience with ETL or any Data Engineering related activities.
- Experience implementing security and RLS in Power BI and Azure.
- Strong analytical and problem-solving skills.
Benefits: Pension, Flexible Working and more
If you would like to be considered for this position, please share a copy of your updated CV to
Data Engineer
Posted today
Job Description
Data Engineer – 6 Month Contract, Hybrid, Dublin
What's Involved
- Build and maintain data pipelines using Snowflake.
- Optimise Snowflake architecture and models for performance and scalability.
- Translate business requirements into technical solutions in collaboration with analysts and architects.
- Implement data governance, access control, and security best practices.
- Monitor, troubleshoot, and fine-tune pipeline performance using ADF and Snowflake tools.
- Document data flows, transformations, and integrations clearly and consistently.
- Mentor and support junior engineers in data engineering best practices.
What's Needed
- 7+ years' experience in data engineering or integration roles.
- Strong Snowflake and SQL Server skills for data design and optimisation.
- Proven ability with ADF pipelines, including parameterisation and error handling.
- Solid grasp of data modelling and modern data-warehouse design.
- Experience with data testing, automation, and validation.
- Strong collaboration and documentation skills.
- Snowflake or Azure Data Engineer certification preferred.
Data Engineer
Posted today
Job Description
Data Engineer
Permanent
Dublin/Hybrid
Requirements
We are looking for an innovative data engineer who will lead the technical design and development of an Analytic Foundation. The Analytic Foundation is a suite of individually commercialized analytical capabilities that also includes a comprehensive data platform. These services will be offered through a series of APIs that deliver data and insights from various points along a central data store. This individual will partner closely with other areas of the business to build and enhance solutions that drive value for our customers.
Position Responsibilities:
As a Data Engineer within the Advanced Analytics team, you will:
• Play a large role in the implementation of complex features
• Push the boundaries of analytics and powerful, scalable applications
• Build and maintain analytics and data models to enable performant and scalable products
• Ensure a high-quality code base by writing and reviewing performant, well-tested code
• Mentor junior engineers and teammates
• Drive innovative improvements to team development processes
• Partner with Product Managers and Customer Experience Designers to develop a deep understanding of users and use cases and apply that knowledge to scoping and building new modules and features
• Collaborate across teams with exceptional peers who are passionate about what they do
Ideal Candidate Qualifications:
• 4+ years of full stack engineering experience in an agile production environment
• Experience leading the design and implementation of large, complex features in full-stack applications
• Ability to easily move between business, data management, and technical teams; ability to quickly intuit the business use case and identify technical solutions to enable it
• Experience leveraging open source tools, predictive analytics, machine learning, Advanced Statistics, and other data techniques to perform analyses
• High proficiency in using Python or Scala, Spark, Hadoop platforms & tools (Hive, Impala, Airflow, NiFi, Sqoop), and SQL to build Big Data products & platforms
• Experience building and deploying production-level data-driven applications, data processing workflows/pipelines, and/or machine learning systems at scale in Java, Scala, or Python, and delivering analytics across all phases: data ingestion, feature engineering, modeling, tuning, evaluating, monitoring, and presenting
• Experience in cloud technologies like Databricks/AWS/Azure
• Strong technologist with proven track record of learning new technologies and frameworks
• Customer-centric development approach
• Passion for analytical / quantitative problem solving
• Experience identifying and implementing technical improvements to development processes
• Collaboration skills with experience working with people across roles and geographies
• Motivation, creativity, self-direction, and desire to thrive on small project teams
• Superior academic record with a degree in Computer Science or related technical field
• Strong written and verbal English communication skills
Data Engineer
Posted today
Job Description
Data Engineer
Dublin | Hybrid
€60,000 - €65,000 per annum
Is this the Data Engineer role for you?
Crone Corkill are assisting a consultancy as they look to add a Data Engineer to the team in order to work with one of their premier clients on a permanent basis.
Working with a newly formed analytics team, you'll play a key part in the technical design and development of the foundation, with the analytical capabilities also including their data platform. This will be offered through APIs that deliver data/insights from points across the data store, and you'll partner with cross-functional teams.
What will you do as a Data Engineer?
- Build and maintain analytics and data models to enable performant and scalable products
- Ensure a high-quality code base by writing and reviewing well-tested code
- Partner with Product Managers and Designers to develop a deeper understanding of users & use cases
- Applying use case knowledge to scope and build new modules/features
What skills do you need as a Data Engineer?
- Full Stack engineering within an Agile production environment experience
- Building and deploying production-level, data-driven applications and data processing workflows at scale via Python, Java, Scala, etc.
- Delivering data ingestion, feature engineering, modelling and tuning analytics
- Python or Scala expertise
- Hadoop platforms and tools, such as Hive, Impala, Airflow, NiFi, and Sqoop
- SQL to build Big Data products/platforms
- Experience with the likes of Databricks, AWS or Azure
- Leveraging open source tools, predictive analytics and ML
Data Engineer
Posted today
Job Description
Job Title: Data Engineer
Job Type: Permanent
Job Location: Dublin, Ireland
Job Description:
Mandatory skills:
- High proficiency in using Python or Scala, Spark, Hadoop platforms & tools (Hive, Impala, Airflow, NiFi, Sqoop), and SQL to build Big Data products & platforms.
- Experience in cloud technologies like Databricks/AWS/Azure.
- Experience building and deploying production-level data-driven applications, data processing workflows/pipelines, and/or machine learning systems at scale in Java, Scala, or Python, and delivering analytics across all phases: data ingestion, feature engineering, modeling, tuning, evaluating, monitoring, and presenting.
Requirements
We are looking for an innovative data engineer who will lead the technical design and development of an Analytic Foundation. The Analytic Foundation is a suite of individually commercialized analytical capabilities that also includes a comprehensive data platform. These services will be offered through a series of APIs that deliver data and insights from various points along a central data store. This individual will partner closely with other areas of the business to build and enhance solutions that drive value for our customers.
Position Responsibilities:
As a Data Engineer within the Advanced Analytics team, you will:
• Play a large role in the implementation of complex features
• Push the boundaries of analytics and powerful, scalable applications
• Build and maintain analytics and data models to enable performant and scalable products
• Ensure a high-quality code base by writing and reviewing performant, well-tested code
• Mentor junior engineers and teammates
• Drive innovative improvements to team development processes
• Partner with Product Managers and Customer Experience Designers to develop a deep understanding of users and use cases and apply that knowledge to scoping and building new modules and features
• Collaborate across teams with exceptional peers who are passionate about what they do
Ideal Candidate Qualifications:
• Full-stack engineering experience in an agile production environment
• Experience leading the design and implementation of large, complex features in full-stack applications
• Ability to easily move between business, data management, and technical teams; ability to quickly intuit the business use case and identify technical solutions to enable it
• Experience leveraging open source tools, predictive analytics, machine learning, Advanced Statistics, and other data techniques to perform analyses
• High proficiency in using Python or Scala, Spark, Hadoop platforms & tools (Hive, Impala, Airflow, NiFi, Sqoop), and SQL to build Big Data products & platforms
• Experience building and deploying production-level data-driven applications, data processing workflows/pipelines, and/or machine learning systems at scale in Java, Scala, or Python, and delivering analytics across all phases: data ingestion, feature engineering, modeling, tuning, evaluating, monitoring, and presenting
• Experience in cloud technologies like Databricks/AWS/Azure
• Strong technologist with proven track record of learning new technologies and frameworks
• Customer-centric development approach
• Passion for analytical / quantitative problem solving
• Experience identifying and implementing technical improvements to development processes
• Collaboration skills with experience working with people across roles and geographies
• Motivation, creativity, self-direction, and desire to thrive on small project teams
• Superior academic record with a degree in Computer Science or related technical field
• Strong written and verbal English communication skills
Data Engineer
Posted today
Job Description
Apple Services Engineering (ASE) is the organisation responsible for products such as Apple Music, Podcasts, TV+, tvOS, App Store, iCloud, and many others. We, at ASE Analytics and Data Engineering, are responsible for collecting, analysing and reporting on insights derived from user and device generated data from across all Apple media services. Reporting plays a crucial role in this process, enabling teams at Apple to gain valuable insights and make informed decisions about their daily activities. Reporting involves integrating data from multiple data pipelines managed by different teams, which presents challenges such as achieving clear visibility into the dependencies and SLAs of the contributing flows. To address these challenges, we are establishing a new team in Dublin to develop internal tools that will enhance our ability to manage these complexities.
Description
We are looking for a Data Engineer to join our Analytics & Data Engineering Knowledge Graph team, designing and developing data pipelines. Our knowledge graph aims to unify insights on data processing, data lineage, and data infrastructure across the entire Services division, to generate a rich operational health view of our data pipelines. The graph will power operational excellence, driving incident analysis, guiding agentic resolution of issues, and enabling proactive avoidance of future incidents. You'll work alongside a Dublin-based team of software and data engineers committed to bringing the knowledge graph to life. You will have significant individual responsibility and influence over the direction of this critical service, and the opportunity to learn from and interact with a department of global teams, each with unique skill sets and operating in different time zones.
Minimum Qualifications
- Bachelor's degree in a scientific field (Computer Science, Computer Engineering, Mathematics preferred)
- Industry experience in a Data Engineering role.
- Experience working with Spark and other distributed data technologies (e.g. Hadoop) for building efficient, large-scale data pipelines.
Preferred Qualifications
- Growth mindset and ability to learn new technologies
- Good understanding of software development life cycle, version control, code reviews, testing, data quality tools and frameworks
- Familiar with Data Lake and Data Warehouse technologies
- Experience with job orchestrators and how they work (e.g. Airflow)
- Experience with streaming data pipelines (e.g. Kafka, Flink, Spark Streaming)
- Experience with a graph database (e.g. Neo4j, ArangoDB, TigerGraph)
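As a conceptual illustration of the streaming pipelines mentioned above, the sketch below implements a tumbling-window count in plain Python: the core aggregation pattern that engines such as Flink or Spark Structured Streaming apply to event streams (e.g. from Kafka). The event tuples and 10-second window size are invented for illustration:

```python
# Conceptual sketch of a tumbling-window count -- the basic aggregation
# pattern streaming engines (Flink, Spark Structured Streaming) apply
# to keyed event streams.
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Count events per (window_start, key) over fixed, non-overlapping windows.

    events: iterable of (timestamp_seconds, key) tuples.
    """
    counts = defaultdict(int)
    for ts, key in events:
        # floor each timestamp to the start of its window
        window_start = (ts // window_secs) * window_secs
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "play"), (3, "play"), (7, "pause"), (12, "play")]
windows = tumbling_window_counts(events, 10)
```

Real engines add what this sketch omits: incremental state, watermarks for late events, and exactly-once output; but the window assignment logic is the same.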