204 Big Data Hadoop jobs in Ireland
Big Data Engineer
Posted today
Job Description
About the Role:
We are looking for a Big Data Engineer to join one of our leading clients on an exciting project. The ideal candidate will have hands-on experience with large-scale data processing, Hadoop ecosystem tools, and cloud platforms, and will play a key role in building and optimizing data pipelines.
Tech Stack
- Programming Languages: Java / Scala / Python
- Data Processing Framework: Spark
- Big Data / Hadoop Frameworks: Hive, Impala, Oozie, Airflow, HDFS
- Cloud Experience: AWS, Azure, or GCP (services such as S3, Athena, EMR, Redshift, Glue, Lambda, etc.)
- Data & AI Platform: Databricks
Roles & Responsibilities
- Build, optimize, and maintain ETL pipelines using Hadoop ecosystem tools (HDFS, Hive, Spark).
- Collaborate with cross-functional teams to ensure efficient and reliable data processing workflows.
- Perform data modelling, implement quality checks, and carry out system performance tuning.
- Support modernization efforts, including migration and integration with cloud platforms and Databricks.
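For illustration, a minimal sketch of the kind of Hadoop-ecosystem ETL step described above, written in PySpark with Hive support. The paths, table, and column names are hypothetical, not details of the role:

```python
# Illustrative only: paths, database, and column names are made up.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-daily-etl")
    .enableHiveSupport()  # allows reading/writing Hive tables
    .getOrCreate()
)

# Extract: raw JSON events landed on HDFS by an upstream ingestion job
raw = spark.read.json("hdfs:///data/raw/orders/2025-10-19/")

# Transform: basic cleansing plus a derived partition column
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("created_at"))
)

# Load: append into a partitioned Hive table for downstream analytics
(clean.write
      .mode("append")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders"))
```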
Preferred Qualifications
- Hands-on experience with large-scale data processing and distributed systems.
- Strong problem-solving and analytical skills.
- Familiarity with CI/CD pipelines and version control tools is a plus.
Job Types: Full-time, Permanent
Pay: €70,000.00-€85,000.00 per year
Work Location: In person
Application deadline: 10/10/2025
Reference ID: IJP - SBDE - DUBIR - 01
Expected start date: 19/10/2025
Lead Big Data Engineer
Posted today
Job Description
Genesys empowers organizations of all sizes to improve loyalty and business outcomes by creating the best experiences for their customers and employees. Through Genesys Cloud, the AI-powered Experience Orchestration platform, organizations can accelerate growth by delivering empathetic, personalized experiences at scale to drive customer loyalty, workforce engagement, efficiency and operational improvements.
We employ more than 6,000 people across the globe who embrace empathy and cultivate collaboration to succeed. And, while we offer great benefits and perks like those of larger tech companies, our employees have the independence to make a larger impact on the company and take ownership of their work. Join the team and create the future of customer experience together.
The Genesys Cloud Analytics platform is the foundation on which decisions are made that directly impact our customers' experience as well as their customers' experiences. We are a data-driven company, handling tens of millions of events per day to answer questions for both our customers and the business. From new features to enable other development teams, to measuring performance across our customer base, to offering insights directly to our end-users, we use our terabytes of data to move customer experience forward.
In this role, you will be a technical leader, bringing your expertise to our Batch Analytics team, which manages EMR pipelines on Airflow that process petabytes of data. We're all about scale.
The ideal candidate will have a strong engineering background, will not shy away from the unknown, and will be able to turn vague requirements into something real. Our team's focus is to operationalize big data products and curate high-value datasets for the wider organization, and to build tools and services that expand the scope and improve the reliability of the data platform as our usage continues to grow.
Summary:
- Build and manage large scale pipelines using Spark and Airflow.
- Develop and deploy highly-available, fault-tolerant software that will help drive improvements towards the features, reliability, performance, and efficiency of the Genesys Cloud Analytics platform.
- Actively review code, mentor, and provide peer feedback.
- Engineer efficient, adaptable, and scalable architecture for all stages of the data lifecycle (ingest, streaming, structured and unstructured storage, search, aggregation) in support of a variety of data applications.
- Build abstractions and re-usable developer tooling to allow other engineers to quickly build streaming/batch self-service pipelines.
- Build, deploy, maintain, and automate large global deployments in AWS.
- Troubleshoot production issues and come up with solutions as required.
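As a flavour of the Spark-on-Airflow pattern this team describes, here is a hedged sketch of a daily DAG that submits a Spark step to an already-running EMR cluster and waits for it to finish. The DAG id, cluster id, and S3 script path are placeholders, not Genesys code, and it assumes Airflow 2.4+ with the Amazon provider installed:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.emr import EmrAddStepsOperator
from airflow.providers.amazon.aws.sensors.emr import EmrStepSensor

CLUSTER_ID = "j-XXXXXXXXXXXXX"  # placeholder for a running EMR cluster

SPARK_STEP = [{
    "Name": "sessionize-events",
    "ActionOnFailure": "CONTINUE",
    "HadoopJarStep": {
        "Jar": "command-runner.jar",
        "Args": ["spark-submit", "s3://example-bucket/jobs/sessionize.py", "{{ ds }}"],
    },
}]

with DAG(
    dag_id="batch_analytics_daily",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    submit = EmrAddStepsOperator(
        task_id="submit_spark_job",
        job_flow_id=CLUSTER_ID,
        steps=SPARK_STEP,
    )
    # The operator returns the new step ids via XCom; the sensor polls one of them
    wait = EmrStepSensor(
        task_id="wait_for_spark_job",
        job_flow_id=CLUSTER_ID,
        step_id="{{ task_instance.xcom_pull(task_ids='submit_spark_job')[0] }}",
    )
    submit >> wait
```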
This may be the perfect job for you if:
- You have engineered scalable software using big data technologies (e.g., Hadoop, Spark, Hive, Presto, Elasticsearch).
- You have a strong engineering background with ability to design software systems from the ground up.
- You have expertise in Java. Python and other object-oriented languages are a plus.
- You have experience in web-scale data and large-scale distributed systems, ideally on cloud infrastructure.
- You have a product mindset. You are energized by building things that will be heavily used.
- You are open to mentoring and collaborating with junior members of the team.
- You are adaptable and open to exploring new technologies and prototyping solutions at a reasonable cadence.
- You design not just with a mind for solving a problem, but also with maintainability, testability, monitorability, and automation as top concerns.
Technologies we use and practices we hold dear:
- Right tool for the right job over we-always-did-it-this-way.
- We pick the language and frameworks best suited for specific problems.
- Ansible for immutable machine images.
- AWS for cloud infrastructure.
- Automation for everything. CI/CD, testing, scaling, healing, etc.
- Hadoop and Spark for batch processing.
- Airflow for orchestration.
- Dynamo, Elasticsearch, Presto, and S3 for query and storage.
If a Genesys employee referred you, please use the link they sent you to apply.
About Genesys:
Genesys empowers more than 8,000 organizations worldwide to create the best customer and employee experiences. With agentic AI at its core, Genesys Cloud is the AI-Powered Experience Orchestration platform that connects people, systems, data and AI across the enterprise. As a result, organizations can drive customer loyalty, growth and retention while increasing operational efficiency and teamwork across human and AI workforces. To learn more, visit
Reasonable Accommodations:
If you require a reasonable accommodation to complete any part of the application process, or are limited in your ability to access or use this online application and need an alternative method for applying, you or someone you know may contact us at
You can expect a response within 24–48 hours. To help us provide the best support, click the email link above to open a pre-filled message and complete the requested information before sending. If you have any questions, please include them in your email.
This email is intended to support job seekers requesting accommodations. Messages unrelated to accommodation—such as application follow-ups or resume submissions—may not receive a response.
Genesys is an equal opportunity employer committed to fairness in the workplace. We evaluate qualified applicants without regard to race, color, age, religion, sex, sexual orientation, gender identity or expression, marital status, domestic partner status, national origin, genetics, disability, military and veteran status, and other protected characteristics.
Please note that recruiters will never ask for sensitive personal or financial information during the application phase.
Senior Big Data Engineer
Posted today
Job Description
New Roles - Senior Big Data Engineers in Galway - permanent or 11-month contract
Location: Galway - hybrid working - suited to candidates within commuting distance
Are you ready to take your software engineering career to the next level? Our client is on the lookout for a passionate and skilled Senior Software Engineer to join their innovative team. This is your chance to work on a highly strategic initiative focused on developing cutting-edge performance measurement and analytics software.
Why Join?
At our client, they believe in fostering a culture of collaboration, creativity, and continuous learning. Here, you'll have the opportunity to work with a diverse range of technologies, allowing you to leverage your existing skills while also expanding your knowledge.
What You'll Do:
Collaborate with a dynamic team of talented engineers to design and develop scalable, robust data platforms.
Utilise your experience in database technologies, particularly Snowflake and Oracle, to enhance our performance measurement capabilities.
Engage in Object-Oriented Software development using Java and apply your hands-on experience with Spark (Java or Scala).
Contribute to building efficient ETL data flows and ensure high-quality software delivery through DevOps practices.
Work in an agile scrum development environment, promoting innovative solutions and best practices.
The Expertise We're Looking For:
Bachelor's or Master's Degree in a technology-related field (e.g., Engineering, Computer Science) with at least 5 years of design and development experience.
Strong expertise in database technologies, particularly Snowflake and Oracle.
Proficiency in Object-Oriented Software development with Java.
Hands-on experience with Spark (Java or Scala).
Familiarity with AWS EMR is a plus.
Experience with Cloud technologies (AWS), including Docker and EKS.
Proven ability to build scalable and robust ETL data flows.
Strong design and analysis skills for large data platforms.
Familiarity with DevOps tools and practices (Maven, Jenkins, GitHub, Terraform, Docker).
Excellent interpersonal, communication, and collaboration skills.
What's In It For You?
A vibrant workplace that encourages sharing and collaboration.
Opportunities for growth and continuous learning in a supportive environment.
The chance to contribute to innovative projects that impact the financial industry.
If you are a motivated individual who thrives in a collaborative setting and is excited about leveraging your skills to drive success, we want to hear from you.
Join us and be a part of a team that values your input and expertise.
Apply Today
Embrace the opportunity to make a difference in a dynamic workplace. Let's shape the future together.
Adecco is a disability-confident employer. It is important to us that we run an inclusive and accessible recruitment process to support candidates of all backgrounds and all abilities to apply. Adecco is committed to building a supportive environment for you to explore the next steps in your career. If you require reasonable adjustments at any stage, please let us know and we will be happy to support you.
Adecco Ireland is acting as an Employment Agency in relation to this vacancy.
Data Engineer
Posted today
Job Description
The Company:
From our roots in Ireland, CarTrawler has grown into the leading B2B technology provider of car rental and mobility solutions to the global travel industry. If you've ever booked a flight and seen the option to rent a car, that was probably us; but it's our people that make everything we do possible – and we're growing!
At CarTrawler, you'll find more than just a job. You'll find flexibility, meaningful impact, and a culture built by the people who live it every day. Our culture is built on high performance, genuine connection, and a shared commitment to making an impact, without sacrificing personal wellbeing. With flexible working models, meaningful time off, and dedicated growth opportunities, we enable people to do great work and feel good doing it.
We have a hybrid working policy with two mandatory days a week in our Dublin office; beyond that, you have the freedom to design a routine that supports your productivity and personal life. The office offers ample car parking, a heavily subsidized (KC Peaches) canteen, convenient proximity to the Luas, and access to EV charging stations.
Role Purpose:
We are seeking a Data Engineer on a 6-month fixed-term contract to develop and maintain our Snowflake data warehouse and data marts, designing and optimizing ETL processes and data models to ensure accuracy and scalability. The role requires strong skills in SQL, Python, and stored procedures, with hands-on use of Snowflake, Airflow (MWAA), Soda, and dbt. Working closely with Data Engineering and wider P&T teams, you will build secure, high-performing data solutions that support business-critical initiatives.
Responsibilities & Accountabilities
- Build & Optimize Data Pipelines: Design, construct, and maintain robust, scalable ETL/ELT pipelines using dbt and Airflow to integrate new data sources and manage changes to SQL jobs.
- Troubleshoot issues with existing SQL/ETL processes and data loads, driving them through to resolution.
- Design and build extensible data models and integration solutions using tools such as Snowflake functionality, Airflow, Soda, dbt, and AWS S3.
- Implement and enforce best practices for data quality, testing, and documentation to ensure data is accurate, consistent, and trustworthy.
- Continuously optimize our Snowflake data warehouse, refining data models and architecture to improve query performance, scalability, and cost-efficiency.
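To make the Airflow-plus-dbt orchestration above concrete, a minimal sketch of a daily DAG that rebuilds and tests dbt models; the project path, DAG id, and target are assumptions, and a real MWAA deployment would add Snowflake connections and Soda checks:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/usr/local/airflow/dbt"  # hypothetical project location

with DAG(
    dag_id="snowflake_marts_daily",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Rebuild the dbt models that feed the Snowflake data marts
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run --target prod",
    )
    # Run dbt's tests before anything downstream consumes the marts
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test --target prod",
    )
    dbt_run >> dbt_test
```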
Skills & Experience Required
- 3+ years of experience in a Data Engineering or similar role.
- Hands-on experience with building data pipelines, data models and implementing data quality.
- Experience accessing and manipulating structured and semi-structured data (JSON) in various cloud data environments, using any of the following technologies: Snowflake, Redshift, Hadoop, Spark, cloud storage, AWS, or consuming data from APIs.
- Expert proficiency in Python (or Scala) for data manipulation and scripting, plus advanced SQL and database stored procedures for complex querying and data modelling.
- Experience with orchestration and monitoring tools such as Airflow.
- Solid understanding of ETL/ELT design patterns, data modelling principles, and database architecture.
- Experience with full-stack development and implementing continuous integration and automated tests (e.g., GitHub, Jenkins).
- Proven ability to work creatively and analytically in a fast-paced, problem-solving environment.
- Excellent communication (verbal and written) and interpersonal skills.
- Proven ability to communicate complex analysis in a clear, precise, and actionable manner.
Research shows that individuals from underrepresented backgrounds often hesitate to apply for roles unless they meet every single qualification, while others may apply when they meet only a portion of the criteria. If you believe you have the skills and potential to succeed in this role, even if you don't meet every listed requirement, we encourage you to apply. We'd love to hear from you and explore whether you could be a great fit.
Data Engineer
Posted today
Job Description
Data Engineer – Dublin OR London (Hybrid)
Permanent, Full-time Role
€90,000 / £78,000 (approx.) + Benefits
Overview
We are seeking a skilled Data Engineer to join our client's product engineering team, supporting Business Intelligence and Data Science initiatives primarily using AWS technologies. You will collaborate closely with their corporate technology team to build, automate, and maintain AWS infrastructure.
This role requires a highly competent, detail-oriented individual who stays current with evolving data engineering technologies.
Key Responsibilities
- Collaborate with data consumers, producers, and compliance teams to define requirements for data solutions with executive-level impact.
- Design, build, and maintain solutions for Business Intelligence and Data Science on AWS, including:
- Data ingestion pipelines
- ETL/ELT processes (batch and streaming)
- Curated data products
- Integrations with third-party tools
- Support and enhance the data lake, enterprise data catalog, cloud data warehouse, and data processing infrastructure.
- Provision and manage AWS services and infrastructure as code using the AWS CDK.
- Provide input on product/vendor selection, technology strategies, and architectural design.
- Identify and implement improvements to reduce waste, complexity, and redundancy.
- Manage workload efficiently to meet service levels and KPIs.
- Execute incident, problem, and change management processes as required.
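As a small illustration of the infrastructure-as-code responsibility above, a hedged sketch using the AWS CDK v2 in Python; the stack and bucket names are hypothetical:

```python
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DataLakeStack(Stack):
    """Provisions a versioned, encrypted landing bucket for raw data."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "RawLandingBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            removal_policy=RemovalPolicy.RETAIN,  # never delete data on stack teardown
        )

app = App()
DataLakeStack(app, "DataLakeStack")
app.synth()
```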
Qualifications
- Degree in Information Systems, Computer Science, Statistics, or a related quantitative field.
- 3+ years' experience with Spark in production environments, handling batch and stream processing jobs. Exposure to Ray is advantageous.
- 3+ years' experience with cloud data warehousing tools such as Snowflake, Redshift, BigQuery, or ClickHouse.
- Expert SQL skills, with exposure to HiveQL.
- Proficiency in Java, Scala, Python, and TypeScript programming.
- Strong understanding of AWS security mechanisms, particularly relating to S3, Kinesis, EMR, Glue, and Lake Formation.
- Experience with GitHub, DataDog, and AWS.
- Proven ability to learn and apply open-source tools independently.
- Strong ownership mindset and proactive approach.
Data Engineer
Posted today
Job Description
We are seeking a highly skilled Azure Data Engineer with strong experience in Dynamics 365 to join our team and support the delivery of our Dynamics 365 implementation. The ideal candidate will have extensive experience with Azure and Azure Data Factory and a strong background in integrating systems using various technologies and patterns. This role involves creating API interfaces using Azure Integration Services and integrating with Dynamics 365, leveraging Dataflows, OData, and other patterns.
Key Responsibilities:
- Design and implement scalable ETL/ELT pipelines using Azure Data Factory (ADF), including mapping data flows and parameterized pipelines.
- Integrate with Microsoft Dataverse and Dynamics 365 using ADF's native connectors, OData endpoints, and REST APIs.
- Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions.
- Develop and optimise dataflows for transformation logic, including column reduction, lookups, and upsert strategies.
- Implement delta load strategies and batch endpoints to manage large-scale data synchronisation efficiently.
- Ensure data consistency and integrity across systems by leveraging unique identifiers and business keys.
- Collaborate with business analysts and architects to translate business requirements into technical solutions.
- Monitor, troubleshoot, and optimise pipeline performance using ADF's built-in diagnostics and logging tools.
- Contribute to data governance, security, and compliance by implementing best practices in access control and data handling.
- Collaborate on the development of new standards and practices to improve the quality, capability, and velocity of the team.
- Mentor and coach junior team members new to Azure to ensure the team builds a strong capability.
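For a sense of the OData integration pattern this role centres on, a hedged Python sketch that queries the Dataverse Web API (an OData v4 endpoint); the org URL and token handling are placeholders, and production integrations would typically use MSAL for auth or ADF's native connectors instead:

```python
import requests

ORG_URL = "https://yourorg.api.crm.dynamics.com"  # hypothetical environment URL
TOKEN = "<OAuth 2.0 bearer token>"  # in practice, acquired via MSAL / Azure AD

resp = requests.get(
    f"{ORG_URL}/api/data/v9.2/accounts",
    params={
        "$select": "name,accountnumber",
        "$filter": "statecode eq 0",  # active accounts only
        "$top": "50",
    },
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/json",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
    },
    timeout=30,
)
resp.raise_for_status()
for account in resp.json()["value"]:
    print(account["name"], account.get("accountnumber"))
```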
Requirements:
- ADF Expertise. Proficient in building pipelines, triggers, and mapping dataflows. Experience with parameterisation, error handling, and pipeline orchestration.
- OData & API Integration. Strong understanding of the OData protocol. Experience integrating with REST APIs.
- Dataverse & Dynamics 365. Hands-on experience with Dataverse schema, connector configuration, and data model mapping. Familiarity with D365 entity relationships.
- Data Modelling. Understanding of conceptual, logical, and physical data models.
- SQL & Scripting. Strong SQL skills for data extraction and transformation.
- Testing & Automation. Experience with end-to-end testing of data pipelines and automation of data validation processes.
- Documentation & Collaboration. Ability to document data flows, transformation logic, and integration patterns. Strong communication skills for cross-functional collaboration.
- Azure Developer certification is highly desirable.
- Excellent problem-solving skills and the ability to work collaboratively in a team environment.
- Dynamics 365 experience is a must.
Data Engineer
Posted today
Job Description
Role Description
The Data Engineer is responsible for designing, building, and maintaining scalable data pipelines and architectures that enable efficient data collection, processing, and analysis. This role ensures that high-quality, reliable data is available to support business intelligence, analytics, and machine learning initiatives. The ideal candidate is technically strong, detail-oriented, and passionate about building robust data systems that transform raw data into actionable insights.
Key Responsibilities
- Design, develop, and optimize data pipelines, ETL/ELT processes, and workflows for structured and unstructured data.
- Build and maintain scalable data architectures that support data warehousing, analytics, and reporting needs.
- Integrate data from multiple sources such as APIs, databases, and third-party systems into centralized data platforms.
- Collaborate with data analysts, data scientists, and business teams to understand data requirements and ensure data accuracy and availability.
- Develop and enforce best practices for data governance, security, and quality assurance.
- Monitor, troubleshoot, and optimize data processes for performance and cost efficiency.
- Implement data validation, cleansing, and transformation procedures to maintain data integrity.
- Work with cloud platforms (e.g., AWS, Azure, GCP) to manage data storage, orchestration, and automation tools.
- Create and maintain documentation for data models, data flow diagrams, and pipeline configurations.
- Support the development of analytics and machine learning pipelines by providing clean and well-structured datasets.
- Collaborate with DevOps teams to deploy, scale, and maintain data infrastructure in production environments.
- Continuously improve data engineering practices through automation, monitoring, and innovation.
- Stay updated on emerging technologies and trends in data architecture, big data, and cloud computing.
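As a small, hedged illustration of the data validation and cleansing responsibility above, a pandas sketch that applies a few quality rules and filters out failing rows; the column names and rules are hypothetical:

```python
import pandas as pd

def validate_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Apply basic quality rules and return only the rows that pass."""
    checks = {
        "null_order_id": df["order_id"].isna(),
        "negative_amount": df["amount"] < 0,
        "duplicate_order_id": df.duplicated(subset=["order_id"], keep="first"),
    }
    failed = pd.concat(checks, axis=1).any(axis=1)
    if failed.any():
        # A real pipeline would route these rows to a quarantine table instead
        print(f"Rejected {int(failed.sum())} of {len(df)} rows")
    return df.loc[~failed]

# Example usage with toy data
orders = pd.DataFrame({
    "order_id": [1, 2, 2, None],
    "amount": [10.0, -5.0, 20.0, 30.0],
})
print(validate_orders(orders))
```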
Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field.
- 2–5 years of experience in data engineering, data warehousing, or database development.
- Strong proficiency in SQL and at least one programming language (Python, Java, or Scala preferred).
- Hands-on experience with ETL tools and frameworks (e.g., Apache Airflow, dbt, or Talend).
- Experience with big data technologies such as Spark, Hadoop, or Kafka.
- Familiarity with cloud-based data services (AWS Redshift, Google BigQuery, Azure Synapse, or Snowflake).
- Solid understanding of data modeling, schema design, and database management (relational and NoSQL).
- Knowledge of APIs, data integration, and data streaming methodologies.
- Strong problem-solving, analytical, and debugging skills.
- Excellent collaboration and communication abilities to work cross-functionally.
- Experience with containerization tools (Docker, Kubernetes) and CI/CD pipelines is a plus.
- Commitment to building efficient, scalable, and reliable data systems that support business growth.
Data Engineer
Posted today
Job Description
Data Engineer - Flexible Working
Permanent | Hybrid | Dublin
€50,000 - €0,000 DOE
TechHeads is excited to bring you a new opportunity for a Data Engineer to join a growing team within a forward-thinking organisation that values collaboration, innovation, and continuous improvement.
In this position, you will be responsible for gathering requirements from stakeholders and building complex, scalable reports that will be used by users throughout the organisation. You will work with modern tools such as Power BI, DAX, and Azure, allowing you to develop your technical skillset with industry-relevant tech.
This full-time role, based in Dublin, will give you the opportunity to join an employee-focused organisation. They support a flexible working model of two days onsite, as well as a culture of internal progression, offering you excellent work-life balance and growth potential.
If you're looking for an impactful role where you can work with flexibility and modern technologies, this role is for you!
Required:
- 2+ years' experience developing interactive dashboards and reports in Power BI.
- 2+ years' experience working with SQL.
- Experience working with DAX for creating measures, calculated columns, and complex business logic.
- Experience with ETL or other data engineering-related activities.
- Experience implementing security and RLS in Power BI and Azure.
- Strong analytical and problem-solving skills.
Salary: ,000 - ,000 DOE
Benefits: Pension, Flexible Working and more
If you would like to be considered for this position, please share a copy of your updated CV to
Data Engineer
Posted today
Job Description
Data Engineer – 6 Month Contract, Hybrid, Dublin
What's Involved
- Build and maintain data pipelines using Snowflake.
- Optimise Snowflake architecture and models for performance and scalability.
- Translate business requirements into technical solutions in collaboration with analysts and architects.
- Implement data governance, access control, and security best practices.
- Monitor, troubleshoot, and fine-tune pipeline performance using ADF and Snowflake tools.
- Document data flows, transformations, and integrations clearly and consistently.
- Mentor and support junior engineers in data engineering best practices.
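To sketch what "building pipelines using Snowflake" can look like at the code level, a minimal example with the snowflake-connector-python package; the account, credentials, and table names are placeholders, and a real setup would use key-pair or SSO auth rather than a password:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",  # hypothetical account identifier
    user="ETL_SERVICE",
    password="...",             # placeholder; prefer key-pair auth in practice
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    # Idempotent upsert from a staging table into a curated model
    conn.cursor().execute("""
        MERGE INTO ANALYTICS.CORE.ORDERS AS t
        USING ANALYTICS.STAGING.ORDERS_RAW AS s
          ON t.ORDER_ID = s.ORDER_ID
        WHEN MATCHED THEN UPDATE SET t.AMOUNT = s.AMOUNT
        WHEN NOT MATCHED THEN
          INSERT (ORDER_ID, AMOUNT) VALUES (s.ORDER_ID, s.AMOUNT)
    """)
finally:
    conn.close()
```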
What's Needed
- 7+ years' experience in data engineering or integration roles.
- Strong Snowflake and SQL Server skills for data design and optimisation.
- Proven ability with ADF pipelines, including parameterisation and error handling.
- Solid grasp of data modelling and modern data-warehouse design.
- Experience with data testing, automation, and validation.
- Strong collaboration and documentation skills.
- Snowflake or Azure Data Engineer certification preferred.
Data Engineer
Posted today
Job Description
Data Engineer
Permanent
Dublin/Hybrid
Requirements
We are looking for an innovative data engineer who will lead the technical design and development of an Analytic Foundation. The Analytic Foundation is a suite of individually commercialized analytical capabilities that also includes a comprehensive data platform. These services will be offered through a series of APIs that deliver data and insights from various points along a central data store. This individual will partner closely with other areas of the business to build and enhance solutions that drive value for our customers.
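As a rough, hedged illustration of the "insights delivered through APIs" pattern described above, a minimal FastAPI sketch; the route, metric names, and in-memory dictionary are hypothetical stand-ins for real services backed by the central data store:

```python
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Analytics API (illustrative)")

# Stand-in for a query against the central data store
METRICS = {"spend_trend": {"2025-09": 1.02, "2025-10": 1.07}}

@app.get("/v1/insights/{metric}")
def get_insight(metric: str) -> dict:
    """Return the time series for a named metric."""
    if metric not in METRICS:
        raise HTTPException(status_code=404, detail="unknown metric")
    return {"metric": metric, "values": METRICS[metric]}
```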
Position Responsibilities:
As a Data Engineer within the Advanced Analytics team, you will:
• Play a large role in the implementation of complex features
• Push the boundaries of analytics and powerful, scalable applications
• Build and maintain analytics and data models to enable performant and scalable products
• Ensure a high-quality code base by writing and reviewing performant, well-tested code
• Mentor junior engineers and teammates
• Drive innovative improvements to team development processes
• Partner with Product Managers and Customer Experience Designers to develop a deep understanding of users and use cases and apply that knowledge to scoping and building new modules and features
• Collaborate across teams with exceptional peers who are passionate about what they do
Ideal Candidate Qualifications:
• 4+ years of full stack engineering experience in an agile production environment
• Experience leading the design and implementation of large, complex features in full-stack applications
• Ability to easily move between business, data management, and technical teams; ability to quickly intuit the business use case and identify technical solutions to enable it
• Experience leveraging open source tools, predictive analytics, machine learning, Advanced Statistics, and other data techniques to perform analyses
• High proficiency in Python or Scala, Spark, Hadoop platforms & tools (Hive, Impala, Airflow, NiFi, Sqoop), and SQL to build Big Data products & platforms
• Experience building and deploying production-level data-driven applications, data processing workflows/pipelines, and/or machine learning systems at scale in Java, Scala, or Python, delivering analytics across all phases: data ingestion, feature engineering, modeling, tuning, evaluation, monitoring, and presentation
• Experience in cloud technologies like Databricks/AWS/Azure
• Strong technologist with proven track record of learning new technologies and frameworks
• Customer-centric development approach
• Passion for analytical / quantitative problem solving
• Experience identifying and implementing technical improvements to development processes
• Collaboration skills with experience working with people across roles and geographies
• Motivation, creativity, self-direction, and desire to thrive on small project teams
• Superior academic record with a degree in Computer Science or related technical field
• Strong written and verbal English communication skills