205 Hadoop jobs in Ireland
Senior Hadoop Software Engineer
Posted today
Job Description
At eBay, we're more than a global ecommerce leader — we're changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We're committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts.
Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet.
Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all.
About The Team And The Role
Our Hadoop team plays a pivotal role in eBay's data infrastructure, ensuring robust and scalable solutions for data processing and storage. We are aligned with eBay's strategic goals to leverage data-driven insights for enhanced decision-making and customer experience. By managing and optimizing Hadoop-related projects, we support the company's commitment to innovation and operational excellence.
The Role Within The Hadoop Team Involves
- Scope: Overseeing and enhancing Hadoop-related projects to meet eBay's extensive data scale requirements, creating customer-facing tools, and ensuring smooth integration with other systems.
- Impact: Directly influencing eBay's data strategy by enhancing data processing capabilities, improving system performance, and driving innovation across the organization.
What You Will Accomplish
- Enhance System Availability and Scalability: Spearhead efforts to optimize Hadoop-related projects, ensuring the system is highly available and scalable to meet eBay's growing data demands. Your work will be pivotal in maintaining uninterrupted service and accommodating future growth, supporting eBay's strategic objectives.
- Drive High-Impact Projects: Lead initiatives that directly contribute to eBay's ability to efficiently process and analyze vast amounts of data. Your enhancements will bolster system performance and reliability, vital for sustaining eBay's competitive edge.
- Develop Innovative Solutions: Create and refine customer-facing tools that improve user experience and operational efficiency. Your contributions will streamline data access and management, making it easier for stakeholders to leverage insights.
- Enhance Integration: Ensure seamless integration of Hadoop systems with other platforms, fostering a cohesive data ecosystem. Your work will enable cross-functional teams to reduce manual work and use data insights effectively, driving informed decision-making across the organization.
What You Will Bring
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Strong programming skills in languages commonly used with Hadoop, such as Java, Scala, or Python, and knowledge of common algorithms and data structures to write efficient and optimized code.
- Familiarity with Linux/Unix systems, including shell scripting and system commands. Understanding of networking principles, as Hadoop often operates in distributed environments. Overall, the candidate is expected to have analytical skills to tackle complex distributed system challenges and optimize solutions.
- (Optional) Experience with big data technologies and frameworks; contributions to related open source projects are a plus.
Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay.
eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, disability, or other legally protected status. If you have a need that requires accommodation, please contact us at We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities.
Pharmacovigilance Data Analysis Manager
Posted 19 days ago
Job Description
**We believe that diversity adds value to our business, our teams, and our culture. We are committed to equal employment opportunity, fostering an inclusive environment where diversity makes us outstanding.**
Help us lead one of the world's largest pharmaceutical companies. We are a world leader in plasma-derived medicines with a presence in more than 100 countries, and a growing global team of over 20,000 people. That's why we need a _Pharmacovigilance Data Analysis Manager_ like you.
Role Mission: Provide operational support for global pharmacovigilance activities related to Grifols' investigational and marketed products. Ensure high-quality pharmacovigilance deliverables that comply with global regulatory reporting timelines. Manage project implementation and execution of safety systems, including ongoing business support and continuous improvement initiatives. Act as a key liaison with IT system administrators to validate and test system changes, ensuring compliance and alignment with business needs.
**What your responsibilities will be**
+ Lead and coordinate safety data analysis for aggregate reports preparation, signal management and ad hoc requests.
+ Support drug safety systems through business administration tasks, including database configuration updates, submission rules management and testing with regulatory authorities.
+ Serve as a subject matter expert in delivering and evaluating cost-effective, sustainable solutions that meet business requirements.
+ Manage documentation related to PV systems and applications, including SOPs, WPs, user requirements, functional/technical specifications and process flow diagrams.
+ Drive change management initiatives to ensure smooth adoption of new processes and support the integration of new applications within the PV team.
+ Collect, prioritize and plan system improvements based on user feedback, while ensuring compliance with regulatory requirements.
+ Act as the primary PV contact for IT-related PV projects.
+ Drive Innovation through AI in Pharmacovigilance: Stay at the forefront of artificial intelligence advancements to identify and evaluate innovative technologies and processes that can enhance pharmacovigilance operations. This includes proactively assessing AI-driven tools and methodologies to improve signal detection, case processing, data analysis and regulatory compliance. Collaborate cross-functionally to pilot and implement solutions that increase efficiency, accuracy, and strategic value in safety monitoring.
**Who you are**
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skills, education, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
+ You have a bachelor's degree in Health Sciences (pharmacy, nursing, medicine, veterinary, etc.) or Bioscience (biochemistry, biotechnology, biology, etc.), with additional training and/or experience in bioinformatics/biostatistics or data analysis tools (such as R or Power BI).
+ You have at least 4 years of pharmacovigilance experience, including management of pharmacovigilance databases.
+ You have proven knowledge of Good Pharmacovigilance Practices, existing legislation, regulations, guidelines, medical coding, and safety-data administration.
+ You are proficient in Windows and MS Office (Excel, PowerPoint, Visio, Word).
+ Familiarity with reporting tools such as Business Objects is strongly preferred.
+ You have knowledge of E2b (R2) and E2b (R3); familiarity with medical terminology, MedDRA, and WhoDrug is a plus.
+ You speak fluent Spanish and English.
+ You are a proven self-starter with a strong work ethic and the ability to exercise good judgment.
+ You must be proactive, results oriented and have strong attention to detail.
+ Strong organizational, analytical and problem-solving skills with the ability to make structured decisions on a routine basis.
+ Strong interpersonal skills with the ability to interact and collaborate with personnel at all levels in a team environment.
+ You possess strong technical writing and communication skills with ability to create and present design proposals, test scripts, execute training sessions and conduct effective meetings.
+ Ability to effectively prioritize and manage multiple tasks to ensure successful completion within targeted deadlines.
**What we offer**
This is a brilliant opportunity for you. Grifols is fully aware that its employees are one of its major assets. We are committed to maintaining an atmosphere that encourages all our employees to develop their professional careers in an excellent working environment.
Information about Grifols is available at If you are interested in joining our company and you have what it takes for such an exciting position, then don't hesitate to apply!
We look forward to receiving your application!
**We believe in diverse talent and want to remove any barriers that may hinder your participation. If you require any adjustments in our selection process, please do not hesitate to inform us when applying. We are here to help.**
Grifols is an equal opportunity employer.
**Flexible schedule:** Monday-Thursday, flexible start 7:00-10:00 and finish 16:00-19:00; Friday 8:00-15:00 (with the same flexible start time).
**Benefits package**
**Contract of Employment:** Permanent position
**Flexibility for U Program:** Hybrid
**Location:** Sant Cugat del Vallès (preferably) / Other locations as Los Angeles, Clayton or Dublin will be considered
**Req ID:**
**Type:** Permanent, full-time
**Job Category:** R&D
Big Data Engineer
Posted today
Job Description
About the Role:
We are looking for a Big Data Engineer to join one of our leading clients on an exciting project. The ideal candidate will have hands-on experience with large-scale data processing, Hadoop ecosystem tools, and cloud platforms, and will play a key role in building and optimizing data pipelines.
Tech Stack
- Programming Languages: Java / Scala / Python
- Data Processing Framework: Spark
- Big Data / Hadoop Frameworks: Hive, Impala, Oozie, Airflow, HDFS
- Cloud Experience: AWS, Azure, or GCP (services such as S3, Athena, EMR, Redshift, Glue, Lambda, etc.)
- Data & AI Platform: Databricks
Roles & Responsibilities
- Build, optimize, and maintain ETL pipelines using Hadoop ecosystem tools (HDFS, Hive, Spark); a minimal illustrative sketch follows this list.
- Collaborate with cross-functional teams to ensure efficient and reliable data processing workflows.
- Perform data modelling, implement quality checks, and carry out system performance tuning.
- Support modernization efforts, including migration and integration with cloud platforms and Databricks.
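For illustration only (not part of the client's job description): a minimal PySpark sketch of the kind of Hive/HDFS ETL step described in this list. The database, table, and column names are hypothetical.

```python
# Hypothetical PySpark ETL step: read a raw Hive table, aggregate, and write
# a curated table back to HDFS-backed storage.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders_daily_etl")
    .enableHiveSupport()  # allows reading/writing Hive tables
    .getOrCreate()
)

# Extract: load the raw table
orders = spark.table("raw_db.orders")

# Transform: filter completed orders and compute daily totals
daily_totals = (
    orders
    .filter(F.col("order_status") == "COMPLETED")
    .groupBy("order_date")
    .agg(
        F.sum("order_amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

# Load: write a partitioned, curated Hive table
(
    daily_totals.write
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("curated_db.daily_order_totals")
)

spark.stop()
```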
Preferred Qualifications
- Hands-on experience with large-scale data processing and distributed systems.
- Strong problem-solving and analytical skills.
- Familiarity with CI/CD pipelines and version control tools is a plus.
Job Types: Full-time, Permanent
Pay: €70,000.00-€85,000.00 per year
Work Location: In person
Application deadline: 10/10/2025
Reference ID: IJP - SBDE - DUBIR - 01
Expected start date: 19/10/2025
Lead Big Data Engineer
Posted today
Job Description
Genesys empowers organizations of all sizes to improve loyalty and business outcomes by creating the best experiences for their customers and employees. Through Genesys Cloud, the AI-powered Experience Orchestration platform, organizations can accelerate growth by delivering empathetic, personalized experiences at scale to drive customer loyalty, workforce engagement, efficiency and operational improvements.
We employ more than 6,000 people across the globe who embrace empathy and cultivate collaboration to succeed. And, while we offer great benefits and perks like larger tech companies, our employees have the independence to make a larger impact on the company and take ownership of their work. Join the team and create the future of customer experience together.
The Genesys Cloud Analytics platform is the foundation on which decisions are made that directly impact our customer's experience as well as their customers' experiences. We are a data-driven company, handling tens of millions of events per day to answer questions for both our customers and the business. From new features to enable other development teams, to measuring performance across our customer-base, to offering insights directly to our end-users, we use our terabytes of data to move customer experience forward.
In this role, you will be a technical leader, bringing your expertise to our Batch Analytics team, which manages EMR pipelines on Airflow that process petabytes of data. We're all about scale.
The ideal candidate will have a strong engineering background, will not shy away from the unknown, and will be able to turn vague requirements into something real. We are a team whose focus is to operationalize big data products and curate high-value datasets for the wider organization, as well as to build tools and services that expand the scope and improve the reliability of the data platform as our usage continues to grow.
Summary:
- Build and manage large scale pipelines using Spark and Airflow.
- Develop and deploy highly-available, fault-tolerant software that will help drive improvements towards the features, reliability, performance, and efficiency of the Genesys Cloud Analytics platform.
- Actively review code, mentor, and provide peer feedback.
- Engineer efficient, adaptable and scalable architecture for all stages of data lifecycle (ingest, streaming, structured and unstructured storage, search, aggregation) in support of a variety of data applications.
- Build abstractions and re-usable developer tooling to allow other engineers to quickly build streaming/batch self-service pipelines.
- Build, deploy, maintain, and automate large global deployments in AWS.
- Troubleshoot production issues and come up with solutions as required.
This may be the perfect job for you if:
- You have engineered scalable software using big data technologies (e.g., Hadoop, Spark, Hive, Presto, Elasticsearch, etc.).
- You have a strong engineering background with ability to design software systems from the ground up.
- You have expertise in Java. Python and other object-oriented languages are a plus.
- You have experience in web-scale data and large-scale distributed systems, ideally on cloud infrastructure.
- You have a product mindset. You are energized by building things that will be heavily used.
- You are open to mentoring and collaborating with junior members of the team.
- You are adaptable and open to exploring new technologies and prototyping solutions within a reasonable cadence.
- You design not just with a mind for solving a problem, but also with maintainability, testability, monitorability, and automation as top concerns.
Technologies we use and practices we hold dear:
- Right tool for the right job over we-always-did-it-this-way.
- We pick the language and frameworks best suited for specific problems.
- Ansible for immutable machine images.
- AWS for cloud infrastructure.
- Automation for everything. CI/CD, testing, scaling, healing, etc.
- Hadoop and Spark for batch processing.
- Airflow for orchestration (see the illustrative sketch after this list).
- Dynamo, Elasticsearch, Presto, and S3 for query and storage.
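For illustration only: a minimal Airflow DAG sketch of the kind of batch orchestration described above, triggering a daily Spark job. The DAG id, schedule, and spark-submit target are hypothetical; the real pipelines would more likely use the Amazon provider's EMR operators than a plain spark-submit.

```python
# Hypothetical Airflow 2.x DAG that schedules a daily Spark batch job.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_analytics_batch",   # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                # Airflow 2.4+ scheduling argument
    catchup=False,
) as dag:
    # Submit the batch job; {{ ds }} is Airflow's templated execution date.
    run_spark_batch = BashOperator(
        task_id="run_spark_batch",
        bash_command=(
            "spark-submit --deploy-mode cluster "
            "s3://example-bucket/jobs/daily_rollup.py --run-date {{ ds }}"
        ),
    )
```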
If a Genesys employee referred you, please use the link they sent you to apply.
About Genesys:
Genesys empowers more than 8,000 organizations worldwide to create the best customer and employee experiences. With agentic AI at its core, Genesys Cloud is the AI-Powered Experience Orchestration platform that connects people, systems, data and AI across the enterprise. As a result, organizations can drive customer loyalty, growth and retention while increasing operational efficiency and teamwork across human and AI workforces. To learn more, visit
Reasonable Accommodations:
If you require a reasonable accommodation to complete any part of the application process, or are limited in your ability to access or use this online application and need an alternative method for applying, you or someone you know may contact us at
You can expect a response within 24–48 hours. To help us provide the best support, click the email link above to open a pre-filled message and complete the requested information before sending. If you have any questions, please include them in your email.
This email is intended to support job seekers requesting accommodations. Messages unrelated to accommodation—such as application follow-ups or resume submissions—may not receive a response.
Genesys is an equal opportunity employer committed to fairness in the workplace. We evaluate qualified applicants without regard to race, color, age, religion, sex, sexual orientation, gender identity or expression, marital status, domestic partner status, national origin, genetics, disability, military and veteran status, and other protected characteristics.
Please note that recruiters will never ask for sensitive personal or financial information during the application phase.
Senior Big Data Engineer
Posted today
Job Description
New Roles - Senior Big Data Engineers in Galway - permanent or 11-month contract
Location: Galway - hybrid working - suited to candidates within commutable distance
Are you ready to take your software engineering career to the next level? Our client is on the lookout for a passionate and skilled Senior Software Engineer to join their innovative team. This is your chance to work on a highly strategic initiative focused on developing cutting-edge performance measurement and analytics software.
Why Join?
At our client, they believe in fostering a culture of collaboration, creativity, and continuous learning. Here, you'll have the opportunity to work with a diverse range of technologies, allowing you to leverage your existing skills while also expanding your knowledge.
What You'll Do:
Collaborate with a dynamic team of talented engineers to design and develop scalable, robust data platforms.
Utilise your experience in database technologies, particularly Snowflake and Oracle, to enhance our performance measurement capabilities.
Engage in Object-Oriented Software development using Java and apply your hands-on experience with Spark (Java or Scala).
Contribute to building efficient ETL data flows and ensure high-quality software delivery through DevOps practices.
Work in an agile scrum development environment, promoting innovative solutions and best practices.
The Expertise We're Looking For:
Bachelor's or Master's Degree in a technology-related field (e.g., Engineering, Computer Science) with 5+ years of design and development experience.
Strong expertise in database technologies, particularly Snowflake and Oracle.
Proficiency in Object-Oriented Software development with Java.
Hands-on experience with Spark (Java or Scala).
Familiarity with AWS EMR is a plus.
Experience with Cloud technologies (AWS), including Docker and EKS.
Proven ability to build scalable and robust ETL data flows.
Strong design and analysis skills for large data platforms.
Familiarity with DevOps tools and practices (Maven, Jenkins, GitHub, Terraform, Docker).
Excellent interpersonal, communication, and collaboration skills.
What's In It For You?
A vibrant workplace that encourages sharing and collaboration.
Opportunities for growth and continuous learning in a supportive environment.
The chance to contribute to innovative projects that impact the financial industry.
If you are a motivated individual who thrives in a collaborative setting and is excited about leveraging your skills to drive success, we want to hear from you!
Join us and be a part of a team that values your input and expertise.
Apply Today
Embrace the opportunity to make a difference in a dynamic workplace. Let's shape the future together!
Adecco is a disability-confident employer. It is important to us that we run an inclusive and accessible recruitment process to support candidates of all backgrounds and all abilities to apply. Adecco is committed to building a supportive environment for you to explore the next steps in your career. If you require reasonable adjustments at any stage, please let us know and we will be happy to support you.
Adecco Ireland is acting as an Employment Agency in relation to this vacancy.
Big Data Operations Engineer
Posted today
Job Description
Job Responsibilities:
- Responsible for the operation, maintenance, deployment, management, scaling, and optimization of core big data storage components. Ensure service stability and availability, and identify and resolve performance bottlenecks.
- Support various business teams regarding the use of big data components and assist in technical solution selection.
- Responsible for the operation and maintenance of the big data platform, ensuring the stability and availability of both real-time and offline services, and identifying and resolving performance issues.
- Promote the development of automated operation platforms to enhance the efficiency of operational work.
Job Requirements:
- Familiar with the Hadoop/Elasticsearch ecosystem; possess a solid understanding of mainstream distributed development suites such as Elasticsearch/HBase/Hive/Kafka/Zookeeper/Yarn/MR/Spark/Flink. Experience in installation, operation, and performance tuning is preferred.
- Proficient in basic command operations of Linux-based operating systems and capable of writing scripts for daily operational tasks.
Data Engineer
Posted today
Job Description
The Company:
From our roots in Ireland, CarTrawler has grown into the leading B2B technology provider of car rental and mobility solutions to the global travel industry. If you've ever booked a flight and seen the option to rent a car, that was probably us; but it's our people that make everything we do possible – and we're growing.
At CarTrawler, you'll find more than just a job. You'll find flexibility, meaningful impact, and a culture built by the people who live it every day. Our culture is built on high performance, genuine connection, and a shared commitment to making an impact, without sacrificing personal wellbeing. With flexible working models, meaningful time off, and dedicated growth opportunities, we enable people to do great work and feel good doing it.
We have a hybrid working policy with two mandatory days a week in our Dublin office, and you have the freedom to design a routine that supports your productivity and personal life. The office offers ample car parking, a heavily subsidized (KC Peaches) canteen, convenient proximity to the Luas, and access to EV charging stations.
Role Purpose:
We are seeking a Data Engineer on a 6 month fixed term contract to develop and maintain our Snowflake data warehouse and data marts, designing and optimizing ETL processes and data models to ensure accuracy and scalability. The role requires strong skills in SQL, Python, and stored procedures, with hands-on use of Snowflake, Airflow MWAA, Soda, and DBT. Working closely with Data Engineering and wider P&T teams, you will build secure, high-performing data solutions that support business-critical initiatives.
Responsibilities & Accountabilities
- Build & Optimize Data Pipelines: Design, construct, and maintain robust, scalable ETL/ELT pipelines using dbt and Airflow to integrate new data sources and manage changes to SQL jobs (see the illustrative sketch after this list).
- Troubleshoot issues with existing SQL/ETL processes and data loads, driving them to resolution.
- Design and build extensible data models and integration solutions using tools such as Snowflake, Airflow, Soda, DBT, and AWS S3.
- Implement and enforce best practices for data quality, testing, and documentation to ensure data is accurate, consistent, and trustworthy.
- Continuously optimize our Snowflake data warehouse, refining data models and architecture to improve query performance, scalability, and cost-efficiency.
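For illustration only: a minimal Python sketch of a freshness check against a Snowflake table, the kind of data-quality safeguard these responsibilities describe. Connection parameters, database, and table names are hypothetical; in practice Soda or dbt tests would usually express such checks declaratively.

```python
# Hypothetical Snowflake freshness check using the snowflake-connector-python driver.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",      # hypothetical account identifier
    user="ETL_SERVICE_USER",
    password="********",            # in practice, fetched from a secrets manager
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="MARTS",
)

try:
    cur = conn.cursor()
    # Simple row-count freshness check for yesterday's load.
    cur.execute(
        "SELECT COUNT(*) FROM FCT_BOOKINGS "
        "WHERE BOOKING_DATE = CURRENT_DATE - 1"
    )
    (row_count,) = cur.fetchone()
    if row_count == 0:
        raise ValueError("No bookings loaded for yesterday - failing the run")
    print(f"Freshness check passed: {row_count} rows for yesterday")
finally:
    conn.close()
```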
Skills & Experience Required
- 3+ years of experience in a Data Engineering or similar role.
- Hands-on experience with building data pipelines, data models and implementing data quality.
- Experience accessing and manipulating structured and semi-structured data (JSON) in various cloud data environments, using any of the following technologies: Snowflake, Redshift, Hadoop, Spark, cloud storage, AWS, or consuming data from APIs.
- Expert proficiency in Python (or Scala) for data manipulation and scripting, plus advanced SQL and database stored procedures for complex querying and data modelling.
- Experience with orchestration and monitoring tools such as Airflow.
- Solid understanding of ETL/ELT design patterns, data modelling principles and database architecture.
- Experience with full-stack development, implementing continuous integration and automated tests (e.g., GitHub, Jenkins).
- Proven ability to work creatively and analytically in a fast-paced, problem-solving environment.
- Excellent communication (verbal and written) and interpersonal skills.
- Proven ability to communicate complex analysis in a clear, precise, and actionable manner.
Research shows that individuals from underrepresented backgrounds often hesitate to apply for roles unless they meet every single qualification, while others may apply when they meet only a portion of the criteria. If you believe you have the skills and potential to succeed in this role, even if you don't meet every listed requirement, we encourage you to apply. We'd love to hear from you and explore whether you could be a great fit.
Data Engineer
Posted today
Job Description
Data Engineer – Dublin OR London (Hybrid)
Permanent, Full-time Role
€90,000 / £78,000 (approx.) + Benefits
Overview
We are seeking a skilled Data Engineer to join our client's product engineering team, supporting Business Intelligence and Data Science initiatives primarily using AWS technologies. You will collaborate closely with their corporate technology team to build, automate, and maintain AWS infrastructure.
This role requires a highly competent, detail-oriented individual who stays current with evolving data engineering technologies.
Key Responsibilities
- Collaborate with data consumers, producers, and compliance teams to define requirements for data solutions with executive-level impact.
- Design, build, and maintain solutions for Business Intelligence and Data Science on AWS, including:
- Data ingestion pipelines
- ETL/ELT processes (batch and streaming)
- Curated data products
- Integrations with third-party tools
- Support and enhance the data lake, enterprise data catalog, cloud data warehouse, and data processing infrastructure.
- Provision and manage AWS services and infrastructure as code using Amazon CDK (a minimal CDK sketch follows this list).
- Provide input on product/vendor selection, technology strategies, and architectural design.
- Identify and implement improvements to reduce waste, complexity, and redundancy.
- Manage workload efficiently to meet service levels and KPIs.
- Execute incident, problem, and change management processes as required.
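For illustration only: a minimal AWS CDK (Python) sketch of the infrastructure-as-code approach mentioned above, defining a single versioned S3 bucket in a stack. Stack and bucket names are hypothetical.

```python
# Hypothetical AWS CDK v2 app defining one bucket for a data-lake landing zone.
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct


class DataLakeStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Raw landing bucket; versioning helps recover from bad loads.
        s3.Bucket(self, "RawDataBucket", versioned=True)


app = App()
DataLakeStack(app, "DataLakeStack")
app.synth()  # emits the CloudFormation template; deployed with `cdk deploy` in practice
```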
Qualifications
- Degree in Information Systems, Computer Science, Statistics, or a related quantitative field.
- 3+ years experience with Spark in production environments, handling batch and stream processing jobs. Exposure to Ray is advantageous.
- 3+ years experience with cloud data warehousing tools such as Snowflake, Redshift, BigQuery, or ClickHouse.
- Expert SQL skills, with exposure to HiveQL.
- Proficiency in Java, Scala, Python, and TypeScript programming.
- Strong understanding of AWS security mechanisms, particularly relating to S3, Kinesis, EMR, Glue, and LakeFormation.
- Experience with GitHub, DataDog, and AWS.
- Proven ability to learn and apply open-source tools independently.
- Strong ownership mindset and proactive approach.
Data Engineer
Posted today
Job Description
We are seeking a highly skilled Azure Data Engineer with strong experience in Dynamics 365 to join our team and support the delivery of our Dynamics 365 implementation. The ideal candidate will have extensive experience in Azure and Azure Data Factory and a strong background in integrating systems using various technologies and patterns. This role involves creating API interfaces using Azure Integration Services and integrating with Dynamics 365, leveraging Dataflows, OData, and other patterns.
Key Responsibilities:
- Design and implement scalable ETL/ELT pipelines using Azure Data Factory (ADF), including mapping data flows and parameterized pipelines.
- Integrate with Microsoft Dataverse and Dynamics 365 using ADF's native connectors, OData endpoints, and REST APIs (a minimal OData query sketch follows this list).
- Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions.
- Develop and optimise dataflows for transformation logic, including column reduction, lookups, and upsert strategies.
- Implement delta load strategies and batch endpoints to manage large-scale data synchronisation efficiently.
- Ensure data consistency and integrity across systems by leveraging unique identifiers and business keys.
- Collaborate with business analysts and architects to translate business requirements into technical solutions.
- Monitor, troubleshoot, and optimise pipeline performance using ADF's built-in diagnostics and logging tools.
- Contribute to data governance, security, and compliance by implementing best practices in access control and data handling.
- Collaborate on the development of new standards and practices to improve the quality, capability, and velocity of the team.
- Mentor and coach junior team members new to Azure to ensure the team builds up a strong capability.
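For illustration only: a minimal Python sketch of an OData query against the Dataverse/Dynamics 365 Web API, as referenced in the integration responsibilities above. The organization URL and entity set are hypothetical, and acquiring the Azure AD bearer token (e.g., via MSAL) is assumed to happen elsewhere.

```python
# Hypothetical OData read from the Dataverse Web API using plain HTTP.
import requests

ORG_URL = "https://example-org.crm.dynamics.com"  # hypothetical environment URL
ACCESS_TOKEN = "<azure-ad-bearer-token>"          # acquired out of band

response = requests.get(
    f"{ORG_URL}/api/data/v9.2/accounts",
    params={
        "$select": "name,accountnumber,modifiedon",
        # Delta-style loads typically filter on a modification timestamp
        "$filter": "modifiedon ge 2025-01-01T00:00:00Z",
        "$top": "100",
    },
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/json",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
    },
    timeout=30,
)
response.raise_for_status()

for account in response.json().get("value", []):
    print(account["name"], account["modifiedon"])
```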
Requirements:
- ADF Expertise. Proficient in building pipelines, triggers, and mapping dataflows. Experience with parameterisation, error handling, and pipeline orchestration
- OData & API Integration. Strong understanding of the OData protocol. Experience integrating with REST APIs
- Dataverse & Dynamics 365. Hands-on experience with Dataverse schema, connector configuration, and data model mapping. Familiarity with D365 entity relationships
- Data Modelling. Understanding of conceptual, logical, and physical data models.
- SQL & Scripting. Strong SQL skills for data extraction and transformation
- Testing & Automation. Experience with end-to-end testing of data pipelines and automation of data validation processes
- Documentation & Collaboration. Ability to document data flows, transformation logic, and integration patterns. Strong communication skills for cross-functional collaboration.
- Azure Developer certification is highly desirable.
- Excellent problem-solving skills and the ability to work collaboratively in a team environment.
- Dynamics 365 experience is a must
Data Engineer
Posted today
Job Description
Role Description
The Data Engineer is responsible for designing, building, and maintaining scalable data pipelines and architectures that enable efficient data collection, processing, and analysis. This role ensures that high-quality, reliable data is available to support business intelligence, analytics, and machine learning initiatives. The ideal candidate is technically strong, detail-oriented, and passionate about building robust data systems that transform raw data into actionable insights.
Key Responsibilities
- Design, develop, and optimize data pipelines, ETL/ELT processes, and workflows for structured and unstructured data.
- Build and maintain scalable data architectures that support data warehousing, analytics, and reporting needs.
- Integrate data from multiple sources such as APIs, databases, and third-party systems into centralized data platforms.
- Collaborate with data analysts, data scientists, and business teams to understand data requirements and ensure data accuracy and availability.
- Develop and enforce best practices for data governance, security, and quality assurance.
- Monitor, troubleshoot, and optimize data processes for performance and cost efficiency.
- Implement data validation, cleansing, and transformation procedures to maintain data integrity.
- Work with cloud platforms (e.g., AWS, Azure, GCP) to manage data storage, orchestration, and automation tools.
- Create and maintain documentation for data models, data flow diagrams, and pipeline configurations.
- Support the development of analytics and machine learning pipelines by providing clean and well-structured datasets.
- Collaborate with DevOps teams to deploy, scale, and maintain data infrastructure in production environments.
- Continuously improve data engineering practices through automation, monitoring, and innovation.
- Stay updated on emerging technologies and trends in data architecture, big data, and cloud computing.
Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field.
- 2–5 years of experience in data engineering, data warehousing, or database development.
- Strong proficiency in SQL and at least one programming language (Python, Java, or Scala preferred).
- Hands-on experience with ETL tools and frameworks (e.g., Apache Airflow, dbt, or Talend).
- Experience with big data technologies such as Spark, Hadoop, or Kafka.
- Familiarity with cloud-based data services (AWS Redshift, Google BigQuery, Azure Synapse, or Snowflake).
- Solid understanding of data modeling, schema design, and database management (relational and NoSQL).
- Knowledge of APIs, data integration, and data streaming methodologies.
- Strong problem-solving, analytical, and debugging skills.
- Excellent collaboration and communication abilities to work cross-functionally.
- Experience with containerization tools (Docker, Kubernetes) and CI/CD pipelines is a plus.
- Commitment to building efficient, scalable, and reliable data systems that support business growth.