Jobs
Interviews

250 ETL Pipelines Jobs

Set up a Job Alert
JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Consultant - Data Engineer at AstraZeneca, you will contribute to the discovery, development, and commercialization of life-changing medicines by enhancing data platforms built on AWS services. Based at Chennai GITC, you will collaborate with experienced engineers to design and implement efficient data products, supporting data platform initiatives with a focus on impacting patients and saving lives.

Your key accountabilities as a Data Engineer will include:

Technical Expertise:
- Designing, developing, and implementing scalable processes to extract, transform, and load data from various sources into data warehouses.
- Demonstrating expert understanding of AstraZeneca's implementation of data products, managing SQL queries and procedures for optimal performance.
- Providing support on production issues and enhancements through JIRA.

Quality Engineering Standards:
- Monitoring and optimizing data pipelines, troubleshooting issues, and maintaining quality standards in design, code, and data models.
- Offering detailed analysis and documentation of processes and flows as needed.

Collaboration:
- Working closely with data engineers to understand data sources, transformations, and dependencies thoroughly.
- Collaborating with cross-functional teams to ensure seamless data integration and reliability.

Innovation and Process Improvement:
- Driving the adoption of new technologies and tools to enhance data engineering processes and efficiency.
- Recommending and implementing enhancements to improve the reliability, efficiency, and quality of data processing pipelines.

To be successful in this role, you should have:
- A Bachelor's degree in Computer Science, Information Technology, or a related field.
- Strong experience with SQL, warehousing, and building ETL pipelines.
- Proficiency with columnar databases such as Redshift, Cassandra, and BigQuery.
- Deep SQL knowledge for data extraction, transformation, and reporting.
- Excellent communication skills for effective collaboration with technical and non-technical stakeholders.
- Strong analytical skills to troubleshoot and deliver solutions in complex data environments.
- Experience with Agile development techniques and methodologies.

Desirable skills and experience include knowledge of Databricks/Snowflake, proficiency in scripting and programming languages such as Python, experience with reporting tools such as Power BI, and prior experience in pharmaceutical or healthcare industry IT environments.

Join AstraZeneca's dynamic team to drive cross-company change and disrupt the industry while making a direct impact on patients through innovative data solutions and technologies. Apply now to be part of our ambitious journey towards becoming a digital and data-led enterprise.
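For illustration, a minimal sketch of the kind of extract-transform-load step this role describes, using pandas and SQLAlchemy; the connection strings, table, and columns are hypothetical, not taken from the posting (Redshift is Postgres-wire-compatible, so a Postgres dialect is assumed here):

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection strings; replace with real credentials.
source = create_engine("postgresql://user:pass@source-db:5432/clinical")
warehouse = create_engine("postgresql://user:pass@redshift-cluster:5439/analytics")

# Extract: pull last week's records from an assumed source table.
df = pd.read_sql(
    "SELECT visit_id, site, visit_date, outcome "
    "FROM trial_visits WHERE visit_date >= CURRENT_DATE - 7",
    source,
)

# Transform: drop incomplete rows and derive a reporting column.
df = df.dropna(subset=["visit_id"])
df["visit_week"] = pd.to_datetime(df["visit_date"]).dt.to_period("W").astype(str)

# Load: append into an assumed warehouse staging table.
df.to_sql("stg_trial_visits", warehouse, if_exists="append", index=False)
```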

Posted 1 day ago

Apply

13.0 - 17.0 years

0 Lacs

Haryana

On-site

You will be joining our team as a Technical Support Analyst (Data-Focused), working remotely to support our growing analytics/tech-support team. In this hybrid role, you will bridge data analysis, technical investigation, and hands-on coding to resolve data issues, maintain script-based tools, and provide technical insights to our internal stakeholders. The ideal candidate possesses a strong analytical mindset, excellent Python and SQL skills, experience with Athena or large-scale datasets, and keen attention to detail.

Your primary responsibilities will include investigating and resolving data inconsistencies or anomalies reported by business or engineering teams, writing and debugging Python scripts for automation and data analysis, running SQL queries for data investigations, analyzing logs and data flows to troubleshoot technical issues, building and supporting internal tools for monitoring and reporting, and collaborating with product, data, and engineering teams to ensure data quality and integrity. Additionally, you will be expected to maintain detailed documentation of your findings, tools, and processes.

To excel in this role, you should have strong Python skills for scripting and basic data processing, proficiency in SQL including joins and analytical queries, hands-on experience with AWS Athena or similar distributed query engines, general coding/debugging experience, familiarity with CLI/terminal environments and structured/unstructured data, proven attention to detail, strong analytical and problem-solving skills, and technical curiosity.

Preferred skills include experience with AWS tools such as S3, Glue, Lambda, and CloudWatch; familiarity with Git and basic version control workflows; an understanding of ETL pipelines, data validation, or schema management; exposure to dashboards such as Metabase, Looker, or Tableau; the ability to work with REST APIs for debugging or automation; and experience in cross-functional, collaborative teams.

In terms of education and experience, a Bachelor's degree in a technical field (Computer Science, Engineering, Data Science) or equivalent practical experience is required, along with at least 3 years of experience in a technical analyst, data support, or operational engineering role. We are looking for a candidate with strong communication skills to explain findings to both technical and non-technical audiences, a team-first attitude with a willingness to help others and adapt to changing priorities, a proactive, detail-oriented, and solution-driven approach, and the ability to multitask between support tickets, tooling, and analysis.

Joining our team means being part of Affle, a global technology company focused on delivering consumer recommendations and conversions through relevant mobile advertising. Affle aims to enhance returns on marketing investment by reducing digital ad fraud and powering unique consumer journeys for marketers. Affle Holdings is the Singapore-based promoter for Affle 3i Limited, with investors including Microsoft and Bennett Coleman & Company (BCCL). If you are interested in being part of a dynamic team and making a difference in the mobile advertising industry, please visit www.affle.com for more information.
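As a sketch of the Athena-based investigation work described above, the following uses boto3's Athena API with hypothetical database, table, and results-bucket names:

```python
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Hypothetical database, table, and results bucket.
query = """
    SELECT event_date, COUNT(*) AS events
    FROM clickstream.events
    WHERE event_date >= date_add('day', -7, current_date)
    GROUP BY event_date
    ORDER BY event_date
"""

run = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "clickstream"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/adhoc/"},
)

# Poll until the query finishes, then print the first page of results.
qid = run["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```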

Posted 1 day ago

Apply

6.0 - 10.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

You should have 6-8 years of hands-on experience with Big Data technologies such as PySpark (DataFrame and SparkSQL), Hadoop, and Hive, along with good hands-on experience with Python and Bash scripts and a solid understanding of SQL and data warehouse concepts. Strong analytical, problem-solving, data analysis, and research skills are crucial for this role, as is a demonstrable ability to think creatively and independently rather than relying solely on readily available tools. Excellent communication, presentation, and interpersonal skills are a must for effective collaboration within the team.

Hands-on experience with cloud-platform Big Data services such as IAM, Glue, EMR, Redshift, S3, and Kinesis is required. Experience in orchestrating with Airflow or any job scheduler is highly beneficial, and familiarity with migrating workloads from on-premise to cloud, and from cloud to cloud, is also desired.

In this role, you will develop efficient ETL pipelines based on business requirements while adhering to development standards and best practices. Your responsibilities will include integration testing of different pipelines in the AWS environment, providing estimates for development, testing, and deployment on various environments, and participating in code peer reviews to ensure compliance with best practices. Creating cost-effective AWS pipelines using services such as S3, IAM, Glue, EMR, and Redshift is a key aspect of this position.

The required experience ranges from 6 to 8 years in relevant fields. The job reference number for this position is 13024.
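A minimal PySpark sketch of the DataFrame-plus-SparkSQL style of ETL this posting calls for; the S3 paths and schema are assumed for illustration:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Hypothetical raw-zone path and schema.
orders = spark.read.parquet("s3://raw-zone/orders/")
orders.createOrReplaceTempView("orders")

# SparkSQL transformation: daily revenue per region.
daily = spark.sql("""
    SELECT region, to_date(order_ts) AS order_date, SUM(amount) AS revenue
    FROM orders
    WHERE status = 'COMPLETED'
    GROUP BY region, to_date(order_ts)
""")

# DataFrame API used for an extra derived column.
daily = daily.withColumn("revenue_lakhs", F.col("revenue") / 100000)

daily.write.mode("overwrite").partitionBy("order_date").parquet("s3://curated-zone/daily_revenue/")
```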

Posted 1 day ago

Apply

2.0 - 6.0 years

0 Lacs

Delhi

On-site

As an ML Engineer, you will play a crucial role in developing and implementing Machine Learning models that contribute to the growth of the company. Your primary focus will be on creating ML workflows tailored to the legal industry to enhance the efficiency of in-house teams at Draft n Craft. This position offers you the opportunity to strengthen your ML/Data Engineering skills while actively participating in the creation of valuable products.

Your responsibilities will include developing data ingestion and preprocessing pipelines to extract actionable insights from legal data, ensuring accurate and consistent data cleaning for ML models, and experimenting with different ML models and architectures suitable for the legal industry. You will also deploy ML models in production environments, analyze model performance metrics, and improve model efficiency. Additionally, you will document data preprocessing steps, model development processes, and optimization techniques. Collaboration with software engineers within the company will be essential, requiring an understanding of software development terminology. Integrating ML models into existing software solutions and employee workflows will also be part of your role.

The ideal candidate holds a degree in Computer Science Engineering, Data Science, or a related field. A minimum of 2 years of experience as an ML Engineer or Data Scientist is required, along with strong programming skills in Python and familiarity with libraries such as TensorFlow, PyTorch, and Pandas. Proficiency in SQL/NoSQL for database querying, experience with data extraction and ETL pipelines, and experience in NLP, Neural Networks, and Gen AI technologies are essential. Familiarity with transfer learning on LLM models and deploying ML models to cloud services such as AWS/Azure is preferred. Some experience in software development will be advantageous.

This is a full-time position with a preferred total work experience of 6 years. The work location is in person.
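A short, hypothetical sketch of the kind of legal-document preprocessing pipeline the posting describes; the input file, columns, and cleaning rules are illustrative assumptions:

```python
import re
import pandas as pd

# Hypothetical input: a CSV of raw legal documents with doc_id and raw_text columns.
docs = pd.read_csv("legal_docs.csv")

def clean_text(text: str) -> str:
    """Strip an assumed page-footer pattern, normalize whitespace, lowercase."""
    text = re.sub(r"Page \d+ of \d+", " ", text)
    text = re.sub(r"\s+", " ", text)
    return text.strip().lower()

docs["clean_text"] = docs["raw_text"].astype(str).map(clean_text)

# Drop near-empty documents so downstream models see consistent input.
docs = docs[docs["clean_text"].str.len() > 100]
docs.to_parquet("legal_docs_clean.parquet", index=False)
```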

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be joining Apexon, a digital-first technology services firm that specializes in accelerating business transformation and delivering human-centric digital experiences. At Apexon, we meet customers at every stage of the digital lifecycle and help them outperform their competition through speed and innovation. With a focus on AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering, and UX, we leverage our deep expertise in BFSI, healthcare, and life sciences to help businesses capitalize on the opportunities presented by the digital world. Our reputation is built on a comprehensive suite of engineering services, a commitment to solving our clients' toughest technology problems, and a dedication to continuous improvement. With backing from Goldman Sachs Asset Management and Everstone Capital, Apexon has a global presence with 15 offices and 10 delivery centers across four continents.

As part of our #HumanFirstDIGITAL initiative, you will be expected to excel in data analysis, VBA, macros, and Excel. Your responsibilities will include monitoring and supporting healthcare operations, addressing client queries, and communicating effectively with stakeholders. Proficiency in Python scripting, particularly with pandas, NumPy, and ETL pipelines, is essential. You should be able to independently understand client requirements and queries and demonstrate strong data analysis skills. Knowledge of Azure Synapse basics, Azure DevOps basics, Git, T-SQL, and SQL Server will be beneficial.

At Apexon, we are committed to diversity and inclusion, and our benefits and rewards program is designed to recognize your skills and contributions, enhance your learning and upskilling experience, and provide support for you and your family. As an Apexon Associate, you will have access to continuous skill-based development, opportunities for career growth, and comprehensive health and well-being benefits and support. In addition to a supportive work environment, we offer a range of benefits, including group health insurance covering a family of four, term insurance, accident insurance, paid holidays, earned leaves, paid parental leave, learning and career development opportunities, and employee wellness programs.
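A brief pandas/NumPy sketch of the healthcare-operations data checks this role involves; the file and column names are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical claims extract from a healthcare operations feed.
claims = pd.read_csv("claims_extract.csv", parse_dates=["service_date"])

# Basic data-quality checks of the kind this role describes.
issues = {
    "missing_member_id": int(claims["member_id"].isna().sum()),
    "negative_amounts": int((claims["billed_amount"] < 0).sum()),
    "future_service_dates": int((claims["service_date"] > pd.Timestamp.today()).sum()),
}
print(issues)

# Simple transformation: bucket billed amounts for reporting.
claims["amount_band"] = np.where(claims["billed_amount"] >= 10000, "high", "standard")
claims.to_parquet("claims_clean.parquet", index=False)
```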

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an AWS Data Engineer, you should have at least 3 years of experience in AWS data engineering. Your main responsibilities will include designing and building ETL pipelines and data lakes to automate the ingestion of both structured and unstructured data. You will need to be proficient with AWS big data technologies such as Redshift, S3, AWS Glue, Kinesis, Athena, DMS, EMR, and Lambda for serverless ETL processes. Knowledge of SQL and NoSQL query languages is essential, along with experience in batch and real-time pipelines. Excellent programming and debugging skills in either Scala or Python, as well as expertise in Spark, are required. You should have a good understanding of data lake formation, Apache Spark, and Python, and hands-on experience in deploying models. Experience with production migration processes is a must, and familiarity with Power BI visualization tools and connectivity is an advantage.

In this position, you will design, build, and operationalize large-scale enterprise data solutions and applications. You will also analyze, re-architect, and re-platform on-premise data warehouses to data platforms within the AWS cloud environment. Creating production data pipelines from ingestion to consumption using Python or Scala within the AWS big data architecture will be part of your routine. Additionally, you will conduct detailed assessments of current-state data platforms and develop suitable transition paths to the AWS cloud.

If you possess strong data engineering skills and are looking for a challenging role in AWS data engineering, this opportunity may be the right fit for you.
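A sketch of a serverless Glue ETL job of the kind described; the awsglue module is only available inside the Glue runtime, and the catalog database, table, and bucket names are assumptions:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, convert to a Spark DataFrame, filter.
dyf = glue_context.create_dynamic_frame.from_catalog(database="raw", table_name="events")
df = dyf.toDF().filter("event_type = 'purchase'")

# Write curated output back to S3 as Parquet.
df.write.mode("overwrite").parquet("s3://curated-bucket/purchases/")
job.commit()
```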

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You are a Generative AI Software Engineer with over 3 years of experience and strong expertise in Generative AI and Large Language Models (LLMs). Your role involves designing, developing, and implementing GenAI solutions customized to address various business challenges. You must excel in Python programming and have hands-on experience with Retrieval-Augmented Generation (RAG) and familiarity with vector databases.

Your responsibilities include using Python to design and implement scalable GenAI solutions, leading the design and architecture of AI-driven solutions, working with Large Language Models (LLMs) to build customized solutions, and developing AI models that handle both text and speech data. You will also apply similarity search algorithms in vector space, leverage NLP techniques and LLMs to build conversational chatbots, and utilize vector databases for efficient data management in AI applications. Additionally, you will implement Retrieval-Augmented Generation (RAG) pipelines, develop ETL pipelines for data flow and preparation, conduct experiments and evaluations to ensure optimal AI model performance, and ensure compliance with data privacy, security, and ethical standards. Your role also involves contributing to front-end development using React.js to build user interfaces for AI applications.

You should be open to working in an agile and flexible environment, possess excellent presentation skills to engage with business users, and provide expert solutions to project issues to ensure optimal service delivery and client satisfaction. A willingness to learn new skills and technologies will be beneficial in this role.
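A minimal retrieval sketch for the RAG pipelines mentioned above, using sentence-transformers and FAISS; the model name and documents are illustrative, and a production system would likely use a managed vector database:

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy document store; in practice these would come from an ETL pipeline.
docs = [
    "Refunds are processed within 7 business days.",
    "Premium support is available on the enterprise plan.",
    "Passwords can be reset from the account settings page.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(docs, normalize_embeddings=True)

# Cosine similarity via inner product on normalized vectors.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

query = model.encode(["how do I get my money back?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)

# The retrieved passages would be stuffed into the LLM prompt as context.
context = "\n".join(docs[i] for i in ids[0])
print(context)
```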

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a member of the Providence Cybersecurity (CYBR) team, you will play a crucial role in safeguarding all information pertaining to caregivers, affiliates, and confidential business data. Your responsibilities will include collaborating with Product Management to assess use cases, functional requirements, and technical specifications. You will conduct data discovery and analysis to identify the crucial data from source systems needed to meet business needs, and you will develop conceptual and logical data models to validate requirements, highlighting essential entities and relationships and documenting assumptions and risks.

Your role will also involve translating logical data models into physical data models, creating source-to-target mapping documentation, and defining transformation rules. You will support engineering teams in implementing physical data models, applying transformation rules, and ensuring compliance with data governance, security frameworks, and encryption mechanisms in cloud environments. Furthermore, you will lead a team of data engineers in designing, developing, and implementing cloud-based data solutions using Azure Databricks and Azure native services.

The ideal candidate holds a Bachelor's degree in a related field such as computer science, along with certifications in data engineering or cybersecurity, or equivalent experience. Experience working with large and complex data environments, expertise in data integration patterns and tools, and a solid understanding of cloud computing concepts and distributed computing principles are essential. Proficiency in Databricks, Azure Data Factory (ETL pipelines), and the Medallion Architecture, along with hands-on experience designing and implementing data solutions using Azure cloud services, is required. Strong skills in SQL, Python, Spark, data modelling techniques, dimensional modelling, and data warehousing concepts are crucial for this role. Relevant certifications such as Microsoft Certified: Azure Solutions Architect Expert or Microsoft Certified: Azure Data Engineer Associate are highly desirable.

Excellent problem-solving, analytical, leadership, and communication skills are essential for communicating technical concepts and strategies to stakeholders at all levels. You should also be able to lead cross-functional teams, drive consensus, and achieve project goals in a dynamic, fast-paced environment.
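A small sketch of a bronze-to-silver step in the Medallion Architecture mentioned above, assuming a Databricks/Delta runtime and hypothetical paths and columns:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw ingested records, stored as-is in Delta format.
bronze = spark.read.format("delta").load("/mnt/datalake/bronze/caregiver_events")

# Silver: deduplicated, typed, and quality-checked.
silver = (
    bronze.dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("caregiver_id").isNotNull())
)

silver.write.format("delta").mode("overwrite").save("/mnt/datalake/silver/caregiver_events")
```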

Posted 2 days ago

Apply

2.0 - 6.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

Golden Eagle IT Technologies Pvt. Ltd. is looking for a skilled Data Engineer with 2 to 4 years of experience to join the team in Indore. The ideal candidate should have a solid background in data engineering, big data technologies, and cloud platforms. As a Data Engineer, you will be responsible for designing, building, and maintaining efficient, scalable, and reliable data pipelines.

You will be expected to develop and maintain ETL pipelines using tools like Apache Airflow, Spark, and Hadoop, and to design and implement data solutions on AWS, leveraging services such as DynamoDB, Athena, Glue Data Catalog, and SageMaker. Working with messaging systems like Kafka to manage data streaming and real-time data processing will also be part of your responsibilities. Proficiency in Python and Scala for data processing, transformation, and automation is essential. Ensuring data quality and integrity across multiple sources and formats, collaborating with data scientists, analysts, and other stakeholders to understand data needs and deliver solutions, optimizing and tuning data systems for performance and scalability, and implementing best practices for data security and compliance are all key aspects of the role.

Preferred skills include experience with infrastructure-as-code tools like Pulumi, familiarity with GraphQL for API development, and exposure to machine learning and data science workflows, particularly using SageMaker.

Qualifications for this position include a Bachelor's degree in Computer Science, Information Technology, or a related field, along with 2-4 years of experience in data engineering or a similar role, proficiency in AWS cloud services and big data technologies, strong programming skills in Python and Scala, knowledge of data warehousing concepts and tools, and excellent problem-solving and communication skills.
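A minimal Apache Airflow DAG sketch of the daily ETL orchestration this posting mentions; task names and logic are placeholders, and the `schedule` argument assumes Airflow 2.4+:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from S3")  # placeholder for real extract logic

def transform():
    print("clean and enrich with Spark")  # placeholder

def load():
    print("publish curated tables")  # placeholder

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```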

Posted 2 days ago

Apply

8.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role

We're building a next-generation Data Analytics Engine that transforms raw market and historical data into actionable intelligence for the electronics supply chain industry. Our platform ingests, processes, and analyzes high-volume data across suppliers, parts, and trends, powering real-time insights and ML-driven applications. We are looking for a highly experienced Lead or Staff Data Engineer to help us shape and scale our core data infrastructure. The ideal candidate brings a proven track record of designing and delivering scalable ETL pipelines and real-time data systems in AWS and open-source environments (e.g., Airflow, Spark, Kafka). This role requires deep technical ownership and leadership to enhance our architecture, implement best practices, and guide junior engineers.

Key Responsibilities
- Design, implement, and optimize scalable ETL pipelines for billions of records using AWS-native tools (Glue, Step Functions, S3, PostgreSQL).
- Migrate and evolve existing pipelines toward open-source orchestration tools such as Apache Airflow and Kafka.
- Lead data lake and data warehouse architecture design for analytics and ML workflows.
- Own and maintain CI/CD workflows, ensuring reliability, observability, and deployment automation.
- Implement data validation, schema evolution, anomaly detection, and data quality checks.
- Contribute to Infrastructure as Code (Terraform, AWS CDK) for reproducible deployments.
- Provide technical mentorship, make architectural decisions, and establish best practices.

Required Qualifications
- 8+ years of experience as a Data Engineer or in a similar role with production ownership.
- Expertise in AWS tools: Glue, Step Functions, S3, Athena, Lambda.
- Deep knowledge of the open-source data stack: Apache Airflow, Spark, Kafka, dbt.
- Strong Python programming skills for modular workflows and automation.
- Expert-level SQL and experience with PostgreSQL (or similar databases).
- Experience with CI/CD tools (GitHub Actions, Jenkins, etc.).
- Familiarity with Infrastructure as Code: Terraform, AWS CDK, or CloudFormation.
- Ability to mentor engineers and lead architectural decisions.

Preferred Qualifications
- Background in ML/AI pipelines or feature engineering.
- Experience with serverless technologies or containerized deployments (Docker, Lambda).
- Familiarity with data observability tools and alerting systems.

Education
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related technical field (or equivalent practical experience).

What We Offer
- Opportunity to work on impactful, real-world problems in supply chain intelligence.
- Direct mentorship from experienced engineers and AI product leads.
- A flexible, startup-friendly environment where your contributions matter.
- Competitive compensation and opportunities for career growth.
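A lightweight, illustrative sketch of the record-level data validation and anomaly checks listed in the responsibilities; the schema and rules are assumptions, not the company's actual checks:

```python
from dataclasses import dataclass

# Assumed schema for supply-chain part records.
EXPECTED_SCHEMA = {"part_id": str, "supplier": str, "unit_price": float}

@dataclass
class QualityReport:
    total: int = 0
    bad_schema: int = 0
    bad_price: int = 0

def validate(records) -> QualityReport:
    """Count schema violations and a simple price anomaly per record."""
    report = QualityReport()
    for rec in records:
        report.total += 1
        if set(rec) != set(EXPECTED_SCHEMA) or not all(
            isinstance(rec[k], t) for k, t in EXPECTED_SCHEMA.items()
        ):
            report.bad_schema += 1
        elif rec["unit_price"] <= 0:  # simple anomaly rule
            report.bad_price += 1
    return report

sample = [
    {"part_id": "R-100", "supplier": "Acme", "unit_price": 0.12},
    {"part_id": "R-101", "supplier": "Acme", "unit_price": -1.0},
]
print(validate(sample))
```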

Posted 2 days ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Chennai

Work from Office

We are seeking a Business Analyst specialized in Revenue Cycle Management (RCM) analytics, data quality, and business decision support. This role is critical in transforming raw healthcare financial data into clean, consistent, and decision-driving analytics.

Key Responsibilities:

Data Quality & Dashboard Accuracy
- Own the accuracy and consistency of operational and financial dashboards (Power BI / Tableau / Excel).
- Perform root cause analysis of data mismatches, anomalies, and KPI discrepancies.
- Define, document, and enforce KPI calculation standards across departments and systems.
- Work closely with technical teams to manage ETL pipelines and data transformations for RCM reporting.
- Develop and execute data validation checks, reconciliations, and quality assurance scripts (SQL / Python preferred).
- Collaborate with internal stakeholders to ensure source-to-dashboard data alignment and integrity.

Business Analysis & Decision Support
- Analyze claims, payments, denials, AR, collections, and cash flow data to identify trends and improvement areas.
- Provide executive-level insights to support decisions on staffing, process changes, financial forecasting, and revenue growth.
- Work with leadership to track KPIs, monitor RCM health, and proactively recommend corrective actions.
- Develop predictive models and forecasting tools for revenue cycle performance (optional, preferred skill).
- Identify manual process bottlenecks and recommend automation opportunities (RPA, workflow redesign).
- Partner with clinical and billing teams to translate operational needs into meaningful data reports.
- Monitor payer behavior, denials, underpayments, and write-offs to support strategy adjustments.

This role primarily supports US hours (late shift / hybrid shift as needed).

Contact: Ms. Farjana Shajahan - farjanas@billedright.com - 8148794767. If you are interested in the job, kindly send your resume to the above-mentioned email.

Billed Right does not discriminate on the basis of race, sex, color, religion, age, national origin, marital status, disability, veteran status, genetic information, sexual orientation, gender identity, or any other reason prohibited by law in the provision of employment opportunities and benefits. You can apply for other job opportunities at the link below: https://billedright.zohorecruit.in/jobs/Careers
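A hypothetical sketch of a source-to-dashboard reconciliation check of the kind this role owns; connection details, tables, and the tolerance threshold are all assumptions:

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical RCM source database.
engine = create_engine("postgresql://user:pass@rcm-db:5432/rcm")

# KPI as computed from the source system (None if no rows matched).
source_total = pd.read_sql(
    "SELECT SUM(paid_amount) AS total FROM payments WHERE posted_date >= '2024-01-01'",
    engine,
)["total"].iloc[0] or 0.0

# The same KPI as exported from the dashboard layer.
dashboard_total = pd.read_csv("dashboard_export.csv")["paid_amount"].sum()

# Flag discrepancies beyond a 1% tolerance for investigation.
if abs(source_total - dashboard_total) > 0.01 * source_total:
    print(f"Mismatch: source={source_total}, dashboard={dashboard_total}")
else:
    print("Dashboard reconciles with source within tolerance.")
```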

Posted 2 days ago

Apply

2.0 - 6.0 years

2 - 6 Lacs

Mumbai, Maharashtra, India

On-site

- Manage ETL pipelines, data engineering operations, and cloud infrastructure.
- Experience in configuring data exchange and transfer methods.
- Experience in orchestrating ETL pipelines with multiple tasks, triggers, and dependencies.
- Strong proficiency with Python and Apache Spark; intermediate or better proficiency with SQL; experience with AWS S3 and EC2, and Databricks.
- Ability to communicate efficiently and translate ideas with technical stakeholders in IT and Data Science.
- Passionate about designing data infrastructure and eager to contribute ideas to help build robust data platforms.

Posted 2 days ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Gurugram

Work from Office

Job Title: Data Engineer
Job Type: Full-time
Department: Data Engineering / Data Science
Reports To: Data Engineering Manager / Chief Data Officer

About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets.
- Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue.
- Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data.
- Collaborate with data scientists and analysts to ensure data requirements and quality standards are met.
- Design and implement data models, schemas, and architectures for data lakes and data warehouses.
- Automate manual data processes to improve efficiency and data processing speed.
- Ensure data security, privacy, and compliance with industry standards and regulations.
- Continuously evaluate and integrate new tools and technologies to enhance data engineering processes.
- Troubleshoot and resolve data quality and performance issues.
- Participate in code reviews and contribute to a culture of best practices in data engineering.

Requirements:
- 3-10 years of experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
- Proficiency in Python, Java, or Scala for data processing and scripting.
- Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery).
- Experience working with data modeling, data lakes, and data pipelines.
- Solid understanding of data governance, data privacy, and security best practices.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile development environment.
- Excellent communication skills and the ability to work cross-functionally.
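As an illustration of the Kafka-plus-Spark streaming work such roles involve, a minimal Structured Streaming sketch; broker, topic, and paths are hypothetical, and the Kafka connector package is assumed to be on the Spark classpath:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers bytes; cast the value and keep the broker timestamp.
events = raw.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("event_ts"),
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://curated-bucket/events/")
    .option("checkpointLocation", "s3a://curated-bucket/checkpoints/events/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```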

Posted 2 days ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer at our company, you will design and implement Azure Synapse Analytics solutions for data processing and reporting. Your role will involve optimizing ETL pipelines, SQL pools, and Synapse Spark workloads to ensure efficient data processing, and upholding data quality, security, and governance best practices while collaborating with business stakeholders to develop data-driven solutions. You will also mentor a team of data engineers.

To excel in this role, you should have 6-10 years of experience in Data Engineering, BI, or Cloud Analytics. Expertise in Azure Synapse, Azure Data Factory, SQL, and ETL processes is essential; experience with Fabric is strongly desirable; and strong leadership, problem-solving, and stakeholder management skills are crucial. Knowledge of Power BI, Python, or Spark would be a plus. You should also have deep knowledge of data modelling techniques, design and development of ETL pipelines, Azure resource cost management, and proficiency in writing complex SQL queries.

Furthermore, you are expected to have knowledge and experience in master data/metadata management, including data governance, data quality, data catalogues, and data security. You should be able to manage a complex and rapidly evolving business and actively lead, develop, and support team members. As an Agile practitioner and advocate, you must be highly dynamic in your approach, adapting to constant changes in risks and forecasts. Your role will involve ensuring data integrity within the dimensional model by validating data and identifying inconsistencies, and you will work closely with Product Owners and data engineers to translate business needs into effective dimensional models.

This position offers the opportunity to lead AI-driven data integration projects in real estate technology, work in a collaborative and innovative environment with global teams, and receive competitive compensation, career growth opportunities, and exposure to cutting-edge technologies. Ideally, you should hold a Bachelor's or Master's degree in software engineering, Computer Science, or a related area.

Our company offers a range of benefits, including hybrid working arrangements, an annual performance-related bonus, flexi days, medical insurance coverage for extended family members, and an engaging, fun, and inclusive culture. MRI Software is dedicated to delivering innovative applications and hosted solutions that empower real estate companies to elevate their business. With a strong focus on meeting the unique needs of real estate businesses globally, we have grown to include offices across various countries, with over 4,000 team members supporting our clients. MRI is proud to be an Equal Employment Opportunity employer.

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

As a Senior Data Product Owner, you will be responsible for preparing, coordinating, and overseeing the delivery of projects focused on data and artificial intelligence. Your role will involve ensuring the design and delivery of innovative, data-driven solutions in collaboration with technical teams, business teams, and clients. Your main responsibilities will include scoping data and AI requirements, defining functional specifications, overseeing design and development, managing projects using Agile methods, and ensuring the quality and performance of the delivered solutions. You will also be the primary point of contact for clients on their data/AI projects, ensuring strategic alignment between their objectives and the proposed solutions.

The ideal profile for this position includes a degree in engineering, computer science, or a field related to data/AI, with at least 5 years of experience managing data or AI projects, preferably in an Agile environment. You should have expertise in data and AI, a good understanding of the associated tools and concepts, and strong product management and communication skills. Professional proficiency in English is also required for interacting with international clients and teams.

By joining EY FABERNOVEL, you will have the opportunity to work on large-scale projects, benefit from career support, and enjoy attractive perks such as access to exclusive offers, meal allowances, remote-working options and transport reimbursement, as well as a stimulating work environment conducive to continuous learning.

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You are a Gen AI Senior Software Engineer with over 5 years of experience, specializing in Generative AI and Large Language Models (LLMs). Your main responsibility is to design, develop, and implement GenAI solutions tailored to address business challenges, drawing on your expertise in Python for AI/ML solutions. Your key technical skills include proficiency in Python programming, hands-on experience with Retrieval-Augmented Generation (RAG), and familiarity with vector databases. Experience with React.js and an openness to learning new skills and technologies are beneficial.

In this role, you will design and implement scalable GenAI solutions using Python, lead the architecture of AI-driven solutions, and develop customized solutions using Large Language Models (LLMs) for various applications. You will create AI models for text and speech data, optimize AI model performance using similarity search algorithms, and build conversational chatbots using advanced NLP techniques. Furthermore, you will utilize vector databases to store and manage data for AI applications, implement Retrieval-Augmented Generation (RAG) pipelines, develop and maintain ETL pipelines for efficient data flow, and conduct experiments to evaluate AI model performance. You will also ensure that AI solutions comply with data privacy, security, and ethical standards, while contributing to front-end development using React.js to create user-friendly interfaces for AI applications.

Soft skills required for this role include adaptability to an agile environment, excellent presentation skills to engage with business users, and the ability to provide expert advice to resolve project issues and ensure client satisfaction.

Posted 3 days ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are an experienced Azure Databricks Engineer who will design, develop, and maintain scalable data pipelines and support data infrastructure in an Azure cloud environment. Your key responsibilities will include designing ETL pipelines using Azure Databricks, building robust data architectures on Azure, collaborating with stakeholders to define data requirements, optimizing data pipelines for performance and reliability, implementing data transformation and cleansing processes, managing Databricks clusters, and leveraging Azure services for data orchestration and storage.

You must have 5-10 years of experience in data engineering or a related field, with extensive hands-on experience in Azure Databricks and Apache Spark. Strong knowledge of Azure cloud services such as Azure Data Lake, Data Factory, Azure SQL, and Azure Synapse Analytics is required. Experience with Python, Scala, or SQL for data manipulation, ETL frameworks, Delta Lake and Parquet formats, Azure DevOps, CI/CD pipelines, big data architecture, and distributed systems is essential. Knowledge of data modeling, performance tuning, and optimization of big data solutions is expected, along with problem-solving skills and the ability to work in a collaborative environment.

Preferred qualifications include experience with real-time data streaming tools, Azure certifications, machine learning frameworks and their integration with Databricks, and data visualization tools like Power BI. A Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field is required for this role.

Posted 3 days ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Bengaluru

Work from Office

Job Title: Data Engineer
Job Type: Full-time
Department: Data Engineering / Data Science
Reports To: Data Engineering Manager / Chief Data Officer

About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets.
- Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue.
- Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data.
- Collaborate with data scientists and analysts to ensure data requirements and quality standards are met.
- Design and implement data models, schemas, and architectures for data lakes and data warehouses.
- Automate manual data processes to improve efficiency and data processing speed.
- Ensure data security, privacy, and compliance with industry standards and regulations.
- Continuously evaluate and integrate new tools and technologies to enhance data engineering processes.
- Troubleshoot and resolve data quality and performance issues.
- Participate in code reviews and contribute to a culture of best practices in data engineering.

Requirements:
- 3-10 years of experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
- Proficiency in Python, Java, or Scala for data processing and scripting.
- Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery).
- Experience working with data modeling, data lakes, and data pipelines.
- Solid understanding of data governance, data privacy, and security best practices.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile development environment.
- Excellent communication skills and the ability to work cross-functionally.

Posted 3 days ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Pune

Work from Office

Job Title: Data Engineer
Job Type: Full-time
Department: Data Engineering / Data Science
Reports To: Data Engineering Manager / Chief Data Officer

About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets.
- Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue.
- Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data.
- Collaborate with data scientists and analysts to ensure data requirements and quality standards are met.
- Design and implement data models, schemas, and architectures for data lakes and data warehouses.
- Automate manual data processes to improve efficiency and data processing speed.
- Ensure data security, privacy, and compliance with industry standards and regulations.
- Continuously evaluate and integrate new tools and technologies to enhance data engineering processes.
- Troubleshoot and resolve data quality and performance issues.
- Participate in code reviews and contribute to a culture of best practices in data engineering.

Requirements:
- 3-10 years of experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
- Proficiency in Python, Java, or Scala for data processing and scripting.
- Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery).
- Experience working with data modeling, data lakes, and data pipelines.
- Solid understanding of data governance, data privacy, and security best practices.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile development environment.
- Excellent communication skills and the ability to work cross-functionally.

Posted 3 days ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Noida

Work from Office

Job Title: Data Engineer
Job Type: Full-time
Department: Data Engineering / Data Science
Reports To: Data Engineering Manager / Chief Data Officer

About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets.
- Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue.
- Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data.
- Collaborate with data scientists and analysts to ensure data requirements and quality standards are met.
- Design and implement data models, schemas, and architectures for data lakes and data warehouses.
- Automate manual data processes to improve efficiency and data processing speed.
- Ensure data security, privacy, and compliance with industry standards and regulations.
- Continuously evaluate and integrate new tools and technologies to enhance data engineering processes.
- Troubleshoot and resolve data quality and performance issues.
- Participate in code reviews and contribute to a culture of best practices in data engineering.

Requirements:
- 3-10 years of experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
- Proficiency in Python, Java, or Scala for data processing and scripting.
- Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery).
- Experience working with data modeling, data lakes, and data pipelines.
- Solid understanding of data governance, data privacy, and security best practices.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile development environment.
- Excellent communication skills and the ability to work cross-functionally.

Posted 3 days ago

Apply

4.0 - 9.0 years

8 - 18 Lacs

Chennai, Coimbatore, Vellore

Work from Office

We at Blackstraw.ai are organizing a walk-in interview drive for Data Engineers with a minimum of 3 years' experience.

Data Engineer: minimum 3 years' experience in Python, Spark, PySpark, Hadoop, Hive, Snowflake, AWS, and Databricks.

We are looking for a Data Engineer to join our team. You will use various methods to transform raw data into useful data systems, and you'll strive for efficiency by aligning data systems with business goals. To succeed in this position, you should have strong analytical skills and the ability to combine data from different sources. Data engineer skills also include familiarity with several programming languages and an understanding of machine learning methods. If you are detail-oriented, with excellent organizational skills and experience in this field, we'd like to hear from you.

Job Requirements:
- Participate in the customer's system design meetings and collect the functional/technical requirements.
- Meet customer expectations on real-time data integrity and implement efficient solutions.
- A clear understanding of Python, Spark, PySpark, Hive, Kafka, and RDBMS architecture.
- Experience in writing Spark/Python programs and SQL queries.
- Suggest and implement best practices in data integration.
- Guide the QA team in defining system integration tests as needed.
- Split the planned deliverables into tasks and assign them to the team.

Good to have: knowledge of CI/CD concepts and Apache Kafka.

Key traits:
- Excellent communication skills.
- Self-motivated and willing to work as part of a team.
- Able to collaborate and coordinate in a remote environment.
- A proactive problem solver who tackles challenges head-on.

Important instructions: Do carry a hard copy of your resume, one passport photograph, and a government identity proof for ease of access to our premises. Please note: do not carry any electronic devices apart from your mobile phone at the office premises.

Please send your resume to chennai.walkin@blackstraw.ai and fill out the registration form at https://forms.gle/LtNYvGM8pbxMifXw6. Preference will be given to immediate joiners or those who can join within 10-15 days.

Posted 3 days ago

Apply

6.0 - 11.0 years

15 - 20 Lacs

Pune

Work from Office

Role & Responsibilities

Key Responsibilities:
- Lead Data Engineering Projects: Oversee the development and deployment of end-to-end data ingestion pipelines using Azure Databricks, Apache Spark, and related technologies, ensuring scalability, performance, and efficiency.
- Design & Architecture: Design high-performance, resilient, and scalable data architectures for data ingestion and processing using best practices for Azure Databricks and Spark.
- Team Leadership: Provide technical guidance and mentorship to a team of data engineers, fostering a culture of collaboration, continuous learning, and innovation.
- Collaboration: Work closely with data scientists, business analysts, and other stakeholders to understand data requirements and ensure smooth integration of various data sources into the data lake/warehouse.
- Optimization & Performance Tuning: Ensure data pipelines are optimized for speed, reliability, and cost efficiency in an Azure environment. Conduct performance tuning, troubleshooting, and debugging of Spark jobs and Databricks clusters.
- Code Quality & Best Practices: Enforce and advocate for best practices in coding standards, version control, testing, and documentation.
- Integration with Azure Services: Work with other Azure services such as Azure Data Lake Storage, Azure SQL Data Warehouse, Azure Synapse Analytics, and Azure Blob Storage to integrate data seamlessly.
- Continuous Improvement: Stay current with industry trends and emerging technologies in data engineering and recommend improvements to the team's tools and processes.
- Ensure Data Quality: Implement data validation and data quality checks as part of the data ingestion process to ensure the consistency, accuracy, and integrity of ingested data.
- Risk & Issue Management: Proactively identify risks and blockers and resolve complex technical issues in a timely and effective manner.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience: 8+ years of experience in data engineering or a related field. Strong hands-on experience with Azure Databricks, Spark, Python/Scala, CI/CD, and scripting for data processing. Experience working with multiple file formats such as Parquet, Delta, and Iceberg. Knowledge of Kafka or similar streaming technologies for real-time data ingestion. Experience with data governance and data security in Azure. Proven track record of building large-scale data ingestion and ETL pipelines in cloud environments, specifically Azure. Deep understanding of Azure Data Services (e.g., Azure Blob Storage, Azure Data Lake, Azure SQL Data Warehouse, Event Hubs, Functions, etc.). Familiarity with data lakes, data warehouses, and modern data architectures. Experience with CI/CD pipelines, version control (Git), Jenkins, and agile methodologies. Understanding of cloud infrastructure and architecture principles, especially within Azure.
- Technical Skills: Expert-level proficiency in Spark and Spark Streaming, including optimization, debugging, and troubleshooting of Spark jobs. Solid knowledge of Azure Databricks for scalable, distributed data processing. Strong coding skills in Python and Scala for data processing. Experience working with SQL, especially on large datasets. Knowledge of data formats such as Iceberg, Parquet, ORC, and Delta Lake.
- Leadership Skills: Proven ability to lead and mentor a team of data engineers, ensuring adherence to best practices. Excellent communication skills, capable of interacting with both technical and non-technical stakeholders. Strong problem-solving, analytical, and troubleshooting abilities.

Posted 3 days ago

Apply

7.0 - 11.0 years

15 - 25 Lacs

Hyderabad

Hybrid

Role Purpose: The Senior Data Engineer will support and enable the Data Architecture and the Data Strategy, supporting solution architecture and engineering for data ingestion and modelling challenges. The role will support the deduplication of enterprise data tools, working with the Lonza Data Governance Board, Digital Council, and IT to drive towards a single Data and Information Architecture. This will be a hands-on engineering role with a focus on business and digital transformation. The role will be responsible for managing and maintaining the Data Architecture and the solutions that deliver the platform, with operational support and troubleshooting. The Senior Data Engineer will also manage (no reporting-line changes, but in terms of day-to-day delivery) and coordinate the Data Engineering team members (internal and external) working on the various project implementations.

Experience:
- 7-10 years' experience with digital transformation and data projects.
- Experience in designing, delivering, and managing data infrastructures.
- Proficiency in using cloud services (Azure) for data engineering, storage, and analytics.
- Strong SQL and NoSQL experience.
- Data modelling.
- Hands-on experience developing pipelines and setting up architectures in Azure Fabric.
- Team management experience (internal and external resources).
- Good understanding of data warehousing, data virtualization, and analytics.
- Experience working with data analysts, data scientists, and BI teams to deliver on data requirements.
- Data catalogue experience is a plus.
- ETL pipeline design is a plus.
- Python development skills are a plus.
- Real-time data ingestion (e.g., Kafka).

Licenses or Certifications: Beneficial: ITIL, PM, CSM, Six Sigma, Lean.

Knowledge: Good understanding of integration, ETL, API, and data sharing concepts. Understanding or awareness of visualization tools is a plus. Knowledge and understanding of relevant legal and regulatory requirements, such as 21 CFR Part 11, the EU General Data Protection Regulation, the Health Insurance Portability and Accountability Act (HIPAA), and the GxP validation process, would be a plus.

Skills: The position requires a pragmatic leader with sound knowledge of data, integration, and analytics; excellent written and verbal communication skills, interpersonal and collaborative skills, and the ability to communicate technical concepts to non-technical audiences. You should exhibit excellent analytical skills, the ability to manage and contribute to multiple projects under strict timelines, and the ability to work well in a demanding, dynamic environment and meet overall objectives. Project management skills (scheduling and resource management) are a plus, as is the ability to motivate cross-functional, interdisciplinary teams to achieve tactical and strategic goals. Data catalogue, project, and team management skills are a plus. Strong SAP skills are a plus.

Posted 3 days ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an experienced Data & Analytics Project Manager, you will lead the end-to-end execution of data and analytics projects. Your expertise in data integration, analytics, and cloud platforms such as AWS and Azure will be essential to seamless delivery. You will collaborate with cross-functional teams, drive innovation, and optimize data-driven decision-making.

Our projects use a variety of technologies, including internal custom-built solutions, packaged software, ERP solutions, data warehouses, Software as a Service, cloud-based solutions, and BI tools. You will lead project teams from initiation to close, delivering effective solutions that meet approved customer and business needs, and you will be accountable for delivering those solutions within budget and schedule commitments while maintaining required quality and compliance standards.

Your main focus will be leading end-to-end project management for Data Engineering & Analytics initiatives. This involves:
- Understanding and managing data pipeline development, DWH design, and BI reporting needs at a high level.
- Collaborating with technical teams on Snowflake-based solutions, ETL pipelines, and data modeling concepts.
- Overseeing project timelines, risks, and dependencies using Agile/Scrum methodologies.
- Facilitating communication between stakeholders to ensure alignment on Data Engineering, Data Analytics, and Power BI initiatives.
- Working with DevOps and engineering teams to streamline CI/CD pipelines and deployment processes.
- Supporting metadata management and data mesh concepts to maintain an efficient data ecosystem.
- Working closely with Data Engineers, BI Analysts, and Business Teams to define project scope, objectives, and success criteria.
- Ensuring data governance, security, and compliance best practices are followed to protect data integrity.

Key responsibilities:
- Overseeing the full lifecycle of data and analytics projects to ensure scope, quality, and timelines are met.
- Acting as the primary liaison with customers, architects, and internal teams to align on execution strategies.
- Managing ETL pipelines, data warehousing, visualization tools (Tableau, Power BI), and cloud-based big data solutions (a brief illustrative sketch of such a pipeline follows this listing).
- Identifying potential risks, scope changes, and mitigation strategies to keep project execution on track.
- Guiding workstream leads, supporting PMO updates, and maintaining transparent communication with all stakeholders.
- Driving innovation and process enhancements in data engineering, BI, and analytics workflows.

To excel in this role, you should have at least 8 years of experience leading data and analytics projects, strong expertise in data integration tools, ETL processes, and big data technologies, and hands-on experience with cloud platforms and visualization tools. A proven ability to mentor teams, manage stakeholders, and drive project success is crucial, as are excellent communication skills and the ability to engage both business and IT executives. Certifications such as PMP, Agile, or Data Analytics will be advantageous.
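
For illustration only, here is a minimal Python sketch of the kind of extract-transform-load step the pipelines referenced above perform. The file names, fields, and helper functions are hypothetical placeholders, not part of any specific stack named in this listing.

# Illustrative only: a tiny ETL step, file-to-file, standing in for
# source-system-to-warehouse data movement.
import csv
from datetime import date

def extract(path):
    # Read raw rows from a source file (stand-in for an upstream system).
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Cleanse values and stamp a load date, as a warehouse ETL job might.
    return [
        {"customer": r["customer"].strip().title(),
         "amount": float(r["amount"]),
         "load_date": date.today().isoformat()}
        for r in rows if r.get("amount")
    ]

def load(rows, path):
    # Write curated rows to a target file (stand-in for a warehouse table).
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["customer", "amount", "load_date"])
        writer.writeheader()
        writer.writerows(rows)

load(transform(extract("raw_orders.csv")), "curated_orders.csv")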

Posted 5 days ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Job title: Senior Software Engineer
Experience: 5 - 8 years
Primary skills: Python, Spark or PySpark, DWH ETL
Database: SparkSQL or PostgreSQL
Secondary skills: Databricks (Delta Lake, Delta tables, Unity Catalog)
Work Model: Hybrid (twice weekly)
Cab Facility: Yes
Work Timings: 10am to 7pm
Interview Process: 3 rounds (3rd round face-to-face, mandatory)
Work Location: Karle Town Tech Park, Nagawara, Hebbal, Bengaluru 560045

About Business Unit:
The Architecture Team plays a pivotal role in the end-to-end design, governance, and strategic direction of product development within Epsilon People Cloud (EPC). As a centre of technical excellence, the team ensures that every product feature is engineered to meet the highest standards of scalability, security, performance, and maintainability. Its responsibilities span architectural ownership of critical product features, techno-product leadership, architectural governance, and ensuring systems are built with scalability, security, and compliance in mind. The team designs multi-cloud and hybrid-cloud solutions that support seamless integration across diverse environments and contributes significantly to interoperability between EPC products and the broader enterprise ecosystem. It fosters innovation and technical leadership while actively collaborating with key partners to align technology decisions with business goals. Through this, the Architecture Team ensures the delivery of future-ready, enterprise-grade, efficient, performant, secure, and resilient platforms that form the backbone of Epsilon People Cloud.

Why we are looking for you:
- You have experience working as a Data Engineer with strong database fundamentals and an ETL background.
- You have experience working in a data warehouse environment and dealing with data volumes of terabytes and above.
- You have experience working with relational data systems, preferably PostgreSQL and SparkSQL.
- You have excellent design and coding skills and can mentor a junior engineer in the team.
- You have excellent written and verbal communication skills.
- You are experienced and comfortable working with global clients.
- You work well with teams and can work with multiple collaborators, including clients, vendors, and delivery teams.
- You are proficient with bug tracking and test management toolsets that support development processes such as CI/CD.

What you will enjoy in this role:
- As part of the Epsilon Technology practice, the pace of the work matches the fast-evolving demands of the industry; you will get to work on the latest tools and technology and deal with data of petabyte scale.
- Work on homegrown frameworks on Spark, Airflow, etc.
- Exposure to the Digital Marketing domain, where Epsilon is a market leader.
- Understand and work closely with consumer data across different segments, providing insights into consumer behaviours and patterns that inform digital ad strategies.
- As part of a dynamic team, you will have opportunities to innovate and put your recommendations forward, using existing standard methodologies and defining new ones as industry standards evolve.
- Opportunity to work with Business, System, and Delivery teams to build a solid foundation in the Digital Marketing domain.
- An open and transparent environment that values innovation and efficiency.

What will you do?
This role requires deep expertise in Spark and strong proficiency in ETL, SQL, and modern data engineering practices.
- Develop a deep understanding of the business context under which your team operates and present feature recommendations in an agile working environment.
- Lead, design, and code solutions on and off database to ensure application access, enabling data-driven decision making for the company's multi-faceted ad serving operations.
- Work closely with Engineering resources across the globe to ensure enterprise data warehouse solutions and assets are actionable, accessible, and evolving in lockstep with the needs of the ever-changing business model.
- Design, develop, and manage ETL/ELT pipelines in Databricks using PySpark/SparkSQL, integrating various data sources to support business operations (a brief illustrative sketch follows the qualifications below).
- Lead in the areas of solution design, code development, quality assurance, data modelling, and business intelligence.
- Mentor junior engineers in the team.
- Stay abreast of developments in the data world in terms of governance, quality, and performance optimization.
- Run effective client meetings, understand deliverables, and drive successful outcomes.

Qualifications:
- Bachelor's Degree in Computer Science or an equivalent degree is required.
- 5 - 8 years of data engineering experience, with expertise using Apache Spark and databases (preferably Databricks) in marketing technologies and data management, and technical understanding in these areas.
- Ability to monitor and tune Databricks workloads to ensure high performance and scalability, adapting to business needs as required.
- Solid experience in basic and advanced SQL writing and tuning.
- Experience with Python.
- Solid understanding of CI/CD practices, with experience in Git for version control and integration on Spark data projects.
- Good understanding of Disaster Recovery and Business Continuity solutions.
- Experience with scheduling applications with complex interdependencies, preferably Airflow.
- Good experience working with geographically and culturally diverse teams.
- Understanding of data management concepts in both traditional relational databases and big data lakehouse solutions such as Apache Hive, AWS Glue, or Databricks.
- Excellent written and verbal communication skills; ability to handle complex products.
- Good communication and problem-solving skills, with the ability to manage multiple priorities, diagnose and solve problems quickly, multi-task, prioritize, and adapt to changing priorities; good time management.
- Good to have: knowledge of cloud platforms (cloud security) and familiarity with Terraform or other infrastructure-as-code tools.
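
For illustration only, here is a minimal sketch of the kind of Databricks ETL/ELT job described above, using PySpark and SparkSQL to build a Delta table. The catalog, table, and column names (raw.orders, analytics.daily_revenue, order_ts, amount, status) are hypothetical placeholders, not Epsilon systems.

# Illustrative only: a small batch aggregate, extract -> transform -> load.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_revenue_etl").getOrCreate()

# Extract: read a hypothetical raw orders table.
raw = spark.read.table("raw.orders")

# Transform (DataFrame API): keep completed orders, aggregate revenue per day.
daily = (
    raw.filter(F.col("status") == "completed")
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date")
       .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# The same transform expressed in SparkSQL:
raw.createOrReplaceTempView("orders_v")
daily_sql = spark.sql("""
    SELECT to_date(order_ts) AS order_date,
           SUM(amount)       AS revenue,
           COUNT(*)          AS orders
    FROM orders_v
    WHERE status = 'completed'
    GROUP BY to_date(order_ts)
""")

# Load: persist as a Delta table, the default table format on Databricks.
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_revenue")

In production, a job like this would typically be orchestrated by a scheduler such as Airflow, which the qualifications above call out. A minimal, equally hypothetical DAG wrapping the job might look like:

# Illustrative only: schedule the ETL job daily with Airflow.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_daily_revenue_etl():
    # Placeholder: in practice this would submit the Databricks job above.
    pass

with DAG(
    dag_id="daily_revenue",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_etl = PythonOperator(task_id="run_etl", python_callable=run_daily_revenue_etl)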
About Epsilon:
Epsilon is a global data, technology and services company that powers the marketing and advertising ecosystem. For decades, we have provided marketers from the world's leading brands the data, technology and services they need to engage consumers with 1 View, 1 Vision and 1 Voice. 1 View of their universe of potential buyers. 1 Vision for engaging each individual. And 1 Voice to harmonize engagement across paid, owned and earned channels. Epsilon's comprehensive portfolio of capabilities across our suite of digital media, messaging and loyalty solutions bridges the divide between marketing and advertising technology. We process 400+ billion consumer actions each day using advanced AI and hold many patents of proprietary technology, including real-time modeling languages and consumer privacy advancements. Thanks to the work of every employee, Epsilon has been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Epsilon is a global company with more than 9,000 employees around the world.

Posted 5 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies