
1885 Data Engineering Jobs - Page 32

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 7.0 years

40 - 45 Lacs

Chandigarh, Bengaluru

Work from Office


As the Data Engineer, you will play a pivotal role in shaping our data infrastructure and executing our strategy. You will ideate alongside engineering, data, and client teams to deploy data products with an innovative and meaningful impact for clients. You will design, build, and maintain scalable data pipelines and workflows on AWS. Additionally, your expertise in AI and machine learning will enhance our ability to deliver smarter, more predictive solutions.

Key Responsibilities:
- Collaborate with other engineers and customers to brainstorm and develop impactful data products tailored to our clients.
- Leverage AI and machine learning techniques to integrate intelligent features into our offerings.
- Develop and optimize end-to-end data pipelines on AWS.
- Follow best practices in software architecture and development.
- Implement effective cost management and performance optimization strategies.
- Develop and maintain systems using Python, SQL, PySpark, and Django for front-end development.
- Work directly with clients and end users to address their data needs.
- Utilize databases and tools including, but not limited to, Postgres, Redshift, Airflow, and MongoDB to support our data ecosystem.
- Leverage AI frameworks and libraries to integrate advanced analytics into our solutions.

Qualifications

Experience:
- Minimum of 3 years of experience in data engineering, software development, or related roles.
- Proven track record in designing and deploying AWS cloud infrastructure solutions.
- At least 2 years in data analysis and mining techniques to aid in descriptive and diagnostic insights.
- Extensive hands-on experience with Postgres, Redshift, Airflow, MongoDB, and real-time data workflows.

Technical Skills:
- Expertise in Python, SQL, and PySpark.
- Strong background in software architecture and scalable development practices.
- Experience with Tableau, Metabase, or similar visualization tools.
- Working knowledge of AI frameworks and libraries is a plus.

Leadership & Communication:
- Demonstrates ownership and accountability for delivery with a strong commitment to quality.
- Excellent communication skills with a history of effective client and end-user engagement.

Startup & Fintech Mindset:
- Adaptability and agility to thrive in a fast-paced, early-stage startup environment.
- Passion for fintech innovation and a strong desire to make a meaningful impact on the future of finance.
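To make the stack concrete, here is a minimal, hypothetical sketch of the kind of PySpark pipeline step this posting describes: reading raw data from S3, cleaning it, and writing partitioned Parquet. The bucket names and columns are assumptions, not part of the listing.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("client-data-pipeline").getOrCreate()

# Read raw CSV drops from S3 (bucket and columns are hypothetical).
raw = spark.read.option("header", "true").csv("s3a://example-raw-bucket/transactions/")

cleaned = (
    raw.dropDuplicates(["transaction_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .withColumn("ingest_date", F.current_date())
)

# Write curated, partitioned Parquet back to S3.
(cleaned.write.mode("overwrite")
        .partitionBy("ingest_date")
        .parquet("s3a://example-curated-bucket/transactions/"))
```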

Posted 2 weeks ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Mumbai, Maharashtra

Work from Office


About the Role: Grade Level (for internal use): 11

The Team: You will be an expert contributor on the Ratings organization's Data Services Product Engineering Team. This team, which has broad, expert knowledge of the Ratings organization's critical data domains, technology stacks, and architectural patterns, fosters the knowledge sharing and collaboration that result in a unified strategy. All Data Services team members provide leadership, innovation, timely delivery, and the ability to articulate business value. This is a unique opportunity to build and evolve S&P Ratings' next-gen analytics platform.

Responsibilities:
- Architect, design, and implement innovative software solutions to enhance S&P Ratings' cloud-based analytics platform.
- Mentor a team of engineers (as required), fostering a culture of trust, continuous growth, and collaborative problem-solving.
- Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals.
- Manage and improve existing software solutions, ensuring high performance and scalability.
- Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes.
- Produce comprehensive technical design documents and conduct technical walkthroughs.

Experience & Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent is required.
- Proficient with software development lifecycle (SDLC) methodologies such as Agile and test-driven development.
- 10+ years of experience, with 4+ years designing/developing enterprise products, modern tech stacks, and data platforms.
- 4+ years of hands-on experience contributing to application architecture and designs, proven software/enterprise integration design patterns, and full-stack knowledge, including modern distributed front-end and back-end technology stacks.
- 5+ years of full-stack development experience in modern web development technologies: Java/J2EE, UI frameworks such as Angular and React, SQL, Oracle, and NoSQL databases such as MongoDB.
- Experience designing transactional/data warehouse/data lake systems and data integrations with the big data ecosystem, leveraging AWS cloud technologies.
- Thorough understanding of distributed computing.
- Passionate, smart, and articulate developer.
- Quality-first mindset with a strong background in developing products for a global audience at scale.
- Excellent analytical thinking, interpersonal, oral, and written communication skills, with a strong ability to influence both IT and business partners.
- Superior knowledge of system architecture, object-oriented design, and design patterns.
- Good work ethic; a self-starter who is results-oriented.
- Experience with Delta Lake systems such as Databricks using AWS cloud technologies and PySpark is a plus.

Additional Preferred Qualifications:
- Experience working with AWS.
- Experience with the SAFe Agile framework.
- Bachelor's/PG degree in Computer Science, Information Systems, or equivalent.
- Hands-on experience contributing to application architecture and designs, and proven software/enterprise integration design principles.
- Ability to prioritize and manage work to critical project timelines in a fast-paced environment.
- Ability to train and mentor.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 22 Lacs

Gurugram

Work from Office


Data Engineer - SSIS - 5+ Years - Gurugram (Hybrid)

Are you a skilled Data Engineer with expertise in SSIS and 5+ years of experience? Do you have a passion for analytics and want to work in a hybrid setup in Gurugram? Our client is seeking a talented individual to join their team and contribute to their data engineering projects.

Location: Gurugram (Hybrid)

Your Future Employer: Our client is a leading organization in the analytics domain, known for fostering an inclusive and diverse work environment. They are committed to providing their employees with opportunities for growth and development.

Responsibilities:
- Design, develop, and maintain data pipelines using SSIS for efficient data processing.
- Collaborate with cross-functional teams to understand data requirements and provide effective data solutions.
- Optimize data pipelines for performance and scalability.
- Ensure data quality and integrity throughout the data engineering process.

Requirements:
1. 5+ years of experience in data engineering with a strong focus on SSIS.
2. Proficiency in data warehousing concepts and ETL processes.
3. Hands-on experience with SQL databases and data modeling.
4. Strong analytical and problem-solving skills.
5. Bachelor's degree in Computer Science, Engineering, or a related field.

What's in it for you: In this role, you will have the opportunity to work on challenging projects and enhance your expertise in data engineering. The organization offers a competitive compensation package and a supportive work environment where your contributions are valued.

Reach us: If you feel this opportunity is well aligned with your career progression plans, please feel free to reach me with your updated profile at rohit.kumar@crescendogroup.in

Disclaimer: Crescendo Global specializes in senior to C-level niche recruitment. We are passionate about empowering job seekers and employers with an engaging, memorable job search and leadership hiring experience. Crescendo Global does not discriminate on the basis of race, religion, color, origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Note: We receive a lot of applications on a daily basis, so it becomes difficult for us to get back to each candidate. Please assume that your profile has not been shortlisted if you don't hear back from us within 1 week. Your patience is highly appreciated.

Scammers can misuse Crescendo Global's name for fake job offers. We never ask for money, purchases, or system upgrades. Verify all opportunities at www.crescendo-global.com and report fraud immediately. Stay alert!

Profile keywords: Data Engineer, SSIS, Data Warehousing, ETL, SQL, Analytics

Posted 2 weeks ago

Apply

7.0 - 10.0 years

8 - 14 Lacs

Hyderabad

Hybrid


Responsibilities of the Candidate:
- Be responsible for the design and development of big data solutions. Partner with domain experts, product managers, analysts, and data scientists to develop Big Data pipelines in Hadoop.
- Be responsible for moving all legacy workloads to a cloud platform.
- Work with data scientists to build client pipelines using heterogeneous sources and provide engineering services for PySpark data science applications.
- Ensure automation through CI/CD across platforms, both in the cloud and on-premises.
- Define needs around maintainability, testability, performance, security, quality, and usability for the data platform.
- Drive implementation, consistent patterns, reusable components, and coding standards for data engineering processes.
- Convert SAS-based pipelines into languages like PySpark and Scala to execute on Hadoop and non-Hadoop ecosystems (a conversion sketch follows below).
- Tune big data applications on Hadoop and non-Hadoop platforms for optimal performance.
- Apply an in-depth understanding of how data analytics collectively integrate within the sub-function, and coordinate and contribute to the objectives of the entire function.
- Produce detailed analysis of issues where the best course of action is not evident from the information available, but actions must be recommended or taken.
- Assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules, and regulations, adhering to policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency.

Requirements:
- 6+ years of total IT experience.
- 3+ years of experience with Hadoop (Cloudera)/big data technologies.
- Knowledge of the Hadoop ecosystem and big data technologies; hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr).
- Experience in designing and developing data pipelines for data ingestion or transformation using Java, Scala, or Python.
- Experience with Spark programming (PySpark, Scala, or Java).
- Hands-on experience with Python/PySpark/Scala and basic libraries for machine learning is required.
- Proficiency in programming in Java or Python, with prior Apache Beam/Spark experience a plus.
- Hands-on experience with CI/CD, scheduling, and scripting.
- System-level understanding: data structures, algorithms, distributed storage and compute.
- A can-do attitude toward solving complex business problems, and good interpersonal and teamwork skills.
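As an illustration of the SAS-to-PySpark conversion work this role mentions, the hypothetical sketch below shows how a typical SAS summary step (for example, PROC MEANS with a CLASS variable) maps onto a grouped PySpark aggregation; the table and column names are made up.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sas-migration-example").getOrCreate()

# Hive table on the cluster standing in for a SAS dataset.
accounts = spark.table("warehouse.accounts")

# Equivalent of: PROC MEANS DATA=accounts N MEAN STD; CLASS region; VAR balance;
summary = (
    accounts.groupBy("region")
            .agg(F.count("*").alias("n"),
                 F.mean("balance").alias("mean_balance"),
                 F.stddev("balance").alias("std_balance"))
)

summary.write.mode("overwrite").saveAsTable("warehouse.account_summary")
```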

Posted 2 weeks ago

Apply

8.0 - 12.0 years

6 - 14 Lacs

Mumbai, Hyderabad, Pune

Work from Office


Job Description: 5+ years in data engineering, with at least 2 years on Azure Synapse. Strong SQL, Spark, and Data Lake integration experience. Familiarity with Azure Data Factory, Power BI, and DevOps pipelines. Experience in AMS or managed-services environments is a plus.

Detailed JD:
- Design, develop, and maintain data pipelines using Azure Synapse Analytics.
- Collaborate with the customer to ensure SLA adherence and incident resolution.
- Optimize Synapse SQL pools for performance and cost.
- Implement data security, access control, and compliance measures.
- Participate in calibration and transition phases with client stakeholders.

Posted 2 weeks ago

Apply

13.0 - 20.0 years

25 - 40 Lacs

Chennai, Bengaluru

Work from Office


Position Overview: We are looking for a highly experienced and versatile Solution Architect (Data) to lead the solution design and delivery of next-generation data solutions for our BFS clients. The ideal candidate will have a strong background in data architecture and engineering, deep domain expertise in financial services, and hands-on experience with cloud-native data platforms and modern data analytics tools. The role requires architecting solutions across Retail, Corporate, Wealth, and Capital Markets, as well as Payments, Lending, and Onboarding journeys. Data analytics experience and exposure to the data regulatory domain are a distinct advantage, as is hands-on experience enabling data solutions with AI and Gen AI.

Key Responsibilities:
- Design and implement end-to-end data solutions for BFS clients, covering data engineering and analytics and involving modern data stacks and concepts.
- Architect cloud-native data platforms using AWS, Azure, and GCP (certifications preferred).
- Build and maintain data models aligned with Open Banking, Open Finance, SCA, AISP, and PISP requirements.
- Enrich solution design by incorporating industry-standard data architectures using frameworks such as BIAN, and lead data standardization programs for banks.
- Enrich solution architecture by enabling the AI and Gen AI paradigm for data engineering, analytics, and data regulatory work.
- Deliver data solutions in domains like Core Banking, Payments, Lending, Customer Onboarding, Wealth, and Capital Markets.
- Collaborate with business and technology stakeholders to gather requirements and translate them into scalable data architectures.
- Lead solution design and, if needed, be hands-on in developing lab-class proof-of-concepts (POCs) showcasing data-driven capabilities.
- Lead and contribute to RFX responses for banking and financial services clients and regulatory bodies across the UK and EMEA regions.
- Provide architectural leadership in data initiatives related to regulatory compliance and risk analytics; familiarity and working experience with regulatory software and platforms such as SAS, NICE Actimize, and Wolters Kluwer is preferred.

Required Skills & Experience:
- 12-18 years of experience in IT with a focus on data solution architecture in the BFS domain.
- Strong delivery and development experience in the Retail, Corporate, Wealth, and Capital Markets banking domains.
- Deep understanding of data standards such as BIAN and experience implementing them in banking projects.
- Expertise in cloud platforms (AWS, Azure, GCP) and leveraging native services for data processing, storage, and analytics.
- Strong experience building data models and data solutions for Open Banking, Open Finance, and regulatory needs, including SCA, AISP, and PISP.
- Proficiency in data engineering pipelines and real-time/batch data processing.
- Experience designing enterprise data lakes and data warehouses, and implementing data mesh and data lineage frameworks.
- Hands-on experience developing rapid POCs and accelerators.

Primary Technical Skills:
- Cloud Platforms: AWS, Azure, GCP (certification preferred)
- Big Data Technologies: Hadoop, Spark, Databricks, Delta Lake
- Programming Languages: Python, Scala, SQL
- Data Engineering & Pipelines: Apache Airflow, Kafka, Glue, Data Factory
- Data Warehousing: Snowflake, Redshift, BigQuery, Synapse
- Visualization: Power BI, Tableau, Looker
- Data Governance: Data Lineage, Data Cataloging, Master Data Management
- Architecture Concepts: Data Mesh, Data Fabric, Event-driven architecture

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 - 0 Lacs

Hyderabad

Work from Office


Experience Required: 3+ years

Technical knowledge: AWS, Python, SQL, S3, EC2, Glue, Athena, Lambda, DynamoDB, Redshift, Step Functions, CloudFormation, CI/CD pipelines, GitHub, EMR, RDS, AWS Lake Formation, GitLab, Jenkins, and AWS CodePipeline.

Role Summary: As a Senior Data Engineer with over 3 years of expertise in Python, PySpark, and SQL, you will design, develop, and optimize complex data pipelines, support data modeling, and contribute to the architecture that supports big data processing and analytics on cutting-edge cloud solutions that drive business growth. You will lead the design and implementation of scalable, high-performance data solutions on AWS and mentor junior team members. This role demands a deep understanding of AWS services, big data tools, and complex architectures to support large-scale data processing and advanced analytics.

Key Responsibilities:
- Design and develop robust, scalable data pipelines using AWS services, Python, PySpark, and SQL that integrate seamlessly with the broader data and product ecosystem (a Glue job sketch follows below).
- Lead the migration of legacy data warehouses and data marts to AWS cloud-based data lake and data warehouse solutions.
- Optimize data processing and storage for performance and cost.
- Implement data security and compliance best practices, in collaboration with the IT security team.
- Build flexible and scalable systems to handle the growing demands of real-time analytics and big data processing.
- Work closely with data scientists and analysts to support their data needs and assist in building complex queries and data analysis pipelines.
- Collaborate with cross-functional teams to understand their data needs and translate them into technical requirements.
- Continuously evaluate new technologies and AWS services to enhance data capabilities and performance.
- Create and maintain comprehensive documentation of data pipelines, architectures, and workflows.
- Participate in code reviews and ensure that all solutions are aligned to pre-defined architectural specifications.
- Present findings to executive leadership and recommend data-driven strategies for business growth.
- Communicate effectively with different levels of management to gather use cases/requirements and provide designs that cater to those stakeholders.
- Handle clients in multiple industries at the same time, balancing their unique needs.
- Provide mentoring and guidance to junior data engineers and team members.

Requirements:
- 3+ years of experience in a data engineering role with a strong focus on AWS, Python, PySpark, Hive, and SQL.
- Proven experience in designing and delivering large-scale data warehousing and data processing solutions.
- Experience leading the design and implementation of complex, scalable data pipelines using AWS services such as S3, EC2, EMR, RDS, Redshift, Glue, Lambda, Athena, and AWS Lake Formation.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- Deep knowledge of big data technologies and ETL tools such as Apache Spark, PySpark, Hadoop, Kafka, and Spark Streaming.
- Experience implementing data architecture patterns, including event-driven pipelines, Lambda architectures, and data lakes.
- Experience incorporating modern tools like Databricks, Airflow, and Terraform for orchestration and infrastructure as code.
- Experience implementing CI/CD using GitLab, Jenkins, and AWS CodePipeline.
- Ability to ensure data security, governance, and compliance by leveraging tools such as IAM, KMS, and AWS CloudTrail.
- Experience mentoring junior engineers, fostering a culture of continuous learning and improvement.
- Excellent problem-solving and analytical skills, with a strategic mindset.
- Strong communication and leadership skills, with the ability to influence stakeholders at all levels.
- Ability to work independently as well as part of a team in a fast-paced environment.
- Advanced data visualization skills and the ability to present complex data in a clear and concise manner.

Preferred Skills:
- Experience with Databricks, Snowflake, and machine learning pipelines.
- Exposure to real-time data streaming technologies and architectures.
- Familiarity with containerization and serverless computing (Docker, Kubernetes, AWS Lambda).
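For context, a skeleton of an AWS Glue PySpark job of the sort this role involves might look like the following; the job arguments, catalog database, table, and S3 path are hypothetical, not taken from the listing.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve arguments and initialize the job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, convert to a Spark DataFrame, transform.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_events"
)
df = dyf.toDF().filter("event_type IS NOT NULL")

# Land curated output in the data lake.
df.write.mode("append").parquet("s3://example-lake/curated/events/")
job.commit()
```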

Posted 2 weeks ago

Apply

4.0 - 8.0 years

12 - 18 Lacs

Hyderabad, Chennai, Coimbatore

Hybrid


We are seeking a skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have experience in designing, developing, and maintaining scalable data pipelines and architectures using Hadoop, PySpark, ETL processes, and cloud technologies.

Responsibilities:
- Design, develop, and maintain data pipelines for processing large-scale datasets.
- Build efficient ETL workflows to transform and integrate data from multiple sources.
- Develop and optimize Hadoop and PySpark applications for data processing.
- Ensure data quality, governance, and security standards are met across systems.
- Implement and manage cloud-based data solutions (AWS, Azure, or GCP).
- Collaborate with data scientists and analysts to support business intelligence initiatives.
- Troubleshoot performance issues and optimize query executions in big data environments.
- Stay updated with industry trends and advancements in big data and cloud technologies.

Required Skills:
- Strong programming skills in Python, Scala, or Java.
- Hands-on experience with the Hadoop ecosystem (HDFS, Hive, Spark, etc.).
- Expertise in PySpark for distributed data processing.
- Proficiency in ETL tools and workflows (SSIS, Apache NiFi, or custom pipelines).
- Experience with cloud platforms (AWS, Azure, GCP) and their data-related services.
- Knowledge of SQL and NoSQL databases.
- Familiarity with data warehousing concepts and data modeling techniques.
- Strong analytical and problem-solving skills.

Interested candidates can reach us at +91 7305206696 / saranyadevib@talentien.com

Posted 2 weeks ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Hyderabad

Work from Office


Overview: The Senior Data Analyst serves as a subject-matter expert who can lead efforts to analyze data with the goal of delivering insights that will influence our products and customers. This position reports to the Data Analytics Manager and works closely with members of our product and marketing teams, data engineers, and members of our Customer Success organization supporting client outreach efforts. The chief functions of this role are finding and sharing data-driven insights that deliver value to less technical audiences, and instilling best practices for analytics in the rest of the team.

Responsibilities:
- Perform various data analysis functions to analyze data from a variety of sources, including external labor-market data and research and internal datasets from our platforms.
- Incorporate information from a variety of systems to produce comprehensive and compelling narratives for thought-leadership initiatives and customer engagements.
- Demonstrate critical thinking: identify the story in context using multiple datasets and present results. A strong proficiency in data storytelling will be critical to success in this role.
- Understand principles of quality data visualization and apply them in Tableau to create and maintain custom dashboards for consumption by other employees.
- Find and investigate data quality issues and their root causes, and recommend remedies to be implemented by the data scientists and engineers.
- Liaise with teams around our business to understand their problems, determine how our team can help, then use our database to produce the content they need.
- Identify data mapping and enrichment requirements. Familiarity with SQL, especially the logic behind different types of data joins and writing efficient queries, will be necessary.
- Consistently ensure that business is always conducted with integrity and that behavior aligns with iCIMS policies, procedures, and core competencies.

Additional Job Responsibilities:
- Produce and adapt data visualizations in response to business requests, for internal and external use.
- Show good judgement in prioritizing their own commitments and those of the larger team, while demonstrating initiative and appropriate urgency when needed.
- Mentor junior team members in best practices for analytics, data visualization, and data storytelling; exemplify these standards and guide teammates in following them.
- Think creatively to produce unique, actionable insights from complex datasets that deliver value to our business and to our customers.

Qualifications:
- 5-10 years of professional experience working in an analytics capacity.
- Excellent communication skills, especially with regard to data storytelling: finding insights from complex datasets and sharing those findings with key stakeholders.
- Strong data analytics and visualization skills.
- Expertise in Tableau Desktop (Tableau Server and Prep are preferable), producing clear and informative graphs and dashboards.
- Proficiency in SQL and either Python or R to extract and prepare data for analysis.
- Advanced knowledge of Excel (pivot tables, VLOOKUPs, IF statements).
- Familiarity with data guardrails to ensure compliance with applicable data governance regulations and privacy laws (e.g., GDPR).

Posted 2 weeks ago

Apply

3.0 - 8.0 years

0 - 0 Lacs

Chennai

Hybrid


You Lead the Way. We've Got Your Back.

With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities, and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you, with benefits, programs, and flexibility that support you personally and professionally. At American Express, you'll be recognized for your contributions, leadership, and impact: every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, in an environment where everyone is seen, heard, and feels like they belong.

As part of our diverse tech team, you can architect, code, and ship software that makes us an essential part of our customers' digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. Amex offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex.

How will you make an impact in this role?
- Build the NextGen data strategy: data virtualization, data lakes, and warehousing.
- Transform and improve performance of existing reporting and analytics use cases with more efficient, state-of-the-art data engineering solutions.
- Develop analytics to realize the advanced analytics vision and strategy in a scalable, iterative manner.
- Deliver software that provides superior user experiences, linking customer needs and business drivers together through innovative product engineering.
- Cultivate an environment of engineering excellence and continuous improvement, leading changes that drive efficiencies into existing engineering and delivery processes.
- Own accountability for all quality aspects and metrics of the product portfolio, including system performance, platform availability, operational efficiency, risk management, information security, data management, and cost effectiveness.
- Work with key stakeholders to drive software solutions that align to strategic roadmaps, prioritized initiatives, and strategic technology directions.
- Work with peers, staff engineers, and staff architects to assimilate new technology and delivery methods into scalable software solutions.

Minimum Qualifications:
- Bachelor's degree in Computer Science, Computer Science Engineering, or a related field required; advanced degree preferred.
- 3-12 years of hands-on experience implementing large data-warehousing projects, with strong knowledge of the latest NextGen BI and data strategy and BI tools.
- Proven experience in business intelligence, reporting on large datasets, data virtualization tools, big data, GCP, Java, and microservices.
- Strong systems integration architecture skills and a high degree of technical expertise across a number of technologies, with a proven track record of turning new technologies into business solutions.
- Proficiency in at least one programming language (Python/Java).
- Good understanding of data structures.
- GCP/cloud knowledge is an added advantage.
- Good knowledge and understanding of Power BI, Tableau, and Looker.
- Outstanding influencing and collaboration skills; ability to drive consensus and tangible outcomes, demonstrated by breaking down silos and fostering cross-communication.
- Experience managing in a fast-paced, complex, and dynamic global environment.

Preferred Qualifications:
- Bachelor's degree in Computer Science, Computer Science Engineering, or a related field required; advanced degree preferred.
- 5+ years of hands-on experience implementing large data-warehousing projects, with strong knowledge of the latest NextGen BI and data strategy and BI tools.
- Proven experience in business intelligence and reporting on large datasets: Oracle Business Intelligence (OBIEE), Tableau, MicroStrategy, data virtualization tools, Oracle PL/SQL, Informatica, and other ETL tools such as Talend and Java.
- Proficiency in at least one programming language (Python/Java).
- Good grasp of data structures and reasoning.
- GCP or other cloud knowledge is an added advantage.
- Good knowledge and understanding of Power BI, Tableau, and Looker.
- Strong systems integration architecture skills and a high degree of technical expertise across several technologies, with a proven track record of turning new technologies into business solutions.
- Outstanding influencing and collaboration skills; ability to drive consensus and tangible outcomes, demonstrated by breaking down silos and fostering cross-communication.

Compliance Language: We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. An offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 2 weeks ago

Apply

12.0 - 15.0 years

35 - 50 Lacs

Hyderabad

Work from Office


Skills: Java, Spark, Kafka
Experience: 10 to 16 years
Location: Hyderabad

As a Data Engineer, you will:
- Support the design and rollout of the data architecture and infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Identify data sources, design and implement data schemas/models, and integrate data to meet the requirements of business stakeholders.
- Play an active role in the end-to-end delivery of AI solutions, from ideation and feasibility assessment to data preparation and industrialization.
- Work with business, IT, and data stakeholders to support data-related technical issues and their data infrastructure needs, and to build the most flexible and scalable data platform.
- With a strong focus on DataOps, design, develop, and deploy scalable batch and/or real-time data pipelines (a streaming sketch follows below).
- Design, document, test, and deploy ETL/ELT processes.
- Find the right trade-offs between the performance, reliability, scalability, and cost of the data pipelines you implement.
- Monitor data processing efficiency and propose solutions for improvements.
- Have the discipline to create and maintain comprehensive project documentation.
- Build and share knowledge with colleagues and coach junior profiles.
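To illustrate the real-time side of this Spark/Kafka stack, here is a minimal Spark Structured Streaming sketch (shown in PySpark for brevity, though the posting leads with Java); the broker, topic, and path names are assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream-example").getOrCreate()

# Subscribe to a Kafka topic and decode the message payload.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "events")
         .load()
         .select(F.col("value").cast("string").alias("payload"))
)

# Continuously land events as Parquet; the checkpoint enables recovery.
query = (
    events.writeStream.format("parquet")
          .option("path", "/data/lake/events/")
          .option("checkpointLocation", "/data/checkpoints/events/")
          .start()
)
query.awaitTermination()
```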

Posted 2 weeks ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Bengaluru

Work from Office


Design and implement cloud-native data architectures on AWS, including data lakes, data warehouses, and streaming pipelines, using services like S3, Glue, Redshift, Athena, EMR, Lake Formation, and Kinesis. Develop and orchestrate ETL/ELT pipelines.

Required Candidate Profile: Participate in pre-sales and consulting activities, such as engaging with clients to gather requirements and propose AWS-based data engineering solutions, and supporting RFPs/RFIs and technical proposals.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

8 - 14 Lacs

Chennai

Work from Office


Must-have skills:
- Minimum of 5-7 years of experience in software development, with a focus on Java and infrastructure tools.
- Minimum 6+ years of experience as a Data Engineer.
- Good experience handling Big Data: Spark, Hive SQL, BigQuery, SQL.
- Experience working on cloud platforms; GCP would be an added advantage.
- Good understanding of the Hadoop-based ecosystem, including Hive SQL and HDFS, is essential.
- Very good professional knowledge of PySpark or Scala.

Responsibilities:
- Collaborate with cross-functional teams such as data scientists, product partners, and partner-team developers to identify Big Data and query (Spark, Hive SQL, BigQuery, SQL) tuning opportunities that can be solved using machine learning and generative AI.
- Write clean, high-performance, high-quality, maintainable code.
- Design and develop Big Data engineering solutions and applications, ensuring scalability, efficiency, and maintainability.

Requirements:
- A Bachelor's or Master's degree in Computer Science or a related field.
- Proven experience working as a Big Data & MLOps Engineer, with a focus on Spark, Scala Spark or PySpark, Spark SQL, BigQuery, Python, and Google Cloud.
- Deep understanding of and experience in tuning Dataproc, BigQuery, and Spark applications.
- Solid knowledge of software engineering best practices, including version control systems (e.g., Git), code reviews, and testing methodologies.
- Strong communication skills to effectively collaborate with and present findings to both technical and non-technical stakeholders.
- Proven ability to adapt and learn new technologies and frameworks quickly.
- A proactive mindset with a passion for continuous learning and research.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

8 - 14 Lacs

Chennai

Work from Office


Must-have skills:
- Bachelor's/Master's in Engineering, Computer Science, or equivalent experience.
- 5-6 years of experience in the IT industry; experience in the data space is preferred.
- Working experience in GCP BigQuery.
- Good knowledge of Teradata or Oracle.
- Experience in data modelling.
- Advanced scripting experience: Python, Shell, etc.
- Strong analytical skills, including the ability to define problems, collect data, establish facts, and draw valid conclusions.
- Knowledge of scheduling tools (preferably Airflow, UC4) is a plus.
- Working knowledge of any ETL tool (e.g., Informatica) is a plus.
- Excellent written and oral communication skills.
- Familiarity with data movement techniques and best practices to handle large volumes of data.
- Strong communication skills and willingness to take initiative to contribute beyond core responsibilities.

Responsibilities: In this role, the individual will be part of the Credit data engineering team within the Credit Platform organization and have the following responsibilities:
- Design and implement an integrated credit data platform: extremely high-volume, fault-tolerant, scalable backend systems that process and manage petabytes of customer data.
- Adopt a long-term, strategic thought process during the entire project life cycle.
- Participate and collaborate with cross-functional teams in the organization to understand the business requirements and to deliver solutions that can scale.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office


Position: Senior Data Engineer - Airflow, PL/SQL
Experience: 5+ years
Location: Bangalore/Hyderabad/Pune

Seeking a Senior Data Engineer with strong expertise in Apache Airflow and Oracle PL/SQL, along with working experience in Snowflake and Agile methodologies. The ideal candidate will also take up Scrum Master responsibilities and lead a data engineering scrum team to deliver robust, scalable data solutions.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Apache Airflow (a DAG sketch follows below).
- Write and optimize complex PL/SQL queries, procedures, and packages on Oracle databases.
- Collaborate with cross-functional teams to design efficient data models and integration workflows.
- Work with Snowflake for data warehousing and analytics use cases.
- Own the delivery of sprint goals, backlog grooming, and facilitation of agile ceremonies as the Scrum Master.
- Monitor pipeline health and troubleshoot production data issues proactively.
- Ensure code quality, documentation, and best practices across the team.
- Mentor junior data engineers and promote a culture of continuous improvement.

Required Skills and Qualifications:
- 5+ years of experience as a Data Engineer in enterprise environments.
- Strong expertise in Apache Airflow for orchestrating workflows.
- Expert in Oracle PL/SQL: stored procedures, performance tuning, debugging.
- Hands-on experience with Snowflake: data modeling, SQL, optimization.
- Working knowledge of version control (Git) and CI/CD practices.
- Prior experience or certification as a Scrum Master is highly desirable.
- Strong analytical and problem-solving skills with attention to detail.
- Excellent communication and leadership skills.
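A hypothetical sketch of the Airflow-plus-Oracle-PL/SQL orchestration this role centers on: a daily DAG that calls a stored procedure. It assumes Airflow 2.x and the python-oracledb driver; the connection details and procedure name are made up.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def refresh_daily_facts():
    # In practice, pull credentials from an Airflow connection or secrets
    # backend rather than hard-coding them; these values are placeholders.
    import oracledb
    with oracledb.connect(user="etl_user", password="change-me",
                          dsn="db-host/orclpdb1") as conn:
        with conn.cursor() as cur:
            cur.callproc("etl_pkg.refresh_daily_facts")  # hypothetical PL/SQL proc
        conn.commit()

with DAG(
    dag_id="daily_facts_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="refresh_daily_facts",
                   python_callable=refresh_daily_facts)
```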

Posted 2 weeks ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Bengaluru

Hybrid


We are looking for a highly skilled API & Pixel Tracking Integration Engineer to lead the development and deployment of server-side tracking and attribution solutions across multiple platforms. The ideal candidate brings deep expertise in CAPI integrations (Meta, Google, and other platforms), secure data handling using cryptographic techniques, and experience working within privacy-first environments like Azure Clean Rooms. This role requires strong hands-on experience in C# development, Azure cloud services, OCI (Oracle Cloud Infrastructure), and marketing technology stacks, including Adobe tag management and pixel management. You will work closely with engineering, analytics, and marketing teams to deliver scalable, compliant, and secure data tracking solutions that drive business insights and performance.

Key Responsibilities:
- Design, implement, and maintain CAPI integrations across Meta, Google, and all major platforms, ensuring real-time and accurate server-side event tracking.
- Utilize Fabric and OCI environments as needed for data integration and marketing intelligence workflows.
- Develop and manage custom tracking solutions leveraging Azure Clean Rooms, ensuring user NFAs are respected and privacy-compliant logic is implemented.
- Implement cryptographic hashing (e.g., SHA-256); a sketch follows below.
- Use Azure Data Lake Gen1 & Gen2 (ADLS), Cosmos DB, and Azure Functions to build and host scalable backend systems.
- Integrate with Azure Key Vault to securely manage secrets and sensitive credentials.
- Design and execute data pipelines in Azure Data Factory (ADF) for processing and transforming tracking data.
- Lead pixel and tag management initiatives using Adobe Tag Manager, including pixel governance and QA across properties.
- Collaborate with security teams to ensure all data sharing and processing complies with Azure's data security standards and enterprise privacy frameworks.
- Monitor, troubleshoot, and optimize existing integrations using logs, diagnostics, and analytics tools.

Required Skills:
- Strong hands-on experience with Fabric and building scalable APIs.
- Experience implementing Meta CAPI, Google Enhanced Conversions, and other platform-specific server-side tracking APIs.
- Knowledge of Azure Clean Rooms, with experience developing custom logic and code for clean data collaborations.
- Proficiency with Azure cloud technologies, especially Cosmos DB, Azure Functions, ADF, Key Vault, ADLS, and Azure security best practices.
- Familiarity with OCI for hybrid-cloud integration scenarios.
- Understanding of cryptography and secure data handling (e.g., hashing email addresses with SHA-256).
- Experience with Adobe tag management, specifically pixel governance and lifecycle.
- Proven ability to collaborate across functions, especially with marketing and analytics teams.

Soft Skills:
- Strong communication skills to explain technical concepts to non-technical stakeholders.
- Proven ability to collaborate across teams, especially with marketing, product, and data analytics.
- Adaptable and proactive in learning and applying evolving technologies and regulatory changes.
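A minimal sketch of the hashing step named above (shown in Python for brevity, though the role lists C#): Meta's Conversions API expects user identifiers such as email addresses to be normalized, trimmed and lowercased, and then SHA-256 hashed before being sent server-side.

```python
import hashlib

def hash_email(email: str) -> str:
    # Normalize before hashing: strip surrounding whitespace, lowercase.
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# "  User@Example.com " and "user@example.com" produce the same digest,
# so the same person matches across systems.
print(hash_email("  User@Example.com "))
```

Phone numbers and other identifiers follow the same pattern: normalize to an agreed format first, then hash, so that both sides of the match compute identical digests.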

Posted 2 weeks ago

Apply

5.0 - 9.0 years

12 - 18 Lacs

Hyderabad

Work from Office


Job Description

Position: Sr. Data Engineer
Experience: Minimum 7 years
Location: Hyderabad

What You'll Do:
- Design and build efficient, reusable, and reliable data architecture leveraging technologies like Apache Flink, Spark, Beam, and Redis to support large-scale, real-time, and batch data processing.
- Participate in architecture and system design discussions, ensuring alignment with business objectives and technology strategy, and advocating for best practices in distributed data systems.
- Independently perform hands-on development and coding of data applications and pipelines using Java, Scala, and Python, including unit testing and code reviews.
- Monitor key product and data pipeline metrics, identify root causes of anomalies, and provide actionable insights to senior management on data and business health.
- Maintain and optimize existing datalake infrastructure, lead migrations to lakehouse architectures, and automate deployment of data pipelines and machine learning feature engineering requests.
- Acquire and integrate data from primary and secondary sources, maintaining robust databases and data systems to support operational and exploratory analytics.
- Engage with internal stakeholders (business teams, product owners, data scientists) to define priorities, refine processes, and act as a point of contact for resolving stakeholder issues.
- Drive continuous improvement by establishing and promoting technical standards, enhancing productivity, monitoring, and tooling, and adopting industry best practices.

What You'll Bring:
- Bachelor's degree or higher in Computer Science, Engineering, or a quantitative discipline, or equivalent professional experience demonstrating exceptional ability.
- 7+ years of work experience in data engineering and platform engineering, with a proven track record in designing and building scalable data architectures.
- Extensive hands-on experience with modern data stacks, including datalake, lakehouse, streaming data (Flink, Spark), and AWS or equivalent cloud platforms.
- Stack: AWS cloud; Apache Flink/Spark; Redis; Databricks as the database platform.
- Proficiency in programming languages such as Java and Scala for data engineering and pipeline development; Python is good to have.
- Expertise in distributed data processing and caching technologies, including Apache Flink, Spark, and Redis.
- Experience with workflow orchestration, automation, and DevOps tools (Kubernetes, Git, Terraform, CI/CD).
- Ability to perform under pressure, managing competing demands and tight deadlines while maintaining high-quality deliverables.
- Strong passion and curiosity for data, with a commitment to data-driven decision making and continuous learning.
- Exceptional attention to detail and professionalism in report and dashboard creation.
- Excellent team player, able to collaborate across diverse functional groups and communicate complex technical concepts clearly.
- Outstanding verbal and written communication skills to effectively manage and articulate the health and integrity of data and systems to stakeholders.

Please feel free to contact us: 9440806850
Email: careers@jayamsolutions.com

Posted 2 weeks ago

Apply

6.0 - 9.0 years

30 - 45 Lacs

Bengaluru

Work from Office


Responsibilities:
- Contribute to and build an internal product library focused on solving business problems related to prediction and recommendation.
- Research unfamiliar methodologies and techniques to fine-tune existing models in the product suite, and recommend better solutions and/or technologies.
- Improve product features to include newer machine learning algorithms for use cases such as product recommendation, real-time predictions, fraud detection, and offer personalization.
- Collaborate with client teams to onboard data, build models, and score predictions.
- Participate in building automations and standalone applications around machine learning algorithms to enable a one-click solution for getting predictions and recommendations.
- Analyze large datasets, perform data wrangling operations, apply statistical treatments to filter and fine-tune input data, engineer new features, and ultimately aid the process of building machine learning models.
- Run test cases to tune existing models for performance, check criteria, and define thresholds for success by scaling the input data to multifold.
- Demonstrate a basic understanding of machine learning concepts such as regression, classification, matrix factorization, and k-fold validation, and of algorithms such as decision trees, random forests, and k-means clustering (an illustrative sketch follows below).
- Demonstrate working knowledge of, and contribute to, building models using deep learning techniques, ensuring robust, scalable, and high-performance solutions.

Minimum Qualifications:
- Education: A Master's or PhD in a quantitative discipline (Statistics, Economics, Mathematics, Computer Science) is highly preferred.
- Deep Learning Mastery: Extensive experience with deep learning frameworks (TensorFlow, PyTorch, or Keras) and advanced deep learning projects across various domains, with a focus on multimodal data applications.
- Generative AI Expertise: Proven experience with generative AI models and techniques, such as RAG, VAEs, and Transformers, and their applications at scale in content creation or data augmentation.
- Programming and Big Data: Expert-level proficiency in Python and big data/cloud technologies (Databricks and Spark), with a minimum of 4-5 years of experience.
- Recommender Systems and Real-time Predictions: Expertise in developing sophisticated recommender systems, including the application of real-time prediction frameworks.
- Machine Learning Algorithms: In-depth experience with complex algorithms such as logistic regression, random forest, XGBoost, advanced neural networks, and ensemble methods; also experienced with KNN, SVM, linear regression, lasso regression, and k-means.

Desirable Qualifications:
- Generative AI Tools Knowledge: Proficiency with tools and platforms for generative AI (such as OpenAI, Hugging Face Transformers).
- Databricks and Unity Catalog: Experience leveraging Databricks and Unity Catalog for robust data management, model deployment, and tracking.
- Working experience with CI/CD tools such as Git and Bitbucket.
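As a small illustration of two of the concepts listed, k-fold validation and random forests, here is a scikit-learn sketch on synthetic data; it is an example of the technique, not code from the product suite.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for real client data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)

# 5-fold cross-validation: train on 4 folds, score on the held-out fold,
# rotating so every fold is scored once.
scores = cross_val_score(model, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")
```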

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Chennai, Bengaluru

Work from Office


Job requisition ID: JR1027452

Overall Responsibilities:
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline (a validation sketch follows below).
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
- Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.

Category-wise Technical Skills:
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills in Linux.

Experience:
- 5-12 years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
- Proven track record of implementing data engineering best practices.
- Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform.

Day-to-Day Activities:
- Design, develop, and maintain ETL pipelines using PySpark on CDP.
- Implement and manage data ingestion processes from various sources.
- Process, cleanse, and transform large datasets using PySpark.
- Conduct performance tuning and optimization of ETL processes.
- Implement data quality checks and validation routines.
- Automate data workflows using orchestration tools.
- Monitor pipeline performance and troubleshoot issues.
- Collaborate with team members to understand data requirements.
- Maintain documentation of data engineering processes and configurations.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Relevant certifications in PySpark and Cloudera technologies are a plus.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication abilities.
- Ability to work independently and collaboratively in a team environment.
- Attention to detail and commitment to data quality.
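A hedged sketch of the kind of data-quality validation routine the responsibilities describe, in PySpark; the path, columns, and failure threshold are assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("/data/curated/orders/")

# Basic checks: missing keys and duplicate keys.
total = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()
dupes = total - df.dropDuplicates(["order_id"]).count()

# Fail the pipeline run if more than 1% of rows are bad.
if total == 0 or (null_ids + dupes) / total > 0.01:
    raise ValueError(f"DQ check failed: {null_ids} null ids, {dupes} duplicates")
```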

Posted 2 weeks ago

Apply

4.0 - 9.0 years

7 - 17 Lacs

Hyderabad

Work from Office


About this role: Wells Fargo is seeking a Senior Analytics Consultant with a proven track record of success, preferably in the banking industry.

In this role, you will:
- Consult on, review, and research moderately complex business, operational, and technical challenges that require an in-depth evaluation of variable data factors.
- Perform moderately complex data analysis to support and drive strategic initiatives and business needs.
- Develop a deep understanding of technical systems and business processes to extract data-driven insights while identifying opportunities for engineering enhancements.
- Lead or participate in large cross-group projects.
- Mentor less experienced staff.
- Collaborate and consult with peers, colleagues, external contractors, and mid-level managers to resolve issues and achieve goals.
- Leverage a solid understanding of compliance and risk management requirements for the supported area.

Required Qualifications:
- 4+ years of analytics experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.

Desired Qualifications:
- Strong analytical skills, proficiency in data manipulation tools like SQL, and the ability to translate complex data into actionable insights to inform strategic business decisions.
- Ability to collaborate closely with various stakeholders to develop and implement data strategies across the organization.
- Certifications in data engineering are good to have.
- Domain understanding of home lending and its complete product lifecycle (ops, sales, servicing, risk) will be an added advantage.
- Strong understanding of business drivers and industry trends of lending products, and home lending in particular.
- Detail-oriented, results-driven, and able to navigate a quickly changing, high-demand environment while balancing multiple priorities.
- Expected to learn the business aspects quickly, multitask, and prioritize between projects.
- Dedicated, enthusiastic, driven, and performance-oriented; possesses a strong work ethic and is a good team player.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

12 - 20 Lacs

Bengaluru

Work from Office


Experience: 5-8 years
Location: Bangalore
Mode: C2H (contract-to-hire)

- Hands-on data engineering experience.
- Hands-on experience with Python programming.
- Hands-on experience with AWS and EKS.
- Working knowledge of Unix, databases, and SQL.
- Working knowledge of Databricks.
- Working knowledge of Airflow and dbt.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

11 - 15 Lacs

Gurugram

Work from Office


Position Summary: This is the requisition for an employee-referral campaign, and the JD is generic. We are looking for associates with 5+ years of experience delivering solutions around data engineering, big data analytics and data lakes, MDM, BI, and data visualization; experienced in integrating and standardizing structured and unstructured data to enable faster insights using cloud technology, and in enabling data-driven insights across the enterprise.

Job Responsibilities:
- Design, implement, and deliver complex Data Warehousing/Data Lake, Cloud Data Management, and Data Integration project assignments.
- Technical design and development: expertise in any of the following skills. An ETL tool (Informatica, Talend, Matillion, DataStage) and hosting technologies like the AWS stack (Redshift, EC2) are mandatory. A BI tool among Tableau, Qlik, Power BI, and MSTR. Informatica MDM and Customer Data Management. Expert knowledge of SQL, with the capability to performance-tune complex SQL queries in traditional and distributed RDBMS systems, is a must. Experience across Python, PySpark, and Unix/Linux shell scripting.
- Project management is a must-have. Should be able to create simple to complex project plans in Microsoft Project and think in advance about potential risks and mitigation plans.
- Task management: should be able to onboard the team on the project plan and delegate tasks to accomplish milestones as planned. Should be comfortable discussing and prioritizing work items with team members in an onshore-offshore model.
- Client relationship: manage client communication and client expectations independently or with the support of the reporting manager. Should be able to deliver results back to the client as per plan. Should have excellent communication skills.

Education: Bachelor of Technology, or a Master's equivalent in Engineering.

Work Experience: Overall, 5-7 years of relevant experience in data warehousing and data management projects, with some experience in the pharma domain.

We are hiring for the following roles across data management tech stacks:
- ETL tools among Informatica, IICS/Snowflake, Python, Matillion, and other cloud ETL tools.
- BI tools among Power BI and Tableau.
- MDM: Informatica/Reltio, Customer Data Management.
- Azure cloud developer using Data Factory and Databricks.
- Data modeler: modelling of data, understanding source data, and creating data models for landing and integration.
- Python/PySpark: Spark/PySpark design, development, and deployment.

Posted 2 weeks ago

Apply

8.0 - 10.0 years

10 - 20 Lacs

Kolkata, Hyderabad, Pune

Work from Office

Naukri logo

Must have: Azure Data Factory (mandatory), Azure Databricks, PySpark, Python, and advanced SQL in the Azure ecosystem.
1) Advanced SQL skills
2) Data analysis
3) Data models
4) Python (desired)
5) Automation
Experience required: 8 to 10 years.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Naukri logo

ABOUT THE ROLE
Role Description:
We are seeking a seasoned Engineering Manager (Data Engineering) to lead the end-to-end management of enterprise data assets and operational data workflows. This role is critical in ensuring the availability, quality, consistency, and timeliness of data across platforms and functions, supporting analytics, reporting, compliance, and digital transformation initiatives. You will be responsible for day-to-day data operations, manage a team of data professionals, and drive process excellence in data intake, transformation, validation, and delivery. You will work closely with cross-functional teams including data engineering, analytics, IT, governance, and business stakeholders to align operational data capabilities with enterprise needs.

Roles & Responsibilities:
Lead and manage the enterprise data operations team, responsible for data ingestion, processing, validation, quality control, and publishing to various downstream systems.
Define and implement standard operating procedures for data lifecycle management, ensuring accuracy, completeness, and integrity of critical data assets.
Oversee and continuously improve daily operational workflows, including scheduling, monitoring, and troubleshooting data jobs across cloud and on-premise environments.
Establish and track key data operations metrics (SLAs, throughput, latency, data quality, incident resolution) and drive continuous improvements.
Partner with data engineering and platform teams to optimize pipelines, support new data integrations, and ensure scalability and resilience of operational data flows.
Collaborate with data governance, compliance, and security teams to maintain regulatory compliance, data privacy, and access controls.
Serve as the primary escalation point for data incidents and outages, ensuring rapid response and root cause analysis.
Build strong relationships with business and analytics teams to understand data consumption patterns, prioritize operational needs, and align with business objectives.
Drive adoption of best practices for documentation, metadata, lineage, and change management across data operations processes.
Mentor and develop a high-performing team of data operations analysts and leads.

Functional Skills:
Must-Have Skills:
Experience managing a team of data engineers at biotech/pharma domain companies.
Experience in designing and maintaining data pipelines and analytics solutions that extract, transform, and load data from multiple source systems.
Demonstrated hands-on experience with cloud platforms (AWS) and the ability to architect cost-effective and scalable data solutions.
Experience managing data workflows in cloud environments such as AWS, Azure, or GCP.
Strong problem-solving skills with the ability to analyze complex data flow issues and implement sustainable solutions.
Working knowledge of SQL, Python, or scripting languages for process monitoring and automation.
Experience collaborating with data engineering, analytics, IT operations, and business teams in a matrixed organization.
Familiarity with data governance, metadata management, access control, and regulatory requirements (e.g., GDPR, HIPAA, SOX).
Excellent leadership, communication, and stakeholder engagement skills.
Well-versed in full-stack development, DataOps automation, logging frameworks, and pipeline orchestration tools.
Strong analytical and problem-solving skills to address complex data challenges.
Effective communication and interpersonal skills to collaborate with cross-functional teams.

Good-to-Have Skills:
Data Engineering Management experience in Biotech/Life Sciences/Pharma.
Experience using graph databases such as Stardog, MarkLogic, Neo4j, or AllegroGraph.

Education and Professional Certifications
Doctorate degree with 3-5+ years of experience in Computer Science, IT, or a related field; OR
Master's degree with 6-8+ years of experience in Computer Science, IT, or a related field; OR
Bachelor's degree with 10-12+ years of experience in Computer Science, IT, or a related field.
AWS Certified Data Engineer preferred.
Databricks certification preferred.
Scaled Agile (SAFe) certification preferred.

Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Strong presentation and public speaking skills.
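As a feel for the SLA tracking and scripting-for-monitoring this role describes, here is a toy sketch that flags pipeline runs breaching a latency threshold. Job names, thresholds, and run durations are invented; in practice they would come from a scheduler's metadata store.

```python
# A toy SLA-breach check; job names, limits, and durations are hypothetical.
from datetime import timedelta

SLA = {
    "ingest_claims": timedelta(hours=2),
    "publish_marts": timedelta(hours=1),
}

# In practice these durations would be queried from the scheduler's metadata DB.
runs = [
    ("ingest_claims", timedelta(hours=3)),      # breach
    ("publish_marts", timedelta(minutes=40)),   # within SLA
]

for job, duration in runs:
    if duration > SLA[job]:
        print(f"SLA BREACH: {job} ran {duration}, limit {SLA[job]}")
```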

Posted 2 weeks ago

Apply

5.0 - 8.0 years

22 - 30 Lacs

Noida, Hyderabad, Bengaluru

Hybrid

Naukri logo

Role: Data Engineer
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid; 2 days per week in office is mandatory)
Notice Period: Immediate to 15 days (only immediate joiners will be considered)
Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks. We are not considering candidates who have experience only in PySpark and not in Python.

Job Title: SSE - Kafka, Python, and Azure Databricks (Healthcare Data Project)

Role Overview:
We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities:
Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
Architect scalable data streaming and processing solutions to support healthcare data workflows.
Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
Stay updated with the latest cloud technologies, big data frameworks, and industry trends.

Required Skills & Qualifications:
4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
Experience with Azure Databricks (or willingness to learn and adopt it quickly).
Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus).
Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
Excellent communication and stakeholder management skills.

Email: Sam@hiresquad.in
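For candidates gauging the streaming side of this role, here is a minimal sketch of a Python Kafka consumer. The broker address and topic name are assumptions, and note this is a plain consumer loop via the kafka-python client, not the Java Kafka Streams DSL the posting also mentions.

```python
# A minimal Kafka consumer sketch; broker, topic, and fields are hypothetical.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "hl7.events",                               # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="healthcare-pipeline",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Downstream, this record would be validated and landed in Databricks;
    # here we just print two non-identifying fields.
    print(event.get("event_type"), event.get("timestamp"))
```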

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies