
8820 Hadoop Jobs - Page 46

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio.
Your Role and Responsibilities
As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems.
Preferred Education: Master's Degree
Required Technical and Professional Expertise
Strong proficiency in Java, the Spring Framework, Spring Boot, and RESTful APIs, with an excellent understanding of OOP and design patterns. Strong knowledge of ORM tools such as Hibernate or JPA and of Java-based microservices frameworks, with hands-on experience in Spring Boot microservices. Primary skills: Core Java, Spring Boot, Java/J2EE, microservices, the Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.), and Spark; Python is good to have. Strong knowledge of microservice logging, monitoring, debugging, and testing; in-depth knowledge of relational databases (e.g., MySQL). Experience with container platforms such as Docker and Kubernetes and with messaging platforms such as Kafka or IBM MQ; good understanding of test-driven development. Familiarity with Ant, Maven, or other build-automation tools for Java, Spring Boot, APIs, microservices, and security.
Preferred Technical and Professional Experience
Experience in concurrent design and multi-threading.

Posted 2 weeks ago

Apply

6.0 - 7.0 years

15 - 17 Lacs

India

On-site

About The Opportunity
This role is within the fast-paced enterprise technology and data engineering sector, delivering high-impact solutions in cloud computing, big data, and advanced analytics. We design, build, and optimize robust data platforms powering AI, BI, and digital products for leading Fortune 500 clients across industries such as finance, retail, and healthcare. As a Senior Data Engineer, you will play a key role in shaping scalable, production-grade data solutions with modern cloud and data technologies.
Role & Responsibilities
Architect and Develop Data Pipelines: Design and implement end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark, and cloud object storage.
Data Warehouse & Data Mart Design: Create scalable data warehouses/marts that empower self-service analytics and machine learning workloads.
Database Modeling & Optimization: Translate logical models into efficient physical schemas, ensuring optimal partitioning and performance management.
ETL/ELT Workflow Automation: Build, automate, and monitor robust data ingestion and transformation processes with best practices in reliability and observability.
Performance Tuning: Optimize Spark jobs and SQL queries through careful tuning of configurations, indexing strategies, and resource management.
Mentorship and Continuous Improvement: Provide production support, mentor team members, and champion best practices in data engineering and DevOps methodology.
Skills & Qualifications
Must-Have
6-7 years of hands-on experience building production-grade data platforms, including at least 3 years with Apache Spark/Databricks.
Expert proficiency in PySpark, Python, and advanced SQL with a record of performance-tuning distributed jobs.
Proven expertise in data modeling, data warehouse/mart design, and managing ETL/ELT pipelines using tools like Airflow or dbt.
Hands-on experience with major cloud platforms such as AWS or Azure, and familiarity with modern lakehouse/data-lake patterns.
Strong analytical, problem-solving, and mentoring skills with a DevOps mindset and commitment to code quality.
Preferred
Experience with AWS analytics services (Redshift, Glue, S3) or the broader Hadoop ecosystem.
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Exposure to streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
Familiarity with ML feature stores, MLOps workflows, or data governance frameworks.
Relevant certifications (Databricks, AWS, Azure) or active contributions to open-source projects.
Location: India | Employment Type: Full-time
Skills: agile methodologies, team leadership, performance tuning, SQL, ELT, Airflow, AWS, data modeling, Apache Spark, PySpark, data, Hadoop, Databricks, Python, dbt, big data technologies, ETL, Azure
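The pipeline pattern named in this posting (ingestion → transformation → consumption on Spark/Databricks) can be sketched in a few lines of PySpark. This is a minimal illustrative sketch only; the bucket paths, column names, and aggregation are hypothetical and not taken from the posting.

```python
# Minimal PySpark batch pipeline: ingest raw CSV, transform, write curated Parquet.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingestion: read raw files from cloud object storage (or local disk).
raw = spark.read.option("header", True).csv("s3a://raw-bucket/orders/")

# Transformation: type casting, filtering, and a simple daily aggregate.
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)
daily_revenue = orders.groupBy("order_date").agg(F.sum("amount").alias("revenue"))

# Consumption: write a partitioned Parquet table for analytics/BI tools.
daily_revenue.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3a://curated-bucket/daily_revenue/")

spark.stop()
```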

Posted 2 weeks ago

Apply

7.0 years

15 - 17 Lacs

India

Remote

Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.
About The Company
A fast-growing enterprise technology consultancy operating at the intersection of cloud computing, big-data engineering, and advanced analytics. The team builds high-throughput, real-time data platforms that power AI, BI, and digital products for Fortune 500 clients across finance, retail, and healthcare. By combining Databricks Lakehouse architecture with modern DevOps practices, they unlock insight at petabyte scale while meeting stringent security and performance SLAs.
Role & Responsibilities
Architect end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark, and cloud object storage.
Design scalable data warehouses/marts that enable self-service analytics and ML workloads.
Translate logical data models into physical schemas; own database design, partitioning, and lifecycle management for cost-efficient performance.
Implement, automate, and monitor ETL/ELT workflows, ensuring reliability, observability, and robust error handling.
Tune Spark jobs and SQL queries, optimizing cluster configurations and indexing strategies to achieve sub-second response times.
Provide production support and continuous improvement for existing data assets, championing best practices and mentoring peers.
Skills & Qualifications
Must-Have
6–7 years building production-grade data platforms, including 3+ years of hands-on Apache Spark/Databricks experience.
Expert proficiency in PySpark, Python, and advanced SQL, with a track record of performance-tuning distributed jobs.
Demonstrated ability to model data warehouses/marts and orchestrate ETL/ELT pipelines with tools such as Airflow or dbt.
Hands-on with at least one major cloud platform (AWS or Azure) and modern lakehouse/data-lake patterns.
Strong problem-solving skills, DevOps mindset, and commitment to code quality; comfortable mentoring fellow engineers.
Preferred
Deep familiarity with the AWS analytics stack (Redshift, Glue, S3) or the broader Hadoop ecosystem.
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience building streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
Exposure to ML feature stores, MLOps workflows, and data-governance/compliance frameworks.
Relevant professional certifications (Databricks, AWS, Azure) or notable open-source contributions.
Benefits & Culture Highlights
Remote-first and flexible hours with 25+ PTO days and comprehensive health cover.
Annual training budget and certification sponsorship (Databricks, AWS, Azure) to fuel continuous learning.
Inclusive, impact-focused culture where engineers shape the technical roadmap and mentor a vibrant data community.
Skills: data modeling, big data technologies, team leadership, AWS, data, SQL, agile methodologies, performance tuning, ELT, Airflow, Apache Spark, PySpark, Hadoop, Databricks, Python, dbt, ETL, Azure
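Since this role calls for orchestrating ETL/ELT pipelines with tools such as Airflow, a minimal DAG sketch may help illustrate what that looks like in practice. The dag_id, schedule, and task callables below are hypothetical placeholders, not details from the posting.

```python
# A minimal Airflow DAG sketch for orchestrating a daily ETL run.
# The dag_id, task names, and callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from the source system")


def transform():
    print("clean and aggregate the extracted data")


def load():
    print("write curated data to the warehouse")


with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",          # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```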

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Our vision is to transform how the world uses information to enrich life for all. Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate, and advance faster than ever.
Responsibilities include, but are not limited to:
Strong desire to grow a career as a Data Scientist in highly automated industrial manufacturing, doing analysis and machine learning on terabytes and petabytes of diverse datasets.
Experience in the areas of statistical modeling, feature extraction and analysis, and supervised/unsupervised/semi-supervised learning. Exposure to the semiconductor industry is a plus but not a requirement.
Ability to extract data from different databases via SQL and other query languages and apply data cleansing, outlier identification, and missing-data techniques.
Strong software development skills.
Strong verbal and written communication skills.
Experience with or desire to learn:
Machine learning and other advanced analytical methods
Fluency in Python and/or R
PySpark and/or SparkR and/or sparklyr
Hadoop (Hive, Spark, HBase)
Teradata and/or other SQL databases
TensorFlow and/or other statistical software, including scripting capability for automating analyses
SSIS, ETL
JavaScript, AngularJS 2.0, Tableau
Experience working with time-series data, images, semi-supervised learning, and data with frequently changing distributions is a plus.
Experience working with Manufacturing Execution Systems (MES) is a plus.
Existing papers from CVPR, NIPS, ICML, KDD, and other key conferences are a plus, but this is not a research position.
About Micron Technology, Inc.
We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities — from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com
Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron.
AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification.
Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.
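The supervised-learning and missing-data skills listed in this posting can be illustrated with a compact scikit-learn pipeline. This is a hedged sketch: the CSV file, feature columns, and label column are hypothetical stand-ins, not a Micron dataset.

```python
# Minimal supervised-learning sketch with missing-value imputation.
# The CSV path, feature columns, and "label" target are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

df = pd.read_csv("sensor_readings.csv")          # tabular manufacturing-style data
X = df.drop(columns=["label"])
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # handle missing sensor values
    ("clf", RandomForestClassifier(n_estimators=200, random_state=42)),
])

model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```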

Posted 2 weeks ago

Apply

3.0 years

4 Lacs

Delhi

On-site

Job Description: Hadoop & ETL Developer
Location: Shastri Park, Delhi
Experience: 3+ years
Education: B.E./B.Tech/MCA/MSc (IT or CS)/MS
Salary: Up to 80k (the rest depends on the interview and experience)
Notice Period: Immediate joiners to those with up to 20 days' notice
Only candidates from Delhi/NCR will be preferred.
Job Summary: We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs.
Key Responsibilities
Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies.
Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation.
Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte.
Develop and manage workflow orchestration using Apache Airflow.
Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage.
Optimize MapReduce and Spark jobs for performance, scalability, and efficiency.
Ensure data quality, governance, and consistency across the pipeline.
Collaborate with data engineering teams to build scalable and high-performance data solutions.
Monitor, debug, and enhance big data workflows to improve reliability and efficiency.
Required Skills & Experience
3+ years of experience in the Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark).
Strong expertise in ETL processes, data transformation, and data warehousing.
Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte.
Proficiency in SQL and handling structured and unstructured data.
Experience with NoSQL databases like MongoDB.
Strong programming skills in Python or Scala for scripting and automation.
Experience in optimizing Spark and MapReduce jobs for high-performance computing.
Good understanding of data lake architectures and big data best practices.
Preferred Qualifications
Experience in real-time data streaming and processing.
Familiarity with Docker/Kubernetes for deployment and orchestration.
Strong analytical and problem-solving skills with the ability to debug and optimize data workflows.
If you have a passion for big data, ETL, and large-scale data processing, we'd love to hear from you!
Job Types: Full-time, Contractual/Temporary
Pay: From ₹400,000.00 per year
Work Location: In person
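Real-time ingestion with Kafka into Hadoop storage, as described in this posting, can be sketched with Spark Structured Streaming. This is an illustrative sketch only; the broker address, topic name, and HDFS paths are hypothetical, and the spark-sql-kafka connector package is assumed to be available on the cluster.

```python
# Minimal Structured Streaming sketch: consume a Kafka topic and land Parquet on HDFS.
# Broker, topic, and HDFS paths are hypothetical; the spark-sql-kafka connector
# must be on the classpath (e.g. --packages org.apache.spark:spark-sql-kafka-0-10_2.12:<version>).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_to_hdfs").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka values arrive as bytes; cast to string and keep the event timestamp.
parsed = events.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "hdfs:///data/landing/clickstream/")
    .option("checkpointLocation", "hdfs:///checkpoints/clickstream/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```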

Posted 2 weeks ago

Apply

5.0 - 9.0 years

3 - 9 Lacs

No locations specified

On-site

Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Sr Associate IS Architect
What you will do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to deliver actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and performing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Design, develop, and maintain data solutions for data generation, collection, and processing.
Be a key team member that assists in the design and development of the data pipeline.
Stand up and enhance BI reporting capabilities through Cognos, Power BI, or similar tools.
Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs.
Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
Implement data security and privacy measures to protect sensitive data.
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
Collaborate and communicate effectively with product teams.
Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions.
Adhere to best practices for coding, testing, and designing reusable code/components.
Explore new tools and technologies that will help to improve ETL platform performance.
Participate in sprint planning meetings and provide estimations on technical implementation.
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Master's degree / Bachelor's degree with 5-9 years of experience in Computer Science, IT, or a related field.
Functional Skills:
Must-Have Skills
Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience in using Databricks for building ETL pipelines and handling big data processing.
Experience with data warehousing platforms such as Amazon Redshift or Snowflake.
Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL).
Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets.
Experience in BI reporting tools such as Cognos, Power BI, and/or Tableau.
Experienced with software engineering best practices, including but not limited to version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps.
Good-to-Have Skills:
Experience with cloud platforms such as AWS, particularly data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena).
Experience with the Anaplan platform, including building, managing, and optimizing models and workflows, including scalable data integrations.
Understanding of machine learning pipelines and frameworks for ML/AI models.
Professional Certifications:
AWS Certified Data Engineer (preferred)
Databricks Certified (preferred)
Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
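Since this role highlights ensuring data quality within ETL pipelines, the following is a minimal PySpark sketch of a data-quality gate. The input path, required columns, and 1% null-rate threshold are hypothetical examples, not Amgen specifics.

```python
# Illustrative data-quality gate for a pipeline stage: row-count and null-rate checks.
# The input path, required columns, and thresholds are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

df = spark.read.parquet("s3a://curated-bucket/patients_enriched/")
required_columns = ["patient_id", "study_id", "visit_date"]

row_count = df.count()
assert row_count > 0, "Data quality check failed: empty dataset"

for column in required_columns:
    nulls = df.filter(F.col(column).isNull()).count()
    null_rate = nulls / row_count
    # Fail the run if more than 1% of values are missing in a required column.
    assert null_rate <= 0.01, f"Data quality check failed: {column} null rate {null_rate:.2%}"

print(f"Data quality checks passed for {row_count} rows")
```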

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role: refer to responsibilities.
You will be responsible for
Job Summary: Build solutions for real-world problems in workforce management for retail. You will work with a team of highly skilled developers and product managers throughout the entire software development life cycle of the products we own. In this role you will be responsible for designing, building, and maintaining our big data pipelines. Your primary focus will be on developing data pipelines using available technologies.
In this job, I'm accountable for: following our Business Code of Conduct and always acting with integrity and due diligence, and these specific risk responsibilities:
-Represent Talent Acquisition in all forums/seminars pertaining to process, compliance, and audit.
-Perform other miscellaneous duties as required by management.
-Drive CI culture, implementing CI projects and innovation within the team.
-Design and implement scalable and reliable data processing pipelines using Spark/Scala/Python and the Hadoop ecosystem.
-Develop and maintain ETL processes to load data into our big data platform.
-Optimize Spark jobs and queries to improve performance and reduce processing time.
-Work with product teams to communicate and translate needs into technical requirements.
-Design and develop monitoring tools and processes to ensure data quality and availability.
-Collaborate with other teams to integrate data processing pipelines into larger systems.
-Deliver high-quality code and solutions, bringing solutions into production.
-Perform code reviews to optimise the technical performance of data pipelines.
-Continually look for how we can evolve and improve our technology, processes, and practices.
-Lead group discussions on system design and architecture.
-Manage and coach individuals, providing regular feedback and career development support aligned with business goals.
-Allocate and oversee team workload effectively, ensuring timely and high-quality outputs.
-Define and streamline team workflows, ensuring consistent adherence to SLAs and data governance practices.
-Monitor and report key performance indicators (KPIs) to drive continuous improvement in delivery efficiency and system uptime.
-Oversee resource allocation and prioritization, aligning team capacity with project and business demands.
Key people and teams I work with in and outside of Tesco: TBS & Tesco Senior Management, TBS Reporting Team, Tesco UK/ROI/Central Europe, business stakeholders.
People, budgets and other resources I am accountable for in my job: any other accountabilities defined by the business.
Operational skills relevant for this job: ETL, YARN, Spark, Hive, Hadoop, PySpark/Python; good to have: Kafka, REST APIs/reporting tools.
Experience relevant for this job:
-7+ years of experience in building and maintaining big data and query platforms using Spark/Scala, in Linux/Unix/shell environments, including performance optimisation.
-Strong knowledge of distributed computing principles and big data technologies such as Hadoop, Spark, and streaming.
-Experience with ETL processes and data modelling.
-Problem-solving and troubleshooting skills.
-Working knowledge of Oozie/Airflow.
-Experience in writing unit test cases and shell scripting.
-Ability to work independently and as part of a team in a fast-paced environment.
You will need: refer to responsibilities.
What's in it for you?
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities, and planet a little better every day. Our Tesco Rewards framework consists of pillars: Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco are determined by four principles: simple, fair, competitive, and sustainable.
Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Performance Bonus - Opportunity to earn an additional compensation bonus based on performance, paid annually.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents, including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.
About Us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.
Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business.
TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.
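This posting asks for experience writing unit test cases for data pipelines; a minimal pytest sketch with a local SparkSession is shown below. The add_overtime_flag transformation and its columns are hypothetical examples, not Tesco code.

```python
# A small pytest sketch for unit-testing a PySpark transformation with a local SparkSession.
# The add_overtime_flag() transformation and its columns are hypothetical examples.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def add_overtime_flag(df, weekly_limit=48):
    """Flag shifts whose weekly hours exceed the configured limit."""
    return df.withColumn("overtime", F.col("weekly_hours") > weekly_limit)


@pytest.fixture(scope="module")
def spark():
    session = SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
    yield session
    session.stop()


def test_add_overtime_flag(spark):
    df = spark.createDataFrame(
        [("c1", 40), ("c2", 52)], ["colleague_id", "weekly_hours"]
    )
    result = {r["colleague_id"]: r["overtime"] for r in add_overtime_flag(df).collect()}
    assert result == {"c1": False, "c2": True}
```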

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Senior Software Engineer
Experience: 10+ Years
Top Skills: Java, Spring, Scala, AWS, Spark, SQL
Work Mode: Hybrid - 3 days from the office
Work Location: Marathahalli, Bangalore
Employer: Global Product Company - Established 1969
Why Join Us?
Be part of a global product company with over 50 years of innovation. Work in a collaborative and growth-oriented environment. Help shape the future of digital products in a rapidly evolving industry.
Required Job Skills and Abilities:
10+ years' experience in designing and developing enterprise-level software solutions
3 years' experience developing Scala/Java applications and microservices using Spring Boot
7 years' experience with large-volume data processing and big data tools such as Apache Spark, SQL, Scala, and Hadoop technologies
5 years' experience with SQL and relational databases
2 years' experience working with the Agile/Scrum methodology

Posted 2 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

IT | Pune Corporate Office - Mantri | Posted On: 27 Jul 2025 | End Date: 27 Jul 2026 | Required Experience: 4-6 Years
BASIC SECTION
Job Level: GB04
Job Title: Senior Data Engineer, ATG, Data Technology - Delivery
Job Location: Country: India | State: Maharashtra | Region: West | City: Pune | Location Name: Pune Corporate Office - Mantri | Tier: Tier 1
Skills: As per JD
Minimum Qualification: Others
JOB DESCRIPTION
Job Purpose
The Senior Data Engineer will be responsible for designing, building, and maintaining scalable and efficient data pipelines and architectures for the Enterprise Data Platform. This role will focus on enabling high-quality, reliable, and timely data access for analytics, reporting, and business decision-making. Working closely with business analysts, data scientists, and architects, the Senior Data Engineer will ensure data solutions meet business needs and adhere to best practices and governance standards.
Duties and Responsibilities
Design and implement robust, scalable, and high-performance data pipelines and ETL/ELT processes.
Develop, optimize, and maintain data architectures including databases, data lakes, and data warehouses.
Ensure the quality, integrity, and security of data through robust data validation and data quality frameworks.
Collaborate with business analysts and stakeholders to understand business data requirements and translate them into technical designs.
Work closely with data architects to align with enterprise architecture standards and strategies.
Implement data integration solutions with various internal and external data sources.
Monitor, troubleshoot, and optimize system performance and data workflows.
Support the migration of on-premise data solutions to cloud-based environments (e.g., AWS, Azure, GCP).
Stay up to date with the latest industry trends and technologies in data engineering and recommend innovative solutions.
Create and maintain comprehensive documentation for all developed data pipelines and systems.
Mentor junior data engineers and contribute to the development of best practices.
Key Decisions / Dimensions
Selecting appropriate technologies, tools, and frameworks for data pipeline development.
Designing data models and database schemas that optimize for both performance and scalability.
Establishing standards for code quality, data validation, and monitoring processes.
Identifying performance bottlenecks and recommending architectural improvements.
Major Challenges
Managing and processing large volumes of structured and unstructured data with efficiency.
Designing systems that can handle scaling needs as business requirements and data volumes grow.
Balancing the need for quick delivery with the necessity for scalable and maintainable code.
Ensuring data quality and compliance with data governance and security policies.
Integrating disparate data sources with differing formats and standards into unified models.
Required Qualifications and Experience
a) Qualifications
Bachelor's Degree in Computer Engineering, Computer Science, Information Technology, or a related field.
Professional certifications such as Google Professional Data Engineer, AWS Certified Data Analytics Specialty, or Microsoft Certified: Azure Data Engineer Associate are a plus.
b) Work Experience
Minimum of 4+ years of experience in data engineering or a related role.
Strong expertise in building and optimizing ETL/ELT pipelines and data workflows.
Proficient in programming languages such as Python, Java, or Scala.
Hands-on experience with SQL and relational database systems (e.g., PostgreSQL, SQL Server, MySQL).
Experience with big data technologies (e.g., Hadoop, Spark, Kafka).
Familiarity with cloud platforms (AWS, Azure, GCP) and cloud-native data services (e.g., Redshift, BigQuery, Snowflake, Databricks).
Solid understanding of data modeling, data warehousing concepts, and best practices.
Knowledge of CI/CD pipelines and infrastructure-as-code (IaC) is a plus.
Strong problem-solving skills and the ability to work independently or in a team.
c) Skills Keywords
Data Architecture, Delivery Management, Project Management, Cloud Data Platforms (e.g., Azure, AWS, GCP), Data Modeling, Data Governance, Stakeholder Management, Quality Assurance, Agile Methodology, Team Leadership, Budget Management, Risk Management, Data Integration, Scalable Data Solutions

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

The healthcare industry presents a significant opportunity for software development, and Health Catalyst stands out as a leading company in this domain. By joining our team, you have the chance to contribute to solving critical healthcare challenges at a national level, impacting the lives of millions. At Health Catalyst, we value individuals who are intelligent, hardworking, and humble, and we are committed to developing innovative tools to enhance healthcare performance, cost-efficiency, and quality.
As a Data Engineer at Health Catalyst, your primary focus will be on acquiring data from various sources within a health system's ecosystem. Leveraging Catalyst's Data Operating System, you will work closely with both the technical and business aspects of the source systems, utilizing multiple technologies to extract the necessary data.
Key Responsibilities include:
- Proficiency in Structured Query Language (SQL) and experience with EMR/EHR systems
- Leading the design, development, and maintenance of scalable data pipelines and ETL processes
- Strong expertise in ETL tools and database principles
- Excellent analytical and troubleshooting skills, with a strong customer service orientation
- Mentoring and guiding a team of data engineers to foster continuous learning and improvement
- Monitoring and resolving data infrastructure issues to ensure high availability and performance
- Ensuring data quality, integrity, and security across all data platforms
- Implementing best practices for data governance, lineage, and compliance
Desired Skills:
- Experience with RDBMS (SQL Server, Oracle, etc.) and Stored Procedures/T-SQL/SSIS
- Familiarity with processing HL7 messages, CCD documents, and EDI X12 claims files
- Knowledge of Agile development methodologies and the ability to work with technologies related to data acquisition
- Proficiency in Hadoop and other big data technologies
- Experience with Microsoft Azure cloud solutions, architecture, and related technologies
Education & Experience:
- Bachelor's degree in technology, business, or a healthcare-related field
- Minimum of 5 years of experience in data engineering, with at least 2 years in a leadership role
- 2+ years of experience in the healthcare/technology industry
If you are passionate about leveraging your expertise in data engineering to make a meaningful impact in the healthcare sector, we encourage you to apply and be a part of our dynamic and innovative team at Health Catalyst.

Posted 2 weeks ago

Apply

9.0 - 13.0 years

0 Lacs

Haryana

On-site

The role of a Data Scientist, Risk Data Analytics at Fidelity International involves taking a leading role in developing Data Science and Advanced Analytics solutions for the business. This includes engaging with key stakeholders in the Global Risk Team to understand subject areas such as Investment Risk, Non-Financial Risk, Enterprise Risk, Model Risk, and Enterprise Resilience. The Data Scientist will implement advanced analytics solutions on on-premises/cloud platforms, develop proof-of-concepts, and collaborate with internal and external teams to progress these concepts to production. Additionally, they will work on maximizing the adoption of cloud-based advanced analytics solutions by building sandbox analytics environments and supporting delivered models and infrastructure on AWS.
The Data Scientist will be responsible for developing and delivering Data Science solutions for the business, partnering with the internal and external ecosystem to design and deliver advanced analytics-enabled solutions. They will create advanced analytics solutions on quantitative and text data using Artificial Intelligence, Machine Learning, and NLP techniques, as well as compelling visualizations for customer benefit.
Stakeholder management is a key aspect of the role, involving working with Risk SMEs/Managers, stakeholders, and sponsors to understand business problems and translate them into appropriate analytics solutions. The Data Scientist will engage with key stakeholders for the smooth execution, delivery, implementation, and maintenance of solutions.
Moreover, the Data Scientist will focus on the adoption of cloud-enabled Data Science solutions by maximizing adoption of cloud-based advanced analytics solutions, building sandbox analytics environments, and deploying solutions in production while adhering to best practices. Collaboration and ownership are essential, which includes sharing knowledge and best practices with the team, providing mentoring, coaching, and consulting advice to staff, and taking complete independent ownership of projects and initiatives in the team with minimal support.
The ideal candidate for this role should have a strong educational background, such as an engineering degree from IIT, a Master's in a field related to Data Science, Economics, or Mathematics, or an MBA from a tier-1 institution. They should have a minimum of 9 years of experience in Data Science and Analytics, with hands-on experience in Statistical Modelling, Machine Learning Techniques, Natural Language Processing, Deep Learning, and Python. The candidate should possess excellent problem-solving skills, the ability to run analytics applications, interpret statistical results, and implement models with clear, measurable outcomes. Additionally, experience with Spark/Hadoop/Big Data platforms, unstructured data, big data, and primary market research is beneficial.
Fidelity International offers a comprehensive benefits package, values employee wellbeing, supports development, and promotes flexible working arrangements. The organization is committed to making employees feel motivated by their work and happy to be part of the team. If you are looking to build your future in a dynamic and innovative environment, consider joining Fidelity International's Data Value team. Visit careers.fidelityinternational.com for more information on opportunities to be a part of the team.
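The NLP and machine-learning work on text data described in this posting can be illustrated with a small scikit-learn text-classification pipeline. The toy documents and labels below are invented for illustration; they are not Fidelity data.

```python
# Minimal NLP sketch: TF-IDF features + logistic regression for classifying risk-event text.
# The toy documents and labels below are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

documents = [
    "trade limit breached on equity desk",
    "failed login attempts spiked overnight",
    "portfolio exposure within approved limits",
    "routine access review completed successfully",
]
labels = ["risk_event", "risk_event", "no_action", "no_action"]

classifier = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("model", LogisticRegression(max_iter=1000)),
])
classifier.fit(documents, labels)

print(classifier.predict(["exposure limit breached on fx desk"]))
```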

Posted 2 weeks ago

Apply

18.0 - 22.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As the General Manager/Director of Engineering for India Operations at our organization, your primary responsibility will be overseeing the product engineering, quality assurance, and product support functions. You will play a crucial role in ensuring the delivery of high-quality software products that meet SLAs, are delivered on time, and are within budget. Collaborating with the CTO and other team members, you will help develop a long-term product plan for client products and manage the release planning cycles for all products.
A key aspect of your role will involve resource management and ensuring that each product team has the necessary skilled resources to meet deliverables. You will also be responsible for developing and managing a skills escalation and promotion path for the product engineering organization, as well as implementing tools and processes to optimize product engineering throughput and quality.
Key Result Areas (KRAs) for this role include working effectively across multiple levels in the organization and in a global setting, ensuring key milestones are met, delivering high-quality solutions, meeting project timelines and SLAs, maintaining customer satisfaction, ensuring controlled releases to production, and aligning personnel with tasks effectively. Additionally, you will need to have a deep understanding of our products, their interrelationships, and relevance to the business to ensure their availability and stability.
To qualify for this role, you should have a Bachelor's degree in Computer Science/Engineering from a premier institute, with an MBA preferred. You should have at least 18 years of software development experience, including 10+ years in a managerial capacity. Strong knowledge of the software development process, hands-on implementation experience, leadership experience in an early-stage start-up, familiarity with mobile technologies, and professional experience with interactive languages and technologies such as Flex, PHP, HTML5, MySQL, and MongoDB are desired. Experience with Agile methodology and on-site experience working in the US would be advantageous.
In summary, as the General Manager/Director of Engineering for India Operations, you will be instrumental in driving the success of our product engineering efforts, ensuring high-quality deliverables, and optimizing processes to meet business objectives effectively. If you are interested in this exciting opportunity, please reach out to us at jobs@augustainfotech.com.

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role Overview
We are looking for an experienced Solution Architect (AI/ML & Data Engineering) to lead the design and delivery of advanced data and AI/ML solutions for our clients. The ideal candidate will have a strong background in end-to-end data architecture, AI lifecycle management, cloud technologies, and emerging Generative AI.
Responsibilities
Collaborate with clients to understand business requirements and design robust data solutions.
Lead the development of end-to-end data pipelines including ingestion, storage, processing, and visualization.
Architect scalable, secure, and compliant data systems following industry best practices.
Guide data engineers, analysts, and cross-functional teams to ensure timely delivery of solutions.
Participate in pre-sales efforts: solution design, proposal creation, and client presentations.
Act as a technical liaison between clients and internal teams throughout the project lifecycle.
Stay current with emerging technologies in AI/ML, data platforms, and cloud services.
Foster long-term client relationships and identify opportunities for business expansion.
Understand and architect across the full AI lifecycle, from ingestion to inference and operations.
Provide hands-on guidance for containerization and deployment using Kubernetes.
Ensure proper implementation of data governance, modeling, and warehousing.
Requirements
Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
10+ years of experience as a Data Solution Architect or in a similar role.
Deep technical expertise in data architecture, engineering, and AI/ML systems.
Strong experience with Hadoop-based platforms, ideally Cloudera Data Platform or Data Fabric.
Proven pre-sales experience: technical presentations, solutioning, and RFP support.
Proficiency in cloud platforms (Azure preferred; also AWS or GCP) and cloud-native data tools.
Exposure to Generative AI frameworks and LLMs such as OpenAI and Hugging Face.
Experience in deploying and managing applications on Kubernetes (AKS, EKS, GKE).
Familiarity with data governance, data modeling, and large-scale data warehousing.
Excellent problem-solving, communication, and client-facing skills.
Skills & Technology
Architecture & Engineering: Hadoop Ecosystem: Cloudera Data Platform, Data Fabric, HDFS, Hive, Spark, HBase, Oozie. ETL & Integration: Apache NiFi, Talend, Informatica, Azure Data Factory, AWS Glue. Warehousing: Azure Synapse, Redshift, BigQuery, Snowflake, Teradata, Vertica. Streaming: Apache Kafka, Azure Event Hubs, AWS.
Cloud Platforms: Azure (preferred), AWS, GCP. Data Lakes: ADLS, AWS S3, Google Cloud. Platforms: Data Fabric, AI Essentials, Unified Analytics, MLDM, MLDE.
AI/ML & GenAI: Lifecycle Tools: MLflow, Kubeflow, Azure ML, SageMaker, Ray. Inference: TensorFlow Serving, KServe, Seldon. Generative AI: Hugging Face, LangChain, OpenAI API (GPT-4, etc.).
DevOps & Deployment: Kubernetes: AKS, EKS, GKE, open-source K8s, Helm. CI/CD: Jenkins, GitHub Actions, GitLab CI, Azure DevOps.
(ref:hirist.tech)
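MLflow appears among the AI-lifecycle tools listed in this posting; a minimal tracking sketch is shown below. The experiment name and toy iris model are illustrative only, and a local ./mlruns directory (or a configured tracking server) is assumed.

```python
# Minimal MLflow tracking sketch: log parameters, a metric, and a fitted model.
# The experiment name and toy dataset are illustrative only; a tracking server
# (or the default local ./mlruns directory) is assumed to be available.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("iris-demo")
with mlflow.start_run():
    n_estimators = 100
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", n_estimators)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")
```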

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be joining Atgeir Solutions, a leading innovator in technology, renowned for its commitment to excellence. As a Technical Lead specializing in Big Data and Cloud technologies, you will have the opportunity for advancement to the role of Technical Architect.
Your responsibilities will include leveraging your expertise in Big Data and Cloud technologies to contribute to the design, development, and implementation of complex systems. You will lead and inspire a team of professionals, offering technical guidance and mentorship to foster a collaborative and innovative work environment. In addition, you will be tasked with solving intricate technical challenges and guiding your team in overcoming obstacles in Big Data and Cloud environments. Investing in the growth and development of your team members will be crucial, including identifying training needs, organizing knowledge-sharing sessions, and promoting a culture of continuous learning. Collaboration with stakeholders, such as clients, architects, and other leads, will be essential to understand requirements and align technology strategies with business goals, particularly in the realm of Big Data and Cloud.
To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with 7-10 years of experience in software development. A proven track record of technical leadership in Big Data and Cloud environments is required. Proficiency in technologies like Hadoop, Spark, GCP, AWS, and Azure is essential, with knowledge of Databricks/Snowflake considered an advantage. Strong communication and interpersonal skills are necessary to convey technical concepts to various stakeholders effectively.
Upon successful tenure as a Technical Lead, you will have the opportunity to progress into the role of Technical Architect. This advancement will entail additional responsibilities related to system architecture, design, and strategic technical decision-making, with a continued focus on Big Data and Cloud technologies.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As an ideal candidate for this role, you will be responsible for designing and architecting scalable Big Data solutions within the Hadoop ecosystem. Your key duties will include leading architecture-level discussions for data platforms and analytics systems, constructing and optimizing data pipelines utilizing PySpark and other distributed computing tools, and transforming business requirements into scalable data models and integration workflows. It will be crucial for you to guarantee the high performance and availability of enterprise-grade data processing systems. Additionally, you will play a vital role in mentoring development teams and offering guidance on best practices and performance tuning.
Your must-have skills for this position include architect-level experience with the Big Data ecosystem and enterprise data solutions, proficiency in Hadoop, PySpark, and distributed data processing frameworks, as well as hands-on experience in SQL and data warehousing concepts. A deep understanding of data lake architecture, data ingestion, ETL, and orchestration tools, along with experience in performance optimization and large-scale data handling, will be essential. Your problem-solving, design, and analytical skills should be excellent.
While not mandatory, it would be beneficial if you have exposure to cloud platforms such as AWS, Azure, or GCP for data solutions, and possess knowledge of data governance, data security, and metadata management.
Joining our team will provide you with the opportunity to work on cutting-edge Big Data technologies, gain leadership exposure, and be directly involved in architectural decisions. This role offers stability as a full-time position within a top-tier tech team, ensuring a work-life balance with a 5-day working schedule. (ref:hirist.tech)

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be working as a Lead Data Engineer & Architect at Phonologies, a company that specializes in managing telephony infrastructure for contact center applications and chatbots. Phonologies' platform is utilized by leading pharmacy chains, Fortune 500 companies, and North America's largest carrier to automate voice-based customer support queries, enhancing customer interactions and operational efficiencies. The company is headquartered in India and operates on a global scale.
As the Lead Data Engineer & Architect, your role will involve designing and implementing data architecture, creating and maintaining data pipelines, performing data analysis, and collaborating with different teams to enhance data-driven decision-making processes. You will also be tasked with leading data engineering projects, ensuring data quality and security, and leveraging your expertise to drive successful outcomes.
To be successful in this role, you should possess at least 10 years of experience in enterprise data engineering and architecture. You must be proficient in ETL processes, orchestration, and streaming pipelines, with a strong skill set in technologies such as Hadoop, Spark, Azure, Kafka, and Kubernetes. Additionally, you should have a track record of building MLOps- and AutoML-ready production pipelines and delivering solutions for the telecom, banking, and public sector industries. Your ability to lead cross-functional teams with a focus on client satisfaction will be crucial, as well as your certifications in data platforms and AI leadership.
If you are passionate about data engineering, architecture, and driving innovation in a dynamic environment, this role at Phonologies may be the perfect opportunity for you. Join our team in Pune and contribute to our mission of revolutionizing customer support through cutting-edge technology solutions.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Maharashtra

On-site

As a Senior Specialist in Software Development (Artificial Intelligence) at Accelya, you will lead the design, development, and implementation of AI and machine learning solutions to tackle complex business challenges. Your expertise in AI algorithms, model development, and software engineering best practices will be crucial in working with cross-functional teams to deliver intelligent systems that optimize business operations and decision-making.
Your responsibilities will include designing and developing AI-driven applications and platforms using machine learning, deep learning, and NLP techniques. You will lead the implementation of advanced algorithms for supervised and unsupervised learning, reinforcement learning, and computer vision. Additionally, you will develop scalable AI models, integrate them into software applications, and build APIs and microservices for deployment in cloud environments or on-premise systems.
Collaboration with data scientists and data engineers will be essential in gathering, preprocessing, and analyzing large datasets. You will also implement feature engineering techniques to enhance the accuracy and performance of machine learning models. Regular evaluation of AI models using performance metrics and fine-tuning them for optimal accuracy will be part of your role.
Furthermore, you will collaborate with business stakeholders to identify AI adoption opportunities, provide technical leadership and mentorship to junior team members, and stay updated with the latest AI trends and research to introduce innovative techniques to the team. Ensuring ethical compliance, security, and continuous improvement of AI systems will also be key aspects of your role.
You should hold a Bachelor's degree in Computer Science, Data Science, Artificial Intelligence, or a related field, along with at least 5 years of experience in software development focusing on AI and machine learning. Proficiency in AI frameworks and libraries, programming languages such as Python, R, or Java, and cloud platforms for deploying AI models is required. Familiarity with Agile methodologies, data structures, and databases is essential. Preferred qualifications include a Master's or PhD in Artificial Intelligence or Machine Learning, experience with NLP techniques and computer vision technologies, and certifications in AI/ML or cloud platforms.
Accelya is looking for individuals who are passionate about shaping the future of the air transport industry through innovative AI solutions. If you are ready to contribute your expertise and drive continuous improvement in AI systems, this role offers you the opportunity to make a significant impact in the industry.
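Building APIs and microservices to serve AI models, as mentioned in this posting, can be sketched with a small FastAPI service. This is an illustrative sketch: the demo iris classifier, field names, and /predict route are hypothetical, and a real service would load a persisted model artifact rather than training at startup.

```python
# Minimal model-serving sketch: expose a scikit-learn classifier through a FastAPI endpoint.
# The model, feature names, and route are illustrative; run with `uvicorn app:app --reload`.
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a small demo model at startup; a real service would load a persisted model artifact.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

app = FastAPI(title="demo-inference-service")


class Features(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float


@app.post("/predict")
def predict(features: Features) -> dict:
    row = [[features.sepal_length, features.sepal_width,
            features.petal_length, features.petal_width]]
    prediction = int(model.predict(row)[0])
    return {"predicted_class": prediction}
```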

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

haryana

On-site

Join GlobalLogic as a valuable member of the team working on a significant software project for a world-class company that provides M2M / IoT 4G/5G modules to industries such as automotive, healthcare, and logistics. Your engagement will involve contributing to the development of end-user modules' firmware, implementing new features, maintaining compatibility with the latest telecommunication and industry standards, and analyzing and estimating customer requirements.

Requirements
- BA / BS degree in Computer Science, Mathematics, or a related technical field, or equivalent practical experience.
- Proficiency in Cloud SQL and Cloud Bigtable.
- Experience with Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub, and Genomics.
- Familiarity with Google Transfer Appliance, Cloud Storage Transfer Service, and BigQuery Data Transfer.
- Knowledge of data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and data processing algorithms (MapReduce, Flume).
- Previous experience working with technical customers.
- Proficiency in writing software in languages like Java or Python.
- 6-10 years of relevant consulting, industry, or technology experience.
- Strong problem-solving and troubleshooting skills.
- Excellent communication skills.

Job Responsibilities
- Hands-on experience working with data warehouses, including technical architectures, infrastructure components, ETL / ELT, and reporting / analytic tools.
- Experience in technical consulting.
- Proficiency in architecting and developing software or internet-scale Big Data solutions in virtualized environments like Google Cloud Platform (mandatory) and AWS / Azure (good to have).
- Familiarity with big data, information retrieval, data mining, machine learning, and building high availability applications with modern web technologies.
- Working knowledge of ITIL and / or agile methodologies.
- Google Data Engineer certification.

What We Offer
- Culture of caring: prioritize a culture of caring, where people come first, fostering an inclusive environment of acceptance and belonging.
- Learning and development: commitment to continuous learning and growth, offering various programs, training curricula, and hands-on opportunities for personal and professional advancement.
- Interesting and meaningful work: engage in impactful projects that allow for creative problem-solving and exploration of new solutions.
- Balance and flexibility: embrace work-life balance with diverse career areas, roles, and work arrangements to support personal well-being.
- High-trust organization: join a high-trust organization with a focus on integrity, trustworthiness, and ethical practices.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner known for collaborating with forward-thinking companies to create innovative digital products and experiences. Join the team in transforming businesses and industries through intelligent products, platforms, and services, contributing to cutting-edge solutions that shape the world today.
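
For illustration, a minimal sketch of the BigQuery work referenced in the requirements, using the google-cloud-bigquery Python client. The project, dataset, table, and column names are assumptions for demonstration only.

# Minimal sketch: run a parameterized aggregation query against BigQuery.
# Project, dataset, table, and column names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # assumed project id

sql = """
    SELECT device_type, COUNT(*) AS event_count
    FROM `example-project.telemetry.module_events`
    WHERE event_date = @event_date
    GROUP BY device_type
    ORDER BY event_count DESC
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("event_date", "DATE", "2024-01-01")]
)

# Submit the query and stream the results.
for row in client.query(sql, job_config=job_config).result():
    print(f"{row.device_type}: {row.event_count}")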

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

The ideal candidate for the Big Data Engineer role has 3-6 years of experience and is based in Hyderabad, with strong skills in Spark, Python/Scala, AWS/Azure, Snowflake, Databricks, and SQL Server/NoSQL.

As a Big Data Engineer, your main responsibilities will include designing and implementing data pipelines for both batch and real-time data processing, optimizing data storage solutions for efficiency and scalability, collaborating with analysts and business teams to meet data requirements, and monitoring data pipeline performance and troubleshooting any issues that arise. Ensuring compliance with data security and privacy policies is crucial.

The required skills for this role include proficiency in Python, SQL, and ETL frameworks, experience with big data tools such as Spark and Hadoop, strong knowledge of cloud services and databases, and familiarity with data modeling and warehousing concepts.
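
A batch pipeline of the kind described might start from something like the PySpark sketch below: read raw files, clean and aggregate them, and write partitioned output. The bucket paths and column names are assumptions for demonstration only.

# Minimal sketch: batch ETL with PySpark — read raw CSV, clean, aggregate, write partitioned Parquet.
# File paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-batch-etl").getOrCreate()

orders = (
    spark.read.option("header", True).csv("s3://raw-bucket/orders/")  # assumed source path
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)
)

# Daily revenue per region.
daily_revenue = (
    orders.withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"))
)

(daily_revenue.write.mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://curated-bucket/daily_revenue/"))  # assumed target path

spark.stop()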

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

noida, uttar pradesh

On-site

As a Data Pipeline Architect at our company, you will be responsible for designing, developing, and maintaining optimal data pipeline architecture. You will monitor incidents, perform root cause analysis, and implement appropriate actions to ensure smooth operations. You will also troubleshoot issues related to abnormal job execution and data corruption, and automate jobs, notifications, and reports for efficiency.

Your role will involve optimizing existing queries, reverse engineering for data research and analysis, and calculating the impact of issues on downstream processes for effective communication. You will support failures, address data quality issues, and ensure the overall health of the environment. Maintaining ingestion and pipeline runbooks, portfolio summaries, and DBAR will be part of your responsibilities. Furthermore, you will enable the roadmap of infrastructure changes, enhancements, and updates, and build the infrastructure for optimal extraction, transformation, and loading of data from various sources using big data technologies, Python, or web-based APIs. Conducting and participating in code reviews with peers, communicating effectively, and understanding requirements will be essential in this role.

To qualify for this position, you should hold a Bachelor's degree in Engineering/Computer Science or a related quantitative field. You must have a minimum of 8 years of programming experience with Python and SQL, as well as hands-on experience with GCP, BigQuery, Dataflow, Data Warehousing, Apache Beam, and Cloud Storage. Experience with massively parallel processing systems like Spark or Hadoop, source code control systems (Git), and CI/CD processes is required, along with involvement in designing, prototyping, and delivering software solutions within the big data ecosystem, developing generative AI models, and ensuring code quality through reviews. Experience with Agile development methodologies, improving data governance and quality, and increasing data reliability is also important.

Joining our team at EXL Analytics offers you the opportunity to work in a dynamic and innovative environment alongside experienced professionals. You will gain insights into various business domains, develop teamwork and time-management skills, and receive training in analytics tools and techniques. Our mentoring program and growth opportunities ensure that you have the support and guidance needed to excel in your career. Sky is the limit for our team members, and the experiences gained at EXL Analytics pave the way for personal and professional development within our company and beyond.
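
The extraction-transformation-loading work described above could be sketched as a small Apache Beam pipeline that reads JSON events from Cloud Storage and loads them into BigQuery. The bucket, project, table, and schema below are assumptions for demonstration only.

# Minimal sketch: Apache Beam pipeline — read text from Cloud Storage, parse, load into BigQuery.
# Bucket, project, table, and schema are illustrative assumptions.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_event(line: str) -> dict:
    record = json.loads(line)
    return {"user_id": record["user_id"], "event": record["event"], "ts": record["ts"]}

options = PipelineOptions(
    runner="DataflowRunner",               # assumed runner; DirectRunner works for local tests
    project="example-project",
    region="us-central1",
    temp_location="gs://example-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/events/*.json")
        | "Parse" >> beam.Map(parse_event)
        | "Write" >> beam.io.WriteToBigQuery(
            "example-project:analytics.events",
            schema="user_id:STRING,event:STRING,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )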

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

pune, maharashtra

On-site

You are an experienced professional with over 7 years of experience in application or production support, looking to join our Production/Application Support team. You bring a blend of strong technical skills in Unix, SQL, and Big Data technologies, together with domain expertise in financial services such as securities, secured financing, rates, liquidity reporting, derivatives, front office/back office systems, and the trading lifecycle.

Your key responsibilities will include providing L2 production support for mission-critical liquidity reporting and financial applications, ensuring high availability and performance. You will monitor and resolve incidents related to trade capture, batch failures, position keeping, market data, pricing, risk, and liquidity reporting, and proactively manage alerts, logs, and jobs using Autosys, Unix tools, and monitoring platforms such as ITRS/AWP.

In this role, you will execute advanced SQL queries and scripts for data analysis, validation, and issue resolution. You will also support multiple applications built on stored procedures, SSIS, SSRS, and Big Data ecosystems (Hive, Spark, Hadoop), and troubleshoot data pipeline issues. It will be your responsibility to maintain and improve knowledge bases, SOPs, and runbooks for production support while actively participating in change management and release activities, including deployment validations. You will lead root cause analysis (RCA), conduct post-incident reviews, and drive permanent resolutions. Collaboration with infrastructure teams on capacity, performance, and system resilience initiatives will be crucial, and your contribution to continuous service improvement, stability management, and automation initiatives will be highly valued.

To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field, and a minimum of 7 years of experience in application or production support, with at least 2 years at an advanced level. Hands-on experience with Unix/Linux scripting, file manipulation, and job control; SQL (MSSQL/Oracle or similar), stored procedures, SSIS, and SSRS; Big Data technologies (Hadoop, Hive, Spark); job schedulers such as Autosys; and log analysis tools is essential. A solid understanding of financial instruments and the trade lifecycle, knowledge of front office/back office and reporting workflows and operations, excellent analytical and problem-solving skills, effective communication and stakeholder management skills, and experience with ITIL processes are also key requirements for this role. If you meet these qualifications and are looking to join a dynamic team, we encourage you to apply.
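
For illustration, a small hypothetical script of the log-monitoring kind such support work involves: scanning batch job logs for errors and summarising failures by job name. The log directory and line format are assumptions for demonstration, not the actual environment.

# Minimal sketch: scan batch job logs for errors and summarise by job name and severity.
# Log directory and line format ("<date> <time> <LEVEL> <job_name> <message>") are assumptions.
import re
from collections import Counter
from pathlib import Path

LOG_DIR = Path("/var/log/batch")  # assumed log location
ERROR_PATTERN = re.compile(r"^(\S+ \S+) (ERROR|FATAL) (\S+) (.*)$")

def summarise_failures(log_dir: Path) -> Counter:
    failures = Counter()
    for log_file in log_dir.glob("*.log"):
        with log_file.open() as fh:
            for line in fh:
                match = ERROR_PATTERN.match(line.strip())
                if match:
                    _, level, job_name, _ = match.groups()
                    failures[(job_name, level)] += 1
    return failures

if __name__ == "__main__":
    # Print the noisiest failing jobs first, tab-separated for easy grep/sort downstream.
    for (job, level), count in summarise_failures(LOG_DIR).most_common():
        print(f"{job}\t{level}\t{count}")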

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

We are currently seeking a Software Development Engineer-II for the Location Program within the Data & Services group. As a Sr. Software Development Engineer (Big Data Engineer), you will own end-to-end delivery of engineering projects for analytics and BI solutions that combine Mastercard datasets with proprietary analytics techniques, helping businesses worldwide solve multi-million dollar business problems.

Your responsibilities will include working as a member of a support team to resolve product-related issues, demonstrating good troubleshooting skills and knowledge in support work. You should independently apply problem-solving skills to identify symptoms and root causes of issues, making effective decisions even when data is ambiguous. Providing technical guidance, support, and mentoring to junior team members will be crucial, along with actively contributing to improvement decisions and making technology recommendations that balance business needs and technical requirements. You must proactively understand stakeholder needs, goals, expectations, and viewpoints to deliver results effectively, and ensure that design thinking accounts for the long-term maintainability of code. You are expected to thrive in a highly collaborative company environment where agility is paramount, and to stay up to date with the latest technologies and technical advancements through self-study, blogs, meetups, conferences, and similar channels. System maintenance, production incident problem management, identification of root causes, and issue remediation will also fall under your responsibilities.

To excel in this role, you should have a Bachelor's degree in Information Technology, Computer Science, or Engineering, or equivalent work experience, along with a proven track record of successfully delivering complex technical assignments. You should possess a solid foundation in Computer Science fundamentals, web applications, and microservices-based software architecture, full-stack development experience including databases such as Oracle, Netezza, and SQL Server, and hands-on experience with technologies such as Hadoop, Python, and Impala. Excellent SQL skills are essential, with experience working with large and complex data sources and the ability to comprehend and write complex queries. Experience working in Agile teams and familiarity with Agile/SAFe tenets and ceremonies is necessary, as are strong analytical and problem-solving abilities and quick adaptation to new technologies, methodologies, and systems. Excellent English communication skills, both written and verbal, are required to interact effectively with multiple technical teams and stakeholders. To succeed in this role, you should be high-energy, detail-oriented, and proactive, with the ability to work under pressure in an independent environment and a high degree of initiative and self-motivation to drive results.
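
As one example of the "complex query" skill referenced above, the sketch below runs a windowed SQL query through Spark SQL: ranking spend within each country and keeping a running total. The table and column names are assumptions for demonstration only.

# Minimal sketch: a windowed Spark SQL query over an in-memory sample table.
# Table and column names are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("complex-sql-example").getOrCreate()

spark.createDataFrame(
    [("2024-01-01", "IN", 120.0), ("2024-01-02", "IN", 80.0), ("2024-01-01", "US", 200.0)],
    ["txn_date", "country", "amount"],
).createOrReplaceTempView("transactions")

# Rank days by spend within each country and compute a running total per country.
result = spark.sql("""
    SELECT
        country,
        txn_date,
        amount,
        RANK() OVER (PARTITION BY country ORDER BY amount DESC) AS spend_rank,
        SUM(amount) OVER (PARTITION BY country ORDER BY txn_date) AS running_total
    FROM transactions
""")
result.show()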

Posted 2 weeks ago

Apply

6.0 - 11.0 years

15 - 22 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Primary skills: PySpark / Hadoop / Scala. Notice period: immediate to 60 days.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Senior Manager of Software Engineering at JPMorgan Chase within the Consumer and Community Banking – Data Technology team, you lead a technical area and drive impact within teams, technologies, and projects across departments. You will utilize your in-depth knowledge of software, applications, technical processes, and product management to drive multiple complex projects and initiatives, serving as a primary decision maker for your teams and a driver of innovation and solution delivery.

Job Responsibilities
- Leads the data publishing and processing platform engineering team to achieve business and technology objectives.
- Accountable for technical tool evaluation, platform builds, and design and delivery outcomes.
- Carries governance accountability for coding decisions, control obligations, and measures of success such as cost of ownership, maintainability, and portfolio operations.
- Delivers technical solutions that can be leveraged across multiple businesses and domains.
- Influences peer leaders and senior stakeholders across the business, product, and technology teams.
- Champions the firm's culture of diversity, equity, inclusion, and respect.

Required Qualifications, Capabilities, And Skills
- Formal training or certification on software engineering concepts and 5+ years of applied experience, including 2+ years of experience leading technologists to manage and solve complex technical items within your domain of expertise.
- Expertise in programming languages such as Python and Java, with a strong understanding of cloud services including AWS, EKS, SNS, SQS, CloudFormation, Terraform, and Lambda.
- Proficient in messaging services like Kafka and big data technologies such as Hadoop, Spark-SQL, and PySpark.
- Experienced with Teradata, Snowflake, or other RDBMS databases, with a solid understanding of Teradata or Snowflake.
- Advanced experience in leading technologists to manage, anticipate, and solve complex technical challenges, along with experience in developing and recognizing talent within cross-functional teams.
- Experience leading a product as a Product Owner or Product Manager, with practical cloud-native experience.

Preferred Qualifications, Capabilities, And Skills
- Previous experience leading or building Platforms & Frameworks teams.
- Skilled in orchestration tools like Airflow (preferable) or Control-M, and experienced in continuous integration and continuous deployment (CI/CD) using Jenkins.
- Experience with observability tools, frameworks, and platforms.
- Experience with large-scale, secure, distributed, complex architecture and design.
- Experience with non-functional topics such as security, performance, and code and design best practices.
- AWS Certified Solutions Architect, AWS Certified Developer, or a similar certification is a big plus.
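
An orchestration layer of the kind mentioned under preferred skills could be sketched as a small Airflow DAG chaining ingestion, transformation, and publishing steps. The DAG id, schedule, and task bodies are assumptions for demonstration, assuming a recent Airflow 2.x installation.

# Minimal sketch: Airflow DAG chaining ingest -> transform -> publish steps.
# DAG id, schedule, and task bodies are illustrative assumptions (Airflow 2.4+ assumed).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest(**context):
    print("pull raw files from the landing zone")   # placeholder for real ingestion logic

def transform(**context):
    print("run the Spark transformation job")       # placeholder, e.g. submit a PySpark job

def publish(**context):
    print("publish curated data to consumers")      # placeholder for downstream publishing

with DAG(
    dag_id="data_publishing_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    publish_task = PythonOperator(task_id="publish", python_callable=publish)

    ingest_task >> transform_task >> publish_task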

Posted 2 weeks ago

Apply