
59 AWS EMR Jobs

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 - 7.0 years

7 - 9 Lacs

Bengaluru

Work from Office

Naukri logo

As a senior SAP Consultant, you will serve as a client-facing practitioner, working collaboratively with clients to deliver high-quality solutions and acting as a trusted business advisor with a deep understanding of the SAP Accelerate delivery methodology (or equivalent) and its associated work products. You will work on projects that help clients integrate strategy, process, technology, and information to enhance effectiveness, reduce costs, and improve profit and shareholder value. There are opportunities to acquire new skills, work across different disciplines, take on new challenges, and develop a comprehensive understanding of various industries.

Your primary responsibilities include:
Strategic SAP Solution Focus: Working across technical design, development, and implementation of SAP solutions for simplicity, amplification, and maintainability that meet client needs.
Comprehensive Solution Delivery: Involvement in strategy development and solution implementation, leveraging your knowledge of SAP and working with the latest technologies.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
Total 5-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering.
Minimum 4 years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on Cloud Data Platforms on AWS.
Exposure to streaming solutions and message brokers such as Kafka.
Experience with AWS EMR / AWS Glue / Databricks, Amazon Redshift, and DynamoDB.
Good to excellent SQL skills.

Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Certified Spark developers.

Posted 21 hours ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Kochi

Work from Office

Naukri logo

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
Build data pipelines to ingest, process, and transform data from files, streams, and databases.
Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
Develop efficient software code for multiple use cases, leveraging the Spark framework with Python or Scala and Big Data technologies built on the platform.
Develop streaming pipelines.
Work with Hadoop / AWS ecosystem components to implement scalable solutions that handle ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
Minimum 4 years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on Cloud Data Platforms on AWS.
Experience with AWS EMR / AWS Glue / Databricks, Amazon Redshift, and DynamoDB.
Good to excellent SQL skills.
Exposure to streaming solutions and message brokers such as Kafka.

Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Certified Spark developers.
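
For illustration only, a minimal PySpark sketch of the kind of batch pipeline this role describes; the bucket paths and column names are hypothetical placeholders:

# Minimal PySpark batch pipeline sketch: ingest a CSV, clean it, and write Parquet.
# Bucket names, paths, and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("ingest-orders")   # job name shown in the EMR / Spark UI
    .getOrCreate()
)

# Ingest raw data from S3 (files could equally come from streams or databases).
raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/orders/")

# Transform: type the amount column, drop bad rows, add a load date.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .withColumn("load_date", F.current_date())
)

# Load the result as partitioned Parquet for downstream consumers (Athena, Redshift Spectrum, etc.).
clean.write.mode("overwrite").partitionBy("load_date").parquet("s3://example-curated-bucket/orders/")

spark.stop()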

Posted 1 week ago

Apply

5.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Naukri logo

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
Build data pipelines to ingest, process, and transform data from files, streams, and databases.
Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
Develop efficient software code for multiple use cases, leveraging the Spark framework with Python or Scala and Big Data technologies built on the platform.
Develop streaming pipelines.
Work with Hadoop / AWS ecosystem components to implement scalable solutions that handle ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
Total 5-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering.
Minimum 4 years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on Cloud Data Platforms on AWS.
Exposure to streaming solutions and message brokers such as Kafka.
Experience with AWS EMR / AWS Glue / Databricks, Amazon Redshift, and DynamoDB.
Good to excellent SQL skills.

Preferred technical and professional experience:
Certification in AWS and Databricks, or Cloudera Certified Spark developers.
AWS S3, Redshift, and EMR for data storage and distributed processing.
AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes.
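
A rough sketch of the serverless, event-driven pattern mentioned above: an S3 object-created event triggers an AWS Lambda function that starts a Glue ETL job via boto3. The Glue job name and argument keys are hypothetical:

# Sketch of a serverless, event-driven trigger: an S3 object-created event invokes this
# Lambda, which starts a (hypothetical) Glue ETL job for the newly arrived file.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    run_ids = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # "orders-etl" is a placeholder Glue job name; arguments are passed to the job script.
        response = glue.start_job_run(
            JobName="orders-etl",
            Arguments={"--input_path": f"s3://{bucket}/{key}"},
        )
        run_ids.append(response["JobRunId"])
    return {"started_runs": run_ids}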

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 22 Lacs

Gurugram

Work from Office

Naukri logo

To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9

As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention of exceeding client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.

Role & responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Preferred candidate profile:
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focused on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
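
As a sketch of the analytics side of such a stack, the snippet below runs an ad-hoc Amazon Athena query from Python with boto3; the database, table, and results bucket names are hypothetical:

# Hedged sketch: running an ad-hoc analytics query over the data lake with Amazon Athena via boto3.
# Database, table, and output bucket names are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena")

def run_query(sql: str):
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "sales_lake"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    # Poll until the query finishes (fine for a sketch; production code would back off and time out).
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")

    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]

if __name__ == "__main__":
    print(run_query("SELECT region, SUM(amount) AS revenue FROM orders GROUP BY region"))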

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 22 Lacs

Gurugram, Bengaluru

Work from Office

Naukri logo

To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9

As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention of exceeding client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.

Role & responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Preferred candidate profile:
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focused on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Naukri logo

EPAM has a presence across 40+ countries globally with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, the Middle East, and development centers in India (Hyderabad, Pune & Bangalore).

Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai
Work Mode: Hybrid (2-3 days in office per week)

Job Description:
5-14 years of experience in Big Data and data-related technologies
Expert-level understanding of distributed computing principles
Expert-level knowledge of and experience in Apache Spark
Hands-on programming with Python
Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
Good understanding of Big Data querying tools such as Hive and Impala
Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP, and files
Good understanding of SQL queries, joins, stored procedures, and relational schemas
Experience with NoSQL databases such as HBase, Cassandra, MongoDB
Knowledge of ETL techniques and frameworks
Performance tuning of Spark jobs
Experience with native cloud data services on AWS
Ability to lead a team efficiently
Experience designing and implementing Big Data solutions
Practitioner of Agile methodology

WE OFFER:
Opportunity to work on technical challenges that may impact across geographies
Vast opportunities for self-development: online university, global knowledge sharing, learning through external certifications
Opportunity to share your ideas on international platforms
Sponsored Tech Talks & Hackathons
Possibility to relocate to any EPAM office for short- and long-term projects
Focused individual development
Benefit package: health and medical benefits, retirement benefits, paid time off, flexible benefits
Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
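
For illustration, a minimal Spark Structured Streaming sketch of the stream-processing systems mentioned above, reading from a hypothetical Kafka topic (it assumes the spark-sql-kafka connector package is available on the cluster):

# Hedged sketch of a stream-processing job: Spark Structured Streaming reading from a
# (hypothetical) Kafka topic and printing windowed counts. Broker and topic are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream-counts").getOrCreate()

# Read a stream of events from Kafka; broker address and topic name are placeholders.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Kafka values arrive as bytes; cast the key to string and count events per key in 1-minute windows.
counts = (
    events.selectExpr("CAST(key AS STRING) AS page", "timestamp")
    .groupBy(F.window("timestamp", "1 minute"), "page")
    .count()
)

# Write running aggregates to the console; a real job would target Kafka, S3, or a database sink.
query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()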

Posted 1 week ago

Apply

7.0 - 11.0 years

10 - 14 Lacs

Chennai

Work from Office

Naukri logo

What you'll do: As a Software Engineer, you will work with a world-class team developing and deploying new technologies on a cutting-edge network. You will design, develop, and deploy new and innovative technology into a service provider network. Viasat's unique position as both a service provider and an equipment manufacturer allows you to experience the whole software development life cycle, from design to deployment.

The day-to-day: You will be a member of the software team involved in embedded software development. The software interacts with different network elements: on the access network it adapts to the L2 subsystem, and on the CSN network it adapts to service network components. Our team members enjoy working closely with each other using an agile development methodology. Priorities can change quickly, but our team members stay ahead of deadlines to delight every one of our customers, whether internal or external to Viasat. We are searching for candidates who enjoy working with people and have a technical mind that excels when challenged.

What you'll need:
7 to 11 years of software engineering experience in Java, with a strong emphasis on software architecture and design on Unix/Linux-based platforms.
Experience with network programming and concurrent/multithreaded programming.
Experience building CI/CD pipelines and automated software deployments.
Experience working in a cloud environment with AWS EMR.
Familiarity with Hadoop and data processing technologies such as Kafka is advantageous.
Problem-solving experience and a DevOps approach.
Strong oral and written communication skills.
Bachelor's degree in Computer Science, Electrical Engineering, or a related engineering discipline.
Up to 10% travel.

What will help you on the job:
Knowledge of tools like Jenkins, JIRA, and Git.
Experience with Bash, Ansible, and Python scripting in Linux.
Experience with telecom/networking/satellite/wireless communications.

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Naukri logo

We are seeking a detail-oriented and highly skilled Data Engineering Test Automation Engineer to ensure the quality, reliability, and performance of our data pipelines and platforms. The ideal candidate will have a strong background in data testing, ETL validation, and test automation frameworks. You will work closely with data engineers, analysts, and DevOps teams to build robust test suites for large-scale data solutions. This role combines deep technical execution with a solid foundation in QA best practices, including test planning, defect tracking, and test lifecycle management. You will be responsible for designing and executing manual and automated test strategies for complex real-time and batch data pipelines, contributing to the design of automation frameworks, and ensuring high-quality data delivery across our AWS and Databricks-based analytics platforms. The role is highly technical and hands-on, with a strong focus on automation; data accuracy, completeness, and consistency; and ensuring data governance practices are seamlessly integrated into development pipelines.

Roles & Responsibilities:
Design, develop, and maintain automated test scripts for data pipelines, ETL jobs, and data integrations.
Validate data accuracy, completeness, transformations, and integrity across multiple systems.
Collaborate with data engineers to define test cases and establish data quality metrics.
Develop reusable test automation frameworks and CI/CD integrations (e.g., Jenkins, GitHub Actions).
Perform performance and load testing for data systems.
Maintain test data management and data mocking strategies.
Identify and track data quality issues, ensuring timely resolution.
Perform root cause analysis and drive corrective actions.
Contribute to QA ceremonies (standups, planning, retrospectives) and drive continuous improvement in QA processes and culture.

Must-Have Skills:
Experience in QA roles, with strong exposure to data pipeline validation and ETL testing.
Domain knowledge of the life sciences R&D domain.
Validate data accuracy, transformations, schema compliance, and completeness across systems using PySpark and SQL.
Strong hands-on experience with Python, and optionally PySpark, for developing automated data validation scripts.
Proven experience in validating ETL workflows, with a solid understanding of data transformation logic, schema comparison, and source-to-target mapping.
Experience with data integration and processing platforms such as Databricks/Snowflake, AWS EMR, Redshift, etc.
Experience in manual and automated testing of pipeline executions for both batch and real-time data pipelines.
Performance testing of large-scale, complex data engineering pipelines.
Ability to troubleshoot data issues independently and collaborate with engineering teams on root cause analysis.
Strong understanding of QA methodologies, test planning, test case design, and defect lifecycle management.
Hands-on experience with API testing using Postman, pytest, or custom automation scripts.
Experience integrating automated tests into CI/CD pipelines using tools like Jenkins, GitHub Actions, or similar.
Knowledge of cloud platforms such as AWS, Azure, GCP.

Good-to-Have Skills:
Certifications in Databricks, AWS, Azure, or data QA (e.g., ISTQB).
Understanding of data privacy, compliance, and governance frameworks.
Knowledge of UI test automation frameworks such as Selenium, JUnit, TestNG.
Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch.

Education and Professional Certifications:
Master's degree and 3 to 7 years of Computer Science, IT, or related field experience, OR Bachelor's degree and 4 to 9 years of Computer Science, IT, or related field experience.

Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Strong presentation and public speaking skills.
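
As a sketch of the automated data-validation scripts described above, the pytest + PySpark checks below compare a source and target dataset after an ETL run; the paths and the order_id key column are hypothetical:

# Hedged sketch of automated data validation: pytest + PySpark checks that a target table
# matches its source after an ETL run. Paths and column names are hypothetical placeholders.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[2]").appName("etl-validation").getOrCreate()

def test_row_counts_match(spark):
    source = spark.read.parquet("s3://example-raw-bucket/orders/")
    target = spark.read.parquet("s3://example-curated-bucket/orders/")
    # Completeness check: no rows lost or duplicated by the transformation.
    assert source.count() == target.count()

def test_no_null_keys(spark):
    target = spark.read.parquet("s3://example-curated-bucket/orders/")
    # Integrity check: the primary key column must never be null.
    assert target.filter(target.order_id.isNull()).count() == 0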

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 22 Lacs

Hyderabad, Chennai

Work from Office

Naukri logo

Interested candidates can also apply with Sanjeevan Natarajan (sanjeevan.natarajan@careernet.in)

Role & responsibilities:
Technical Leadership: Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms.
End-to-End Solution Ownership: Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks.
Data Pipeline Strategy: Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets.
Data Governance & Quality: Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance.
Orchestration & Automation: Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations.
Cloud Cost & Performance Optimization: Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks.
Security & Compliance: Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks.
Collaboration & Stakeholder Engagement: Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions.
Migration Leadership: Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime.
Mentorship & Growth: Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team.

Preferred candidate profile:
Python, SQL, PySpark, Databricks, AWS (mandatory)
Leadership experience in Data Engineering/Architecture
Added advantage: Experience in Life Sciences / Pharma
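
For illustration of the orchestration work described above, a minimal Apache Airflow DAG sketch (Airflow 2.4+ assumed; the DAG and task names are hypothetical, and the Python callables are placeholders for real ingest/transform steps):

# Hedged sketch of an orchestration layer: a minimal Airflow DAG that chains a daily
# ingest step and a transform step. All names and callables are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    # Placeholder: in a real pipeline this would pull from source systems into the lake.
    print("ingesting raw data")

def transform():
    # Placeholder: in a real pipeline this would trigger a Spark/Databricks job.
    print("transforming curated data")

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task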

Posted 1 week ago

Apply

8.0 - 12.0 years

12 - 17 Lacs

Chennai

Work from Office

Naukri logo

Strong working experience in Python programming. Expertise in PySpark, with strong experience using pandas, NumPy, joblib, and other popular libraries. Must have experience with AWS EMR and PySpark. Good working experience with parallel batch processing in Python. Good working experience with AWS Batch and Step Functions. Should have the expertise to write effective, scalable, and highly performant code. Good to have: Apache Airflow. Should have implemented two or more large-scale projects and been part of end-to-end system implementation. Good analytical and problem-solving skills.
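
A small sketch of parallel batch processing in Python with joblib, as referenced above; the scoring function and batch contents are hypothetical:

# Hedged sketch of parallel batch processing with joblib. The per-record function and the
# batch contents are hypothetical placeholders for real feature computation or scoring work.
from joblib import Parallel, delayed

def score_record(record: dict) -> float:
    # Placeholder for per-record work (feature computation, model scoring, validation, ...).
    return record["amount"] * 1.18

batch = [{"id": i, "amount": float(i)} for i in range(1_000)]

# Fan the batch out across processes; n_jobs=-1 uses all available CPU cores.
scores = Parallel(n_jobs=-1)(delayed(score_record)(rec) for rec in batch)
print(len(scores), scores[:3])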

Posted 1 week ago

Apply

4.0 - 9.0 years

15 - 25 Lacs

Hyderabad, Chennai

Work from Office

Naukri logo

Interested candidates can also apply via sanjeevan.natarajan@careernet.in

Role & responsibilities:
Technical Leadership: Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms.
End-to-End Solution Ownership: Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks.
Data Pipeline Strategy: Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets.
Data Governance & Quality: Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance.
Orchestration & Automation: Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations.
Cloud Cost & Performance Optimization: Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks.
Security & Compliance: Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks.
Collaboration & Stakeholder Engagement: Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions.
Migration Leadership: Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime.
Mentorship & Growth: Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team.

Preferred candidate profile:
Python, SQL, PySpark, Databricks, AWS (mandatory)
Leadership experience in Data Engineering/Architecture
Added advantage: Experience in Life Sciences / Pharma

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Naukri logo

EPAM has a presence across 40+ countries globally with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, the Middle East, and development centers in India (Hyderabad, Pune & Bangalore).

Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai
Work Mode: Hybrid (2-3 days in office per week)

Job Description:
5-14 years of experience in Big Data and data-related technologies
Expert-level understanding of distributed computing principles
Expert-level knowledge of and experience in Apache Spark
Hands-on programming with Python
Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
Good understanding of Big Data querying tools such as Hive and Impala
Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP, and files
Good understanding of SQL queries, joins, stored procedures, and relational schemas
Experience with NoSQL databases such as HBase, Cassandra, MongoDB
Knowledge of ETL techniques and frameworks
Performance tuning of Spark jobs
Experience with native cloud data services on AWS/Azure/GCP
Ability to lead a team efficiently
Experience designing and implementing Big Data solutions
Practitioner of Agile methodology

WE OFFER:
Opportunity to work on technical challenges that may impact across geographies
Vast opportunities for self-development: online university, global knowledge sharing, learning through external certifications
Opportunity to share your ideas on international platforms
Sponsored Tech Talks & Hackathons
Possibility to relocate to any EPAM office for short- and long-term projects
Focused individual development
Benefit package: health and medical benefits, retirement benefits, paid time off, flexible benefits
Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities: As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
Build data pipelines to ingest, process, and transform data from files, streams, and databases.
Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
Develop efficient software code for multiple use cases, leveraging the Spark framework with Python or Scala and Big Data technologies built on the platform.
Develop streaming pipelines.
Work with Hadoop / AWS ecosystem components to implement scalable solutions that handle ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
Total 5-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering.
Minimum 4 years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on Cloud Data Platforms on AWS.
Exposure to streaming solutions and message brokers such as Kafka.
Experience with AWS EMR / AWS Glue / Databricks, Amazon Redshift, and DynamoDB.
Good to excellent SQL skills.

Preferred technical and professional experience:
Certification in AWS and Databricks, or Cloudera Certified Spark developers.
AWS S3, Redshift, and EMR for data storage and distributed processing.
AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

18 - 20 Lacs

Hyderabad

Work from Office

Naukri logo

We are hiring a Senior Python with Machine Learning Engineer (Level 3) for a US-based IT company in Hyderabad. Candidates with a minimum of 7 years of experience in Python and machine learning can apply.

Job Title: Senior Python with Machine Learning Engineer - Level 3
Location: Hyderabad
Experience: 7+ Years
CTC: 28 LPA - 30 LPA
Working shift: Day shift

Job Description:
We are seeking a highly skilled and experienced Python Developer with a strong background in Machine Learning (ML) to join our advanced analytics team. In this Level 3 role, you will be responsible for designing, building, and deploying robust ML pipelines and solutions across real-time, batch, event-driven, and edge computing environments. The ideal candidate will have extensive hands-on experience in developing and deploying ML workflows using AWS SageMaker, building scalable APIs, and integrating ML models into production systems. This role also requires a strong grasp of the complete ML lifecycle and DevOps practices specific to ML projects.

Key Responsibilities:
Develop and deploy end-to-end ML pipelines for real-time, batch, event-triggered, and edge environments using Python.
Use AWS SageMaker to build, train, deploy, and monitor ML models using SageMaker Pipelines, MLflow, and Feature Store.
Build and maintain RESTful APIs for ML model serving using FastAPI, Flask, or Django.
Work with popular ML frameworks and tools such as scikit-learn, PyTorch, XGBoost, LightGBM, and MLflow.
Ensure best practices across the ML lifecycle: data preprocessing, model training, validation, deployment, and monitoring.
Implement CI/CD pipelines tailored for ML workflows using tools like Bitbucket, Jenkins, Nexus, and AUTOSYS.
Design and maintain ETL workflows for ML pipelines using PySpark, Kafka, AWS EMR, and serverless architectures.
Collaborate with cross-functional teams to align ML solutions with business objectives and deliver impactful results.

Required Skills & Experience:
5+ years of hands-on experience with Python for scripting and ML workflow development.
4+ years of experience with AWS SageMaker for deploying ML models and pipelines.
3+ years of API development experience using FastAPI, Flask, or Django.
3+ years of experience with ML tools such as scikit-learn, PyTorch, XGBoost, LightGBM, and MLflow.
Strong understanding of the complete ML lifecycle, from model development to production monitoring.
Experience implementing CI/CD for ML using Bitbucket, Jenkins, Nexus, and AUTOSYS.
Proficient in building ETL processes for ML workflows using PySpark, Kafka, and AWS EMR.

Nice to Have:
Experience with H2O.ai for advanced machine learning capabilities.
Familiarity with containerization using Docker and orchestration using Kubernetes.

For further assistance, contact/WhatsApp 9354909517 or write to hema@gist.org.in
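
For illustration of ML model serving with FastAPI as described above, a self-contained sketch that trains a toy scikit-learn model inline and exposes a /predict endpoint; in practice the model would come from a registry such as MLflow or a SageMaker-hosted artifact:

# Hedged sketch of ML model serving with FastAPI. The model is a tiny scikit-learn pipeline
# trained inline so the example is self-contained; feature names are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.linear_model import LogisticRegression
import numpy as np

app = FastAPI()

# Toy model: predicts a binary label from two numeric features.
X = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 0.0], [3.0, 0.0]])
y = np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X, y)

class Features(BaseModel):
    f1: float
    f2: float

@app.post("/predict")
def predict(features: Features) -> dict:
    proba = model.predict_proba([[features.f1, features.f2]])[0, 1]
    return {"probability": float(proba)}

# Run locally with: uvicorn serve:app --reload   (assuming this file is saved as serve.py)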

Posted 2 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

India

On-site

Foundit logo

About the Role: One of the most valuable assets in today's financial industry is data, which can provide businesses the intelligence essential to making business and financial decisions with conviction. This role will give you an opportunity to work on Ratings and Research related data. You will get to work on cutting-edge big data technologies and will be responsible for development of both data feeds and API work.

The Team: RatingsXpress is at the heart of financial workflows when it comes to providing and analyzing data. We provide Ratings and Research information to clients. Our work deals with content ingestion, data feed generation, and exposing data to clients via API calls. This position is part of the RatingsXpress team and is focused on providing clients the critical data they need to make the most informed investment decisions possible.

Impact: As a member of the Xpressfeed Team in S&P Global Market Intelligence, you will work with a group of intelligent and visionary engineers to build impactful content management tools for investment professionals across the globe. Our Software Engineers are involved in the full product life cycle, from design through release. You will be expected to participate in application designs, write high-quality code, and innovate on how to improve overall system performance and customer experience. If you are a talented developer who wants to help drive the next phase of Data Management Solutions at S&P Global, can contribute great ideas, solutions, and code, and understands the value of cloud solutions, we would like to talk to you.

What's in it for you: We are currently seeking a Software Developer with a passion for full-stack development. In this role, you will have the opportunity to work on cutting-edge cloud technologies such as Databricks, Snowflake, and AWS, while also engaging in Scala and SQL Server-based database development. This position offers a unique opportunity to grow both as a Full Stack Developer and as a Cloud Engineer, expanding your expertise across modern data platforms and backend development.

Responsibilities:
Analyze, design, and develop solutions within a multi-functional Agile team to support key business needs for the data feeds.
Design, implement, and test solutions using AWS EMR for content ingestion.
Work on complex SQL Server projects involving high-volume data.
Engineer components and common services based on standard corporate development models, languages, and tools.
Apply software engineering best practices while also leveraging automation across all elements of solution delivery.
Collaborate effectively with technical and non-technical stakeholders.
Document and demonstrate technical solutions by developing documentation, diagrams, code comments, etc.

Basic Qualifications:
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
3-6 years of experience in application development.
Minimum of 2 years of hands-on experience with Scala.
Minimum of 2 years of hands-on experience with Microsoft SQL Server.
Solid understanding of Amazon Web Services (AWS) and cloud-based development.
In-depth knowledge of system architecture, object-oriented programming, and design patterns.
Excellent communication skills, with the ability to convey complex ideas clearly both verbally and in writing.

Preferred Qualifications:
Familiarity with AWS services: EMR, Auto Scaling, EKS.
Working knowledge of Snowflake.
Preferred experience in Python development. Familiarity with the Financial Services domain and Capital Markets is a plus. Experience developing systems that handle large volumes of data and require high computational performance. What's In It For You Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology-the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide-so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you-and your career-need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards-small perks can make a big difference. For more information on benefits by country visit: Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. 
----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority - Ratings - (Strategic Workforce Planning)

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

Job Description:
Skills: AWS EMR

Key Responsibilities: A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you.

Technical Requirements: Primary skills: Technology -> Big Data - Data Processing -> Map Reduce
Preferred Skills: Technology -> Big Data - Data Processing -> Map Reduce

Posted 3 weeks ago

Apply

6.0 - 11.0 years

4 - 8 Lacs

Kolkata

Work from Office

Naukri logo

SET 1: Must have knowledge of Azure Data Lake, Azure Functions, Azure Databricks, Azure Data Factory, and PostgreSQL. Working knowledge of Azure DevOps and Git flow would be an added advantage.
(OR)
SET 2: Must have working knowledge of AWS Kinesis, AWS EMR, AWS Glue, AWS RDS, AWS Athena, and AWS Redshift.

Should have demonstrable knowledge and expertise in working with time-series data. Working knowledge of delivering data engineering / data science projects in Industry 4.0 is an added advantage. Should have knowledge of Palantir. Strong problem-solving skills with an emphasis on sustainable and reusable development. Experience using statistical computing languages to manipulate data and draw insights from large data sets: Python/PySpark, pandas, NumPy, seaborn/matplotlib; knowledge of Streamlit is a plus. Familiarity with Scala, GoLang, or Java would be an added advantage. Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with relational databases such as Microsoft SQL Server, MySQL, PostgreSQL, Oracle, and NoSQL databases such as Hadoop, Cassandra, MongoDB. Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. Experience building and optimizing big data pipelines, architectures, and data sets. Strong analytical skills related to working with unstructured datasets.

Primary Skills: Provide innovative solutions to the data engineering problems faced in the project and solve them with technically superior code and skills. Where possible, document the process of choosing technology or usage of integration patterns and help create a knowledge management artefact that can be used for other similar areas. Create and apply best practices in delivering the project with clean code. Should work innovatively and have a sense of proactiveness in fulfilling the project needs.

Additional Information:
Reporting to: Director - Intelligent Insights and Data Strategy
Travel: Must be willing to be deployed at client locations anywhere in the world for long and short terms, and should be flexible to travel on shorter durations within India and abroad.
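
A brief sketch of basic time-series handling with pandas, the kind of work referenced above; the sensor data is synthetic and the column names are hypothetical:

# Hedged sketch of time-series processing with pandas: resample 1-minute readings to hourly
# means and add a rolling average. The data below is synthetic; names are placeholders.
import pandas as pd
import numpy as np

# Synthetic 1-minute sensor readings over one day.
idx = pd.date_range("2024-01-01", periods=24 * 60, freq="min")
df = pd.DataFrame({"temperature": 20 + np.random.randn(len(idx))}, index=idx)

# Resample to hourly means and add a 3-hour rolling average for smoothing.
hourly = df.resample("1h").mean()
hourly["rolling_3h"] = hourly["temperature"].rolling(window=3).mean()

print(hourly.head())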

Posted 3 weeks ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Naukri logo

Dear Candidate, Seeking a Cloud Monitoring Specialist to set up observability and real-time monitoring in cloud environments. Key Responsibilities: Configure logging and metrics collection. Set up alerts and dashboards using Grafana, Prometheus, etc. Optimize system visibility for performance and security. Required Skills & Qualifications: Familiar with ELK stack, Datadog, New Relic, or Cloud-native monitoring tools. Strong troubleshooting and root cause analysis skills. Knowledge of distributed systems. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies

Posted 3 weeks ago

Apply

5.0 - 8.0 years

8 - 18 Lacs

Bengaluru

Hybrid

Naukri logo

Technical Skills: Python, PySpark, SQL, Redshift, S3, CloudWatch, Lambda, AWS Glue, EMR, Step Functions, Databricks. Knowledge of a visualization tool will add value.
Experience: Should have worked in technical delivery of the above services, preferably in similar organizations, and should have good communication skills.
Certifications: Preference for AWS Data Engineer certification.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Naukri logo

Job Title: EMR_Spark SME
Experience: 5-10 Years
Location: Bangalore

Technical Skills:
5+ years of experience in big data technologies with hands-on expertise in AWS EMR and Apache Spark.
Proficiency in Spark Core, Spark SQL, and Spark Streaming for large-scale data processing.
Strong experience with data formats (Parquet, Avro, JSON) and data storage solutions (Amazon S3, HDFS).
Solid understanding of distributed systems architecture and cluster resource management (YARN).
Familiarity with AWS services (S3, IAM, Lambda, Glue, Redshift, Athena).
Experience in scripting and programming languages such as Python, Scala, and Java.
Knowledge of containerization and orchestration (Docker, Kubernetes) is a plus.

Responsibilities:
Architect and develop scalable data processing solutions using AWS EMR and Apache Spark.
Optimize and tune Spark jobs for performance and cost efficiency on EMR clusters.
Monitor, troubleshoot, and resolve issues related to EMR and Spark workloads.
Implement best practices for cluster management, data partitioning, and job execution.
Collaborate with data engineering and analytics teams to integrate Spark solutions with broader data ecosystems (S3, RDS, Redshift, Glue, etc.).
Automate deployments and cluster management using infrastructure-as-code tools like CloudFormation, Terraform, and CI/CD pipelines.
Ensure data security and governance in EMR and Spark environments in compliance with company policies.
Provide technical leadership and mentorship to junior engineers and data analysts.
Stay current with new AWS EMR features and Spark versions to recommend improvements and upgrades.

Requirements and Skills:
Performance tuning and optimization of Spark jobs.
Problem-solving skills with the ability to diagnose and resolve complex technical issues.
Strong experience with version control systems (Git) and CI/CD pipelines.
Excellent communication skills to explain technical concepts to both technical and non-technical audiences.

Qualification:
Education qualification: B.Tech, BE, BCA, MCA, M.Tech, or an equivalent technical degree from a reputed college.

Certifications:
AWS Certified Solutions Architect – Associate/Professional
AWS Certified Data Analytics – Specialty
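
For illustration of tuning and submitting Spark work on EMR, a boto3 sketch that adds a spark-submit step with explicit executor and shuffle settings to an existing cluster; the cluster ID, script path, and configuration values are hypothetical placeholders:

# Hedged sketch: submitting a tuned Spark step to an existing EMR cluster with boto3.
# Cluster ID, script path, and the memory/parallelism values are hypothetical; in practice
# they are derived from the cluster's instance types and the job's data volume.
import boto3

emr = boto3.client("emr")

spark_submit_args = [
    "spark-submit",
    "--deploy-mode", "cluster",
    "--conf", "spark.executor.memory=8g",
    "--conf", "spark.executor.cores=4",
    "--conf", "spark.sql.shuffle.partitions=400",   # sized to data volume to avoid tiny or huge tasks
    "--conf", "spark.dynamicAllocation.enabled=true",
    "s3://example-artifacts/jobs/orders_etl.py",
]

response = emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",   # placeholder EMR cluster ID
    Steps=[{
        "Name": "orders-etl",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {"Jar": "command-runner.jar", "Args": spark_submit_args},
    }],
)
print("Submitted step:", response["StepIds"])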

Posted 3 weeks ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

Foundit logo

Technology Leadership: Should independently be able to design, implement, and deliver complex Data Warehousing/Data Lake, Cloud Data Management, and Data Integration project assignments.
Technical Design and Development Expertise: Any of the ETL tools (Informatica, IICS, Matillion, DataStage) and hosting technologies like the AWS stack (Redshift, EC2) is mandatory. Any of the BI tools among Tableau, Qlik, Power BI, and MSTR. Informatica MDM, Customer Data Management. Expert knowledge of SQL, with the capability to performance-tune complex SQL queries in traditional and distributed RDBMS systems, is a must. Experience across Python, PySpark, and Unix/Linux shell scripting.
Project Management: A must-have. Should be able to create simple to complex project plans in Microsoft Project and think in advance about potential risks and mitigation plans.
Task Management: Should be able to onboard the team on the project plan and delegate tasks to accomplish milestones as per plan. Should be comfortable discussing and prioritizing work items with team members in an onshore-offshore model.
Client Relationship: Manage client communication and expectations independently or with the support of the reporting manager. Should be able to deliver results back to the client as per plan.
Education: Bachelor Equivalent - Other; PG Diploma in Management.

Work Experience - we are hiring for the following roles across data management tech stacks:
ETL - Snowflake/AWS/IICS: 5-8 years of experience in ETL tools - IICS, Redshift, Snowflake. Strong experience in AWS/Snowflake technologies - Redshift / Synapse / Snowflake. Experienced in running an end-to-end ETL project and interacting with users globally. Good knowledge of DW architectural principles and ETL mapping, transformation, workflow design, and batch script development.
Python/PySpark: Expert in Python; should be able to efficiently use Python data-science and math packages such as NumPy, Pandas, and scikit-learn, or a Python web framework. Deep experience developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations. Prior experience with Redshift / Synapse / Snowflake.
AWS Infra Architect: 10-15 years of experience in an AWS Cloud Infrastructure administrator, AWS Cloud Architect, or Solution Architect role, including 2-3 years as an AWS Cloud Architect. Hands-on experience debugging AWS services like EC2, EMR, S3, Redshift, Lambda, etc. Hands-on experience with container orchestration tools like ECS / EKS. Hands-on experience creating infrastructure using IaC such as CloudFormation/Terraform.
Data Modeler: 8+ years of experience in commercial data modeling and data entity definition for developing business insights for life sciences organizations. Prior experience in client management, having worked across a variety of projects from data engineering to data operations to help improve and run clients' entire system of business processes and operations, implementing cutting-edge automation technologies.
Azure ADF: 5+ years of relevant experience delivering customer-focused information management solutions across Data Lakes, Enterprise Data Warehouses, and Enterprise Data Integration projects, primarily in the MS Azure cloud using Data Factory and Databricks.
Snowflake Architect: 10+ years of overall EDW (ETL, BI projects) / Cloud Architecture experience, with software development experience using object-oriented languages. Expertise in Snowflake advanced concepts such as setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy clone, and time travel, and understanding of how to use these features.
Business Analyst - Patient Specialty Services: 8-10 years of extensive experience working on patient-level datasets. A fair understanding of patient data processing within a HIPAA environment, such as patient data aggregation, tokenization, etc.
MDM - Informatica/Reltio: 5-8 years of experience, with hands-on experience working on MDM projects. Hands-on experience with industry data quality tools like Informatica IDQ and IBM Data Quality.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

5 - 9 Lacs

Noida, Uttar Pradesh, India

On-site

Foundit logo

Technology Leadership: Should independently be able to design, implement, and deliver complex Data Warehousing/Data Lake, Cloud Data Management, and Data Integration project assignments.
Technical Design and Development Expertise: Any of the ETL tools (Informatica, IICS, Matillion, DataStage) and hosting technologies like the AWS stack (Redshift, EC2) is mandatory. Any of the BI tools among Tableau, Qlik, Power BI, and MSTR. Informatica MDM, Customer Data Management. Expert knowledge of SQL, with the capability to performance-tune complex SQL queries in traditional and distributed RDBMS systems, is a must. Experience across Python, PySpark, and Unix/Linux shell scripting.
Project Management: A must-have. Should be able to create simple to complex project plans in Microsoft Project and think in advance about potential risks and mitigation plans.
Task Management: Should be able to onboard the team on the project plan and delegate tasks to accomplish milestones as per plan. Should be comfortable discussing and prioritizing work items with team members in an onshore-offshore model.
Client Relationship: Manage client communication and expectations independently or with the support of the reporting manager. Should be able to deliver results back to the client as per plan.
Education: Bachelor Equivalent - Other; PG Diploma in Management.

Work Experience - we are hiring for the following roles across data management tech stacks:
ETL - Snowflake/AWS/IICS: 5-8 years of experience in ETL tools - IICS, Redshift, Snowflake. Strong experience in AWS/Snowflake technologies - Redshift / Synapse / Snowflake. Experienced in running an end-to-end ETL project and interacting with users globally. Good knowledge of DW architectural principles and ETL mapping, transformation, workflow design, and batch script development.
Python/PySpark: Expert in Python; should be able to efficiently use Python data-science and math packages such as NumPy, Pandas, and scikit-learn, or a Python web framework. Deep experience developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations. Prior experience with Redshift / Synapse / Snowflake.
AWS Infra Architect: 10-15 years of experience in an AWS Cloud Infrastructure administrator, AWS Cloud Architect, or Solution Architect role, including 2-3 years as an AWS Cloud Architect. Hands-on experience debugging AWS services like EC2, EMR, S3, Redshift, Lambda, etc. Hands-on experience with container orchestration tools like ECS / EKS. Hands-on experience creating infrastructure using IaC such as CloudFormation/Terraform.
Data Modeler: 8+ years of experience in commercial data modeling and data entity definition for developing business insights for life sciences organizations. Prior experience in client management, having worked across a variety of projects from data engineering to data operations to help improve and run clients' entire system of business processes and operations, implementing cutting-edge automation technologies.
Azure ADF: 5+ years of relevant experience delivering customer-focused information management solutions across Data Lakes, Enterprise Data Warehouses, and Enterprise Data Integration projects, primarily in the MS Azure cloud using Data Factory and Databricks.
Snowflake Architect: 10+ years of overall EDW (ETL, BI projects) / Cloud Architecture experience, with software development experience using object-oriented languages. Expertise in Snowflake advanced concepts such as setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy clone, and time travel, and understanding of how to use these features.
Business Analyst - Patient Specialty Services: 8-10 years of extensive experience working on patient-level datasets. A fair understanding of patient data processing within a HIPAA environment, such as patient data aggregation, tokenization, etc.
MDM - Informatica/Reltio: 5-8 years of experience, with hands-on experience working on MDM projects. Hands-on experience with industry data quality tools like Informatica IDQ and IBM Data Quality.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

Foundit logo

Technology Leadership: Should independently be able to design, implement, and deliver complex Data Warehousing/Data Lake, Cloud Data Management, and Data Integration project assignments.
Technical Design and Development Expertise: Any of the ETL tools (Informatica, IICS, Matillion, DataStage) and hosting technologies like the AWS stack (Redshift, EC2) is mandatory. Any of the BI tools among Tableau, Qlik, Power BI, and MSTR. Informatica MDM, Customer Data Management. Expert knowledge of SQL, with the capability to performance-tune complex SQL queries in traditional and distributed RDBMS systems, is a must. Experience across Python, PySpark, and Unix/Linux shell scripting.
Project Management: A must-have. Should be able to create simple to complex project plans in Microsoft Project and think in advance about potential risks and mitigation plans.
Task Management: Should be able to onboard the team on the project plan and delegate tasks to accomplish milestones as per plan. Should be comfortable discussing and prioritizing work items with team members in an onshore-offshore model.
Client Relationship: Manage client communication and expectations independently or with the support of the reporting manager. Should be able to deliver results back to the client as per plan.
Education: Bachelor Equivalent - Other; PG Diploma in Management.

Work Experience - we are hiring for the following roles across data management tech stacks:
ETL - Snowflake/AWS/IICS: 5-8 years of experience in ETL tools - IICS, Redshift, Snowflake. Strong experience in AWS/Snowflake technologies - Redshift / Synapse / Snowflake. Experienced in running an end-to-end ETL project and interacting with users globally. Good knowledge of DW architectural principles and ETL mapping, transformation, workflow design, and batch script development.
Python/PySpark: Expert in Python; should be able to efficiently use Python data-science and math packages such as NumPy, Pandas, and scikit-learn, or a Python web framework. Deep experience developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations. Prior experience with Redshift / Synapse / Snowflake.
AWS Infra Architect: 10-15 years of experience in an AWS Cloud Infrastructure administrator, AWS Cloud Architect, or Solution Architect role, including 2-3 years as an AWS Cloud Architect. Hands-on experience debugging AWS services like EC2, EMR, S3, Redshift, Lambda, etc. Hands-on experience with container orchestration tools like ECS / EKS. Hands-on experience creating infrastructure using IaC such as CloudFormation/Terraform.
Data Modeler: 8+ years of experience in commercial data modeling and data entity definition for developing business insights for life sciences organizations. Prior experience in client management, having worked across a variety of projects from data engineering to data operations to help improve and run clients' entire system of business processes and operations, implementing cutting-edge automation technologies.
Azure ADF: 5+ years of relevant experience delivering customer-focused information management solutions across Data Lakes, Enterprise Data Warehouses, and Enterprise Data Integration projects, primarily in the MS Azure cloud using Data Factory and Databricks.
Snowflake Architect: 10+ years of overall EDW (ETL, BI projects) / Cloud Architecture experience, with software development experience using object-oriented languages. Expertise in Snowflake advanced concepts such as setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy clone, and time travel, and understanding of how to use these features.
Business Analyst - Patient Specialty Services: 8-10 years of extensive experience working on patient-level datasets. A fair understanding of patient data processing within a HIPAA environment, such as patient data aggregation, tokenization, etc.
MDM - Informatica/Reltio: 5-8 years of experience, with hands-on experience working on MDM projects. Hands-on experience with industry data quality tools like Informatica IDQ and IBM Data Quality.

Posted 3 weeks ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Naukri logo

Dear Candidate, We are hiring a Cloud Architect to design and oversee scalable, secure, and cost-efficient cloud solutions. Great for architects who bridge technical vision with business needs. Key Responsibilities: Design cloud-native solutions using AWS, Azure, or GCP Lead cloud migration and transformation projects Define cloud governance, cost control, and security strategies Collaborate with DevOps and engineering teams for implementation Required Skills & Qualifications: Deep expertise in cloud architecture and multi-cloud environments Experience with containers, serverless, and microservices Proficiency in Terraform, CloudFormation, or equivalent Bonus: Cloud certification (AWS/Azure/GCP Architect) Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies

Posted 3 weeks ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Naukri logo

Dear Candidate, Looking for a Cloud Data Engineer to build cloud-based data pipelines and analytics platforms. Key Responsibilities: Develop ETL workflows using cloud data services. Manage data storage, lakes, and warehouses. Ensure data quality and pipeline reliability. Required Skills & Qualifications: Experience with BigQuery, Redshift, or Azure Synapse. Proficiency in SQL, Python, or Spark. Familiarity with data lake architecture and batch/streaming. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
