Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
8.0 - 13.0 years
1 - 4 Lacs
Pune
Work from Office
Roles & Responsibilities: Provides expert-level development, system analysis, design, and implementation of applications using AWS services, specifically Python for Lambda. Translates technical specifications and/or design models into code for new or enhancement projects (for internal or external clients). Develops code that reuses objects, is well structured, includes sufficient comments, and is easy to maintain. Provides follow-up production support when needed. Submits change control requests and documents. Participates in design, code, and test inspections throughout the life cycle to identify issues and ensure methodology compliance. Participates in systems analysis activities, including system requirements analysis and definition, e.g. prototyping. Participates in other meetings, such as those for use case creation and analysis. Performs unit testing and writes appropriate unit test plans to ensure requirements are satisfied. Assists in integration, systems acceptance, and other related testing as needed. Ensures developed code is optimized to meet client performance specifications associated with page rendering time by completing page performance tests. Technical Skills Required: Experience in building large-scale batch and data pipelines with data processing frameworks on the AWS cloud platform using PySpark (on EMR) and Glue ETL. Deep experience in developing data processing and data manipulation tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations. Experience in deploying and operationalizing code using CI/CD tools (Bitbucket and Bamboo). Strong AWS cloud computing experience, with extensive experience in Lambda, S3, EMR, and Redshift. Should have worked on data warehouse/database technologies for at least 8 years. Any AWS certification will be an added advantage.
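A minimal PySpark sketch of the read-merge-enrich-load pattern this listing describes (batch ETL on EMR or Glue); the bucket names, paths, and column names are illustrative placeholders, not part of the actual role:

```python
from pyspark.sql import SparkSession, functions as F

# Minimal batch ETL sketch: read, merge, enrich, load.
# Bucket names, paths, and column names are placeholders.
spark = SparkSession.builder.appName("orders-enrichment").getOrCreate()

orders = spark.read.option("header", "true").csv("s3://example-raw/orders/")
customers = spark.read.parquet("s3://example-curated/customers/")

enriched = (
    orders.join(customers, on="customer_id", how="left")       # merge data sets
          .withColumn("order_amount", F.col("order_amount").cast("double"))
          .withColumn("load_ts", F.current_timestamp())         # enrichment column
          .filter(F.col("order_id").isNotNull())                # basic data quality rule
)

# Load into the target destination, partitioned for downstream queries
# (e.g. Athena or Redshift Spectrum over the same S3 prefix).
enriched.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated/orders_enriched/"
)
```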
Posted 1 week ago
4.0 - 9.0 years
12 - 16 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Responsibilities: Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS. Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform. Develop streaming pipelines. Work with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka. Required education: Bachelor's degree. Preferred education: Master's degree. Required technical and professional expertise: Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years of experience on AWS cloud data platforms; experience in AWS EMR, AWS Glue, Databricks, AWS Redshift, and DynamoDB; good to excellent SQL skills; exposure to streaming solutions and message brokers such as Kafka. Preferred technical and professional experience: Certification in AWS, Databricks, or Cloudera Spark Certified Developer.
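Since the listing calls out streaming pipelines built on Spark and Kafka, here is a minimal Structured Streaming sketch; the broker address, topic, schema, and output paths are placeholders, and the Kafka source assumes the spark-sql-kafka connector package is available on the cluster:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Minimal streaming-pipeline sketch: ingest events from Kafka, parse, land on S3/HDFS.
spark = SparkSession.builder.appName("clickstream-ingest").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")   # placeholder broker
       .option("subscribe", "clickstream")                  # placeholder topic
       .load())

events = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "s3://example-stream-landing/clickstream/")
         .option("checkpointLocation", "s3://example-stream-landing/_checkpoints/clickstream/")
         .trigger(processingTime="1 minute")
         .start())
query.awaitTermination()
```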
Posted 1 week ago
5.0 - 7.0 years
12 - 16 Lacs
Bengaluru
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS. Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform. Develop streaming pipelines. Work with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka. Required education: Bachelor's degree. Preferred education: Master's degree. Required technical and professional expertise: 5-7+ years of total experience in data management (data warehouse, data lake, data platform, lakehouse) and data engineering; minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years of experience on AWS cloud data platforms; exposure to streaming solutions and message brokers such as Kafka; experience in AWS EMR, AWS Glue, Databricks, AWS Redshift, and DynamoDB; good to excellent SQL skills. Preferred technical and professional experience: Certification in AWS, Databricks, or Cloudera Spark Certified Developer; AWS S3, Redshift, and EMR for data storage and distributed processing; AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes.
Posted 1 week ago
9.0 - 12.0 years
35 - 40 Lacs
Bengaluru
Work from Office
We are seeking an experienced AWS Architect with a strong background in designing and implementing cloud-native data platforms. The ideal candidate should possess deep expertise in AWS services such as S3, Redshift, Aurora, Glue, and Lambda, along with hands-on experience in data engineering and orchestration tools. Strong communication and stakeholder management skills are essential for this role. Key Responsibilities: Design and implement end-to-end data platforms leveraging AWS services. Lead architecture discussions and ensure scalability, reliability, and cost-effectiveness. Develop and optimize solutions using Redshift, including stored procedures, federated queries, and the Redshift Data API. Utilize AWS Glue and Lambda functions to build ETL/ELT pipelines. Write efficient Python code and data frame transformations, along with unit testing. Manage orchestration tools such as AWS Step Functions and Airflow. Perform Redshift performance tuning to ensure optimal query execution. Collaborate with stakeholders to understand requirements and communicate technical solutions clearly. Required Skills & Qualifications: Minimum 9 years of IT experience with proven AWS expertise. Hands-on experience with AWS services: S3, Redshift, Aurora, Glue, and Lambda. Mandatory experience working with AWS Redshift, including stored procedures and performance tuning. Experience building end-to-end data platforms on AWS. Proficiency in Python, especially working with data frames and writing testable, production-grade code. Familiarity with orchestration tools like Airflow or AWS Step Functions. Excellent problem-solving skills and a collaborative mindset. Strong verbal and written communication and stakeholder management abilities. Nice to Have: Experience with CI/CD for data pipelines. Knowledge of AWS Lake Formation and data governance practices.
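For the Redshift stored procedure and Redshift Data API requirements, a minimal boto3 sketch might look like the following; the cluster identifier, database, secret ARN, and procedure name are hypothetical:

```python
import boto3

# Sketch of calling a Redshift stored procedure through the Redshift Data API.
client = boto3.client("redshift-data", region_name="ap-south-1")

resp = client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    SecretArn="arn:aws:secretsmanager:ap-south-1:123456789012:secret:redshift-example",
    Sql="CALL analytics.sp_refresh_daily_sales(:run_date);",
    Parameters=[{"name": "run_date", "value": "2024-01-31"}],
)

# The Data API is asynchronous; poll the statement status before fetching results.
status = client.describe_statement(Id=resp["Id"])["Status"]
print("Statement status:", status)
```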
Posted 1 week ago
4.0 - 7.0 years
15 - 25 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Role & responsibilities: Looking for AWS Data Engineers (immediate joiners) for Hyderabad, Chennai, Noida, Pune, and Bangalore locations. Mandatory skills: Python, PySpark, SQL, AWS Glue. Strong technical skills in services like S3, Athena, Lambda, Glue (PySpark), SQL, data warehousing, Informatica, and Oracle. Design, develop, and implement custom solutions within the Collibra platform to support data governance initiatives. Preferred candidate profile: Snowflake, Agile methodology, and Tableau. Proficiency in Python/Scala, Spark architecture, complex SQL, and RDBMS. Hands-on experience with ETL tools (e.g., Informatica) and SCD1, SCD2. 2-6 years of DWH, AWS services, and ETL design knowledge. Develop ETL processes for data ingestion, transformation, and loading into data lakes and warehouses. Collaborate with data scientists and analysts to ensure data availability for analytics and reporting.
Posted 1 week ago
4.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
PySpark, Python, SQL: strong focus on big data processing, which is core to data engineering. AWS Cloud Services (Lambda, Glue, S3, IAM): indicates working with cloud-based data pipelines. Airflow, GitHub: essential for orchestration and version control in data workflows.
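A small sketch of how the orchestration pieces in this stack fit together: an Airflow DAG (Airflow 2.x assumed) that triggers a hypothetical Glue job via boto3; the DAG id, schedule, and job name are placeholders:

```python
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def run_glue_job():
    # Start a placeholder Glue ETL job and log its run id.
    glue = boto3.client("glue")
    run = glue.start_job_run(JobName="example-orders-etl")
    print("Started Glue job run:", run["JobRunId"])


with DAG(
    dag_id="daily_orders_pipeline",   # placeholder DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # 'schedule' assumes Airflow 2.4+
    catchup=False,
) as dag:
    trigger_glue = PythonOperator(task_id="run_glue_etl", python_callable=run_glue_job)
```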
Posted 1 week ago
3.0 - 5.0 years
10 - 15 Lacs
Pune
Work from Office
Job Description: Sr. Software Engineer (Python) Company: Karini AI Location: Pune (Wakad) Experience Required: 3 - 5 years Compensation: Not Disclosed Role Overview: We are seeking a skilled Sr. Software Engineer with advanced Python skills, a passion for product development, and knowledge of Machine Learning and/or Generative AI. You will collaborate with a talented team of engineers and AI engineers to design and develop a high-quality Generative AI platform on AWS. Key Responsibilities: Design and develop backend applications and APIs using Python. Work on product development, building robust, scalable, and maintainable solutions. Integrate Generative AI models into production environments to solve real-world problems. Collaborate with cross-functional teams, including data scientists, product managers, and designers, to understand requirements and deliver solutions. Optimize application performance and ensure scalability across cloud environments. Write clean, maintainable, and efficient code while adhering to best practices. Requirements: 3-5 years of hands-on experience in product development. Demonstrable experience with advanced Python concepts for building scalable systems. Demonstrable experience working with a FastAPI server in a production environment. Familiarity with unit testing, version control, and CI/CD. Good understanding of Machine Learning concepts and frameworks (e.g., TensorFlow, PyTorch). Experience with integrating and deploying ML models into applications is a plus. Knowledge of database systems (SQL/NoSQL) and RESTful API development. Exposure to containerization (Docker) and cloud platforms (AWS). Strong problem-solving skills and attention to detail. Preferred Qualifications: Bachelor of Engineering in Computer Science, Information Technology, or any other engineering discipline; M.Tech, M.E., or B.E. in Computer Science preferred. Hands-on experience in product-focused organizations. Experience working with data pipelines or data engineering tasks. Knowledge of CI/CD pipelines and DevOps practices. Familiarity with version control tools like Git. Interest or experience in Generative AI or NLP applications. What We Offer: Top-tier compensation package, aligned with industry benchmarks. Comprehensive employee benefits including Provident Fund (PF) and medical insurance. Experience working with an ex-AWS founding team at a fast-growing company. Work on innovative AI-driven products that solve complex problems. Collaborate with a talented and passionate team in a dynamic environment. Opportunities for professional growth and skill enhancement in Generative AI. A supportive, inclusive, and flexible work culture that values creativity and ownership.
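As an illustration of the kind of Python backend work described (a FastAPI service in production that fronts a Generative AI model), a minimal sketch follows; the endpoint, request model, and summarize() stub are hypothetical and not Karini AI's actual code:

```python
from fastapi import FastAPI
from pydantic import BaseModel

# Minimal FastAPI sketch of a backend API wrapping a (stubbed) generative model call.
app = FastAPI(title="example-genai-service")


class SummarizeRequest(BaseModel):
    text: str
    max_words: int = 50


def summarize(text: str, max_words: int) -> str:
    # Placeholder for a call to a Generative AI model hosted on AWS
    # (e.g. an inference endpoint); here we just truncate the input.
    return " ".join(text.split()[:max_words])


@app.post("/summarize")
def summarize_endpoint(req: SummarizeRequest) -> dict:
    return {"summary": summarize(req.text, req.max_words)}
```

Run locally with, for example, `uvicorn main:app --reload`.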
Posted 1 week ago
5.0 - 8.0 years
10 - 15 Lacs
Kochi
Remote
We are looking for a skilled AWS Cloud Engineer with a minimum of 5 years of hands-on experience in managing and implementing cloud-based solutions on AWS. The ideal candidate will have expertise in AWS core services such as S3, EC2, MSK, Glue, DMS, and SageMaker, along with strong programming and containerization skills using Python and Docker. Design, implement, and manage scalable AWS cloud infrastructure solutions. Hands-on experience with AWS services: S3, EC2, MSK, Glue, DMS, and SageMaker. Develop, deploy, and maintain Python-based applications in cloud environments. Containerize applications using Docker and manage deployment pipelines. Troubleshoot infrastructure and application issues, review designs, and code solutions. Ensure high availability, performance, and security of cloud resources. Collaborate with cross-functional teams to deliver reliable and scalable solutions.
Posted 1 week ago
5.0 - 10.0 years
20 - 25 Lacs
Hyderabad, Delhi / NCR
Hybrid
Develop ETL pipelines using SQL, C#, and Python. Perform performance tuning. Design scalable DB architecture. Maintain technical documentation. Required candidate profile: 5+ years in SQL and T-SQL, performance tuning, and developing ETL processes. Hands-on C# and WPF will be a plus. Experience in AWS and Azure is a must.
Posted 1 week ago
5.0 - 8.0 years
15 - 27 Lacs
Hyderabad
Work from Office
Dear Candidate, We are pleased to invite you to participate in the EY GDS face-to-face hiring event for the position of AWS Data Engineer. Role: AWS Data Engineer. Experience Required: 5-8 years. Location: Hyderabad. Mode of interview: Face to face. JD - Technical Skills: • Must have strong experience in AWS data services such as Glue, Lambda, EventBridge, Kinesis, S3/EMR, Redshift, RDS, Step Functions, Airflow, and PySpark. • Strong exposure to IAM, CloudTrail, cluster optimization, Python, and SQL. • Should have expertise in data design, STTM, understanding of data models, data component design, automated testing, code coverage, UAT support, deployment, and go-live. • Experience with version control systems like SVN and Git. • Create and manage AWS Glue crawlers and jobs to automate data cataloging and ingestion processes across various structured and unstructured data sources. • Strong experience with AWS Glue: building ETL pipelines, managing crawlers, and working with the Glue Data Catalog. • Proficiency in AWS Redshift: designing and managing Redshift clusters, writing complex SQL queries, and optimizing query performance. • Enable data consumption from reporting and analytics business applications using AWS services (e.g., QuickSight, SageMaker, JDBC/ODBC connectivity, etc.). Kindly confirm your availability by applying to this job.
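A minimal boto3 sketch of the Glue crawler-and-job automation this JD mentions; the crawler name, IAM role ARN, S3 path, catalog database, and job name are placeholders:

```python
import boto3

# Sketch: create and run a crawler to refresh the Glue Data Catalog,
# then start the ETL job that consumes the crawled tables.
glue = boto3.client("glue")

glue.create_crawler(
    Name="example-sales-crawler",
    Role="arn:aws:iam::123456789012:role/example-glue-role",
    DatabaseName="sales_catalog",
    Targets={"S3Targets": [{"Path": "s3://example-raw/sales/"}]},
)
glue.start_crawler(Name="example-sales-crawler")

# Once cataloging is done, kick off the downstream ETL job with a run argument.
run = glue.start_job_run(
    JobName="example-sales-etl",
    Arguments={"--run_date": "2024-01-31"},
)
print("Glue job run id:", run["JobRunId"])
```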
Posted 1 week ago
9.0 - 14.0 years
20 - 30 Lacs
Bengaluru
Hybrid
My profile: linkedin.com/in/yashsharma1608. Hiring manager profile: on payroll of https://www.nyxtech.in/. Client: Brillio (payroll). Role: AWS Architect. Primary skills: AWS (Redshift, Glue, Lambda, ETL, and Aurora), advanced SQL and Python, PySpark. Note: Aurora Database is a mandatory skill. Experience: 9+ years. Notice period: immediate joiner. Location: any Brillio location (Bangalore preferred). Budget: 30 LPA. Job Description: 9+ years of IT experience with deep expertise in S3, Redshift, Aurora, Glue, and Lambda services. At least one instance of proven experience in developing a data platform end to end using AWS. Hands-on programming experience with data frames and Python, and unit testing of the Python as well as Glue code. Experience in orchestration mechanisms like Airflow, Step Functions, etc. Experience working on AWS Redshift is mandatory; must have experience writing stored procedures, understanding of the Redshift Data API, and writing federated queries. Experience in Redshift performance tuning. Good communication and problem solving. Very good stakeholder communication and management.
Posted 1 week ago
12.0 - 17.0 years
14 - 19 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled, hands-on, and technically proficient Test Automation Engineering Manager with strong experience in data quality, data integration, and a specific focus on semantic layer validation. This role combines technical ownership of automated data testing solutions with team leadership responsibilities, ensuring that the data infrastructure across platforms remains accurate, reliable, and high performing. As a leader in the QA and Data Engineering space, you will be responsible for building robust automated testing frameworks, validating GraphQL-based data layers, and driving the team's technical growth. Your work will ensure that all data flows, transformations, and API interactions meet enterprise-grade quality standards across the data lifecycle. You will be responsible for the end-to-end design and development of test automation frameworks, working collaboratively with your team. As the delivery owner for test automation, your primary responsibilities will include building and automating comprehensive validation frameworks for semantic layer testing, GraphQL API validation, and schema compliance, ensuring alignment with data quality, performance, and integration reliability standards. You will also work closely with data engineers, product teams, and platform architects to validate data contracts and integration logic, supporting the integrity and trustworthiness of enterprise data solutions. This is a highly technical and hands-on role, with a strong emphasis on automation, data workflow validation, and the seamless integration of testing practices into CI/CD pipelines. Roles & Responsibilities: Design and implement robust data validation frameworks focused on the semantic layer, ensuring accurate data models, schema compliance, and contract adherence across services and platforms. Build and automate end-to-end data pipeline validations across ingestion, transformation, and consumption layers using Databricks, Apache Spark, and AWS services such as S3, Glue, Athena, and Lake Formation. Lead test automation initiatives by developing scalable, modular test frameworks and embedding them into CI/CD pipelines for continuous validation of semantic models, API integrations, and data workflows. Validate GraphQL APIs by testing query/mutation structures, schema compliance, and end-to-end integration accuracy using tools like Postman, Python, and custom test suites. Oversee UI and visualization testing for tools like Tableau, Power BI, and custom front-end dashboards, ensuring consistency with backend data through Selenium with Python and backend validations. Define and drive the overall QA strategy with emphasis on performance, reliability, and semantic data accuracy, while setting up alerting and reporting mechanisms for test failures, schema issues, and data contract violations. Collaborate closely with product managers, data engineers, developers, and DevOps teams to align quality assurance initiatives with business goals and agile release cycles. Actively contribute to architecture and design discussions, ensuring quality and testability are embedded from the earliest stages of development. Mentor and manage QA engineers, fostering a collaborative environment focused on technical excellence, knowledge sharing, and continuous professional growth. Must-Have Skills: Team leadership experience is required. 6+ years of strong experience in DataOps/testing is required; 7 to 12 years of overall experience in test automation is expected.
Strong experience in designing and implementing test automation frameworks integrated with CI/CD pipelines. Expertise in validating data pipelines at the syntactic layer, including schema checks, null/duplicate handling, and transformation validation. Hands-on experience with Databricks, Apache Spark, and AWS services (S3, Glue, Athena, Lake Formation). Proficiency in Python, PySpark, and SQL for writing validation scripts and automation logic. Solid understanding of GraphQL APIs, including schema validation and query/mutation testing. Experience with API testing tools like Postman and Python-based test frameworks. Proficient in UI and visualization testing using Selenium with Python, especially for tools like Tableau, Power BI, or custom dashboards. Familiarity with CI/CD tools such as Jenkins, GitHub Actions, or GitLab CI for test orchestration. Ability to implement alerting and reporting for test failures, anomalies, and validation issues. Strong background in defining QA strategies and leading test automation initiatives in data-centric environments. Excellent collaboration and communication skills, with the ability to work closely with cross-functional teams in Agile settings. Mentor and manage QA engineers, fostering a collaborative environment focused on technical excellence, knowledge sharing, and continuous professional growth. Good-to-Have Skills: Experience with data governance tools such as Apache Atlas, Collibra, or Alation. Understanding of DataOps methodologies and practices. Contributions to internal quality dashboards or data observability systems. Awareness of metadata-driven testing approaches and lineage-based validations. Experience working with agile testing methodologies such as Scaled Agile. Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest. Education and Professional Certifications: Bachelor's/Master's degree in computer science and engineering preferred. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills. EQUAL OPPORTUNITY STATEMENT: We provide reasonable accommodations for individuals with disabilities during the application, interview process, job functions, and employment benefits. Contact us to request an accommodation.
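To make the GraphQL validation requirement concrete, here is a minimal pytest-style sketch using requests; the endpoint URL, query, and expected fields are hypothetical:

```python
import requests

# Sketch of a GraphQL contract/schema compliance check written as a pytest test.
GRAPHQL_URL = "https://example-data-platform/graphql"  # placeholder endpoint


def test_customer_query_schema_compliance():
    query = """
    query {
      customer(id: "C-1001") {
        customerId
        accountStatus
        createdAt
      }
    }
    """
    resp = requests.post(GRAPHQL_URL, json={"query": query}, timeout=30)
    assert resp.status_code == 200

    body = resp.json()
    assert "errors" not in body, body.get("errors")

    customer = body["data"]["customer"]
    # Contract assertions: required fields are present and non-null.
    for field in ("customerId", "accountStatus", "createdAt"):
        assert customer.get(field) is not None, f"missing or null field: {field}"
```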
Posted 1 week ago
6.0 - 9.0 years
8 - 11 Lacs
Hyderabad
Work from Office
Role Description: We are seeking a highly skilled, hands-on Senior QA & Test Automation Specialist (Test Automation Engineer) with strong experience in data validation, ETL testing, test automation, and QA process ownership. This role combines deep technical execution with a solid foundation in QA best practices, including test planning, defect tracking, and test lifecycle management. You will be responsible for designing and executing manual and automated test strategies for complex real-time and batch data pipelines, contributing to the design of automation frameworks, and ensuring high-quality data delivery across our AWS and Databricks-based analytics platforms. The role is highly technical and hands-on, with a strong focus on automation, metadata validation, and ensuring data governance practices are seamlessly integrated into development pipelines. Roles & Responsibilities: Collaborate with the QA Manager to design and implement end-to-end test strategies for data validation, semantic layer testing, and GraphQL API validation. Perform manual validation of data pipelines, including source-to-target data mapping, transformation logic, and business rule verification. Develop and maintain automated data validation scripts using Python and PySpark for both real-time and batch pipelines. Contribute to the design and enhancement of reusable automation frameworks, with components for schema validation, data reconciliation, and anomaly detection. Validate semantic layers (e.g., Looker, dbt models) and GraphQL APIs, ensuring data consistency, compliance with contracts, and alignment with business expectations. Write and manage test plans, test cases, and test data for structured, semi-structured, and unstructured data. Track, manage, and report defects using tools like JIRA, ensuring thorough root cause analysis and timely resolution. Collaborate with data engineers, product managers, and DevOps teams to integrate tests into CI/CD pipelines and enable shift-left testing practices. Ensure comprehensive test coverage for all aspects of the data lifecycle, including ingestion, transformation, delivery, and consumption. Participate in QA ceremonies (standups, planning, retrospectives) and continuously contribute to improving the QA process and culture. Experience building or maintaining test data generators. Contributions to internal quality dashboards or data observability systems. Awareness of metadata-driven testing approaches and lineage-based validations. Experience working with agile testing methodologies such as Scaled Agile. Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest. Must-Have Skills: 6-9 years of experience in QA roles, with at least 3+ years of strong exposure to data pipeline testing and ETL validation. Strong in SQL, Python, and optionally PySpark; comfortable writing complex queries and validation scripts. Practical experience with manual validation of data pipelines and source-to-target testing. Experience in validating GraphQL APIs, semantic layers (Looker, dbt, etc.), and schema/data contract compliance. Familiarity with data integration tools and platforms such as Databricks, AWS Glue, Redshift, Athena, or BigQuery. Strong understanding of test planning, defect tracking, bug lifecycle management, and QA documentation. Experience working in Agile/Scrum environments with standard QA processes. Knowledge of test case and defect management tools (e.g., JIRA, TestRail, Zephyr).
Strong understanding of QA methodologies, test planning, test case design, and defect lifecycle management. Deep hands-on expertise in SQL, Python, and PySpark for testing and automating validation. Proven experience in manual and automated testing of batch and real-time data pipelines. Familiarity with data processing and analytics stacks: Databricks, Spark, AWS (Glue, S3, Athena, Redshift). Experience with bug tracking and test management tools like JIRA, TestRail, or Zephyr. Ability to troubleshoot data issues independently and collaborate with engineering for root cause analysis. Experience integrating automated tests into CI/CD pipelines (e.g., Jenkins, GitHub Actions). Experience validating data from various file formats such as JSON, CSV, Parquet, and Avro. Strong ability to validate and automate data quality checks: schema validation, null checks, duplicates, thresholds, and transformation validation. Hands-on experience with API testing using Postman, pytest, or custom automation scripts. Good-to-Have Skills: Experience with data governance tools such as Apache Atlas, Collibra, or Alation. Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch. Education and Professional Certifications: Bachelor's/Master's degree in computer science and engineering preferred. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
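A minimal PySpark sketch of the automated data quality checks listed above (schema, null, and duplicate validation); the table path, expected columns, and key fields are placeholders:

```python
from pyspark.sql import SparkSession

# Sketch of automated data-quality checks for a pipeline output table.
spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-curated/orders_enriched/")  # placeholder path

# Schema check: the expected columns must all be present.
expected_columns = {"order_id", "customer_id", "order_amount", "order_date"}
assert expected_columns.issubset(set(df.columns)), "schema drift: missing expected columns"

# Null check on key columns.
null_keys = df.filter("order_id IS NULL OR customer_id IS NULL").count()
assert null_keys == 0, f"{null_keys} rows with null keys"

# Duplicate check on the primary key.
duplicates = df.groupBy("order_id").count().filter("count > 1").count()
assert duplicates == 0, f"{duplicates} duplicate order_id values"
```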
Posted 1 week ago
3.0 - 7.0 years
4 - 7 Lacs
Hyderabad
Work from Office
What you will do: Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and driving data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing. Be a key team member that assists in design and development of the data pipeline. Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks. Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs. Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Collaborate and communicate effectively with product teams. Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions. Identify and resolve complex data-related challenges. Adhere to standard methodologies for coding, testing, and designing reusable code/components. Explore new tools and technologies that will help to improve ETL platform performance. Participate in sprint planning meetings and provide estimations on technical implementation. What we expect of you: We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Doctorate degree OR Master’s degree and 4 to 6 years of Computer Science, IT, or related field experience OR Bachelor’s degree and 6 to 8 years of Computer Science, IT, or related field experience OR Diploma and 10 to 12 years of Computer Science, IT, or related field experience. Preferred Qualifications - Functional Skills: Must-Have Skills: Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience in using Databricks for building ETL pipelines and handling big data processing. Experience with data warehousing platforms such as Amazon Redshift or Snowflake. Strong knowledge of SQL and experience with relational (e.g., PostgreSQL, MySQL) databases. Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets.
Experienced with software engineering best practices, including but not limited to version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps. Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA). Good-to-Have Skills: Experience with cloud platforms such as AWS, particularly in data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena). Strong understanding of data modeling, data warehousing, and data integration concepts. Understanding of machine learning pipelines and frameworks for ML/AI models. Professional Certifications: AWS Certified Data Engineer (preferred); Databricks Certified (preferred). Soft Skills: Excellent critical-thinking and problem-solving skills. Strong communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills. Equal opportunity statement: Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation. What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com. As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
8.0 - 11.0 years
7 - 11 Lacs
Hyderabad
Work from Office
HIH - Software Engineering Associate Advisor. Position Overview: The successful candidate will be a member of our US Medical Integration Solutions ETL team. They will play a major role in the design and development of the ETL application in support of various portfolio projects. Responsibilities: Analyze business requirements and translate them into ETL architecture and data rules. Serve as advisor and subject matter expert on project teams. Manage both employees and consultants on multiple ETL projects. Oversee and review all design and coding from developers to ensure they follow company standards and best practices, as well as architectural direction. Assist in data analysis and metadata management. Test planning and execution. Effectively operate within a team of technical and business professionals. Assess new talent and mentor direct reports on best practices. Review all designs and code from developers. Qualifications - Desired Skills & Experience: 8-11 years of experience in Java, Python, and PySpark to support new development as well as existing applications. 7+ years of experience with cloud technologies, specifically AWS. Experience in AWS services such as Lambda, Glue, S3, MWAA, API Gateway and Route 53, DynamoDB, RDS MySQL, SQS, CloudWatch, Secrets Manager, KMS, IAM, EC2 and Auto Scaling Groups, VPC and security groups. Experience with Boto3, Pandas, and Terraform for building infrastructure as code. Experience with the IBM DataStage ETL tool. Experience with CI/CD methodologies and processes and the development of these processes. DevOps experience. Knowledge of writing SQL. Data mapping: source to target, target to multiple formats. Experience in the development of data extraction and load processes in a parallel framework. Understanding of normalized and de-normalized data repositories. Ability to define ETL standards and processes. SQL Standards / Processes / Tools: Mapping of data sources; ETL development, monitoring, reporting, and metrics; focus on data quality. Experience with DB2/z/OS, Oracle, SQL Server, Teradata, and other database environments. Unix experience. Excellent problem solving and organizational skills. Strong teamwork and interpersonal skills and ability to communicate with all management levels. Leads others toward technical accomplishments and collaborative project team efforts. Very strong communication skills, both verbal and written, including technical writing. Strong analytical and conceptual skills. Location & Hours of Work: (Specify whether the position is remote, hybrid, or in-office, where the role is located, and the required hours of work.) About Evernorth Health Services: Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
Posted 1 week ago
3.0 - 7.0 years
2 - 6 Lacs
Hyderabad, Pune, Gurugram
Work from Office
Location: Pune, Hyderabad, Gurgaon, Bangalore [Hybrid]. Skills: Python, PySpark, SQL, AWS services - AWS Glue, S3, IAM, Athena, AWS CloudFormation, AWS CodePipeline, AWS Lambda, Transfer Family, AWS Lake Formation, and CloudWatch; CI/CD automation of AWS CloudFormation stacks.
Posted 1 week ago
5.0 - 8.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Information: Job Opening ID ZR_1634_JOB; Date Opened 12/12/2022; Industry Technology; Work Experience 5-8 years; Job Title AWS-BIGDATA-DEVELOPER; City Bangalore; Province Karnataka; Country India; Postal Code 560001; Number of Positions 4. Roles and Responsibilities: Experience in AWS Glue. Experience with one or more of the following: Spark, Scala, Python, and/or R. Experience in API development with NodeJS. Experience with AWS (S3, EC2) or another cloud provider. Experience in data virtualization tools like Dremio and Athena is a plus. Should be technically proficient in big data concepts. Should be technically proficient in Hadoop and NoSQL (MongoDB). Good communication and documentation skills.
Posted 1 week ago
6.0 - 10.0 years
1 - 4 Lacs
Pune
Work from Office
Job Information: Job Opening ID ZR_1594_JOB; Date Opened 29/11/2022; Industry Technology; Work Experience 6-10 years; Job Title AWS Glue Engineer; City Pune; Province Maharashtra; Country India; Postal Code 411001; Number of Positions 4. Roles & Responsibilities: Provides expert-level development, system analysis, design, and implementation of applications using AWS services, specifically Python for Lambda. Translates technical specifications and/or design models into code for new or enhancement projects (for internal or external clients). Develops code that reuses objects, is well structured, includes sufficient comments, and is easy to maintain. Provides follow-up production support when needed. Submits change control requests and documents. Participates in design, code, and test inspections throughout the life cycle to identify issues and ensure methodology compliance. Participates in systems analysis activities, including system requirements analysis and definition, e.g. prototyping. Participates in other meetings, such as those for use case creation and analysis. Performs unit testing and writes appropriate unit test plans to ensure requirements are satisfied. Assists in integration, systems acceptance, and other related testing as needed. Ensures developed code is optimized to meet client performance specifications associated with page rendering time by completing page performance tests. Technical Skills Required: Experience in building large-scale batch and data pipelines with data processing frameworks on the AWS cloud platform using PySpark (on EMR) and Glue ETL. Deep experience in developing data processing and data manipulation tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations. Experience in deploying and operationalizing code using CI/CD tools (Bitbucket and Bamboo). Strong AWS cloud computing experience, with extensive experience in Lambda, S3, EMR, and Redshift. Should have worked on data warehouse/database technologies for at least 8 years. Any AWS certification will be an added advantage.
Posted 1 week ago
5.0 - 8.0 years
2 - 6 Lacs
Chennai
Work from Office
Job Information: Job Opening ID ZR_1668_JOB; Date Opened 19/12/2022; Industry Technology; Work Experience 5-8 years; Job Title Sr. AWS Developer; City Chennai; Province Tamil Nadu; Country India; Postal Code 600001; Number of Positions 4. Tech stack: AWS Lambda, Glue, Kafka/Kinesis, RDBMS (Oracle, MySQL, Redshift, PostgreSQL, Snowflake), Gateway, CloudFormation/Terraform, Step Functions, CloudWatch, Python, PySpark. Job role & responsibilities: Looking for a Software Engineer/Senior Software Engineer with hands-on experience in ETL projects and extensive knowledge of building data processing systems with Python, PySpark, and cloud technologies (AWS). Experience in development on the AWS cloud (S3, Redshift, Aurora, Glue, Lambda, Hive, Kinesis, Spark, Hadoop/EMR). Required Skills: Amazon Kinesis, Amazon Aurora, data warehouse, SQL, AWS Lambda, Spark, AWS QuickSight, advanced Python skills, data engineering ETL and ELT skills, experience with cloud platforms (AWS, GCP, or Azure). Mandatory skills: data warehouse, ETL, SQL, Python, AWS Lambda, Glue, AWS Redshift.
Posted 1 week ago
6.0 - 9.0 years
8 - 11 Lacs
Mumbai, Hyderabad, Chennai
Work from Office
About the Role: Grade Level (for internal use): 10. S&P Dow Jones Indices. The Role: S&P Dow Jones Indices, a global leader in providing investable and benchmark indices to the financial markets, is looking for a Java Application Developer to join our technology team. The Location: Mumbai/Hyderabad/Chennai. The Team: You will be part of a global technology team comprising Dev, QA, and BA teams and will be responsible for analysis, design, development, and testing. The Impact: You will be working on one of the core technology platforms responsible for the end-of-day calculation as well as dissemination of index values. What's in it for you: You will have the opportunity to work on enhancements to the existing index calculation system as well as implement new methodologies as required. Responsibilities: Design and development of Java applications for SPDJI web sites and their feeder systems. Participate in multiple software development processes including coding, testing, debugging, and documentation. Develop software applications based on clear business specifications. Work on new initiatives and support existing index applications. Perform application and system performance tuning and troubleshoot performance issues. Develop web-based applications and build rich front-end user interfaces. Build applications with object-oriented concepts and apply design patterns. Integrate in-house applications with various vendor software platforms. Set up development environments / sandboxes for application development. Check application code changes into the source repository. Perform unit testing of application code and fix errors. Interface with databases to extract information and build reports. Effectively interact with customers, business users, and IT staff. What we're looking for - Basic Qualification: Bachelor's degree in Computer Science, Information Systems, or Engineering is required, or in lieu, a demonstrated equivalence in work experience. 6 to 9 years of IT experience in application development and support. Strong experience with Java, J2EE, JMS, and EJBs. Advanced SQL and basic PL/SQL programming. Basic networking knowledge / Unix scripting. Exposure to UI technologies like React JS. Basic understanding of AWS cloud (EC2, EMR, Lambda, S3, Glue, etc.). Excellent communication and interpersonal skills are essential, with strong verbal and writing proficiencies. Preferred Qualification: Experience working with large datasets in Equity, Commodities, Forex, Futures, and Options asset classes. Experience with index/benchmark, asset management, or trading platforms. Basic knowledge of user interface design and development using jQuery, HTML5, and CSS.
Posted 1 week ago
5.0 - 9.0 years
7 - 11 Lacs
Hyderabad
Work from Office
About the Role: Grade Level (for internal use): 11. The Role: Lead Software Engineering. The Team: Our team is responsible for the architecture, design, development, and maintenance of technology solutions to support the Sustainability business unit within Market Intelligence and other divisions. Our program is built on a foundation of inclusivity, enablement, adaptability, and respect, which fosters an environment of open communication and trust. We take pride in each team member's accountability and responsibility to move us forward in our strategic initiatives. Our work is collaborative; we work transparently with others within our business unit and across the entire organization. The Impact: As a Lead, Cloud Engineering at S&P Global, you will be instrumental in streamlining the software development and deployment of our applications to meet the needs of our business. Your work ensures seamless integration and continuous delivery, enhancing the platform's operational capabilities to support our business units. You will collaborate with software engineers and data architects to automate processes, improve system reliability, and implement monitoring solutions. Your contributions will be vital in maintaining high availability, security, and performance standards, ultimately leading to the delivery of impactful, data-driven solutions. What's in it for you: Career Development: Build a meaningful career with a leading global company at the forefront of technology. Dynamic Work Environment: Work in an environment that is dynamic and forward-thinking, directly contributing to innovative solutions. Skill Enhancement: Enhance your software development skills on an enterprise-level platform. Versatile Experience: Gain full-stack experience and exposure to cloud technologies. Leadership Opportunities: Mentor peers and influence the product's future as part of a skilled team. Key Responsibilities: Design and develop scalable cloud applications using various cloud services. Collaborate with cross-functional teams to define, design, and deliver new features. Implement cloud security best practices and ensure compliance with industry standards. Monitor and optimize application performance and reliability in the cloud environment. Troubleshoot and resolve issues related to our applications and services. Stay updated with the latest cloud technologies and trends. Manage our cloud instances and their lifecycle to guarantee a high degree of reliability, security, scalability, and confidence at any given time. Design and implement CI/CD pipelines to automate software delivery and infrastructure changes. Collaborate with development and operations teams to improve collaboration and productivity. Manage and optimize cloud infrastructure and services. Implement configuration management tools and practices. Ensure security best practices are followed in the deployment process. What We're Looking For: Bachelor's degree in Computer Science or a related field. Minimum of 10+ years of experience in a cloud engineering or related role. Proven experience in cloud development and deployment. Proven experience in agile and project management. Expertise with cloud services (AWS, Azure, Google Cloud). Experience in EMR, EKS, Glue, Terraform, and cloud security. Proficiency in programming languages such as Python, Java, Scala, and Spark. Strong implementation experience in AWS services (e.g., EC2, ECS, ELB, RDS, EFS, EBS, VPC, IAM, CloudFront, CloudWatch, Lambda, S3).
Proficiency in scripting languages such as Bash, Python, or PowerShell. Experience with CI/CD tools like Azure CI/CD. Experience in SQL and MS SQL Server. Knowledge of containerization technologies like Docker and Kubernetes. Nice to have: Knowledge of GitHub Actions, Redshift, and machine learning frameworks. Excellent problem-solving and communication skills. Ability to quickly, efficiently, and effectively define and prototype solutions with continual iteration within aggressive product deadlines. Demonstrate strong communication and documentation skills for both technical and non-technical audiences.
Posted 1 week ago
6.0 - 11.0 years
25 - 30 Lacs
Bengaluru
Hybrid
Mandatory Skills: Data Engineer, AWS Athena, AWS Glue, Redshift, Data Lake, Lakehouse, Python, SQL Server. Must-Have Experience: 6+ years of hands-on data engineering experience. Expertise with AWS services: S3, Redshift, EMR, Glue, Kinesis, DynamoDB. Building batch and real-time data pipelines. Python and SQL coding for data processing and analysis. Data modeling experience using cloud-based data platforms like Redshift, Snowflake, Databricks. Design and develop ETL frameworks. Nice-to-Have Experience: ETL development using tools like Informatica, Talend, Fivetran. Creating reusable data sources and dashboards for self-service analytics. Experience using Databricks for Spark workloads or Snowflake. Working knowledge of big data processing. CI/CD setup. Infrastructure-as-code implementation. Any one of the AWS Professional Certifications.
Posted 1 week ago
5.0 - 10.0 years
2 - 6 Lacs
Gurugram
Work from Office
Skills: Primary Skills: Enhancements, new development, defect resolution, and production support of ETL development using AWS native services. Integration of data sets using AWS services such as Glue and Lambda functions. Utilization of AWS SNS to send emails and alerts. Authoring ETL processes using Python and PySpark. ETL process monitoring using CloudWatch events. Connecting with different data sources like S3 and validating data using Athena. Experience in CI/CD using GitHub Actions. Proficiency in Agile methodology. Extensive working experience with advanced SQL and a complex understanding of SQL. Competencies / Experience: Deep technical skills in AWS Glue (Crawler, Data Catalog) - 5 years. Hands-on experience with Python and PySpark - 3 years. PL/SQL experience - 3 years. CloudFormation and Terraform - 2 years. CI/CD with GitHub Actions - 1 year. Experience with BI systems (Power BI, Tableau) - 1 year. Good understanding of AWS services like S3, SNS, Secrets Manager, Athena, and Lambda - 2 years.
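A short boto3 sketch of the SNS alerting and Athena validation pieces described above; the topic ARN, database, table, and query output location are placeholders:

```python
import boto3

# Sketch: publish an ETL failure alert via SNS and validate loaded data with Athena.
sns = boto3.client("sns")
athena = boto3.client("athena")


def alert_failure(job_name: str, error: str) -> None:
    # Send an email/alert through a placeholder SNS topic.
    sns.publish(
        TopicArn="arn:aws:sns:ap-south-1:123456789012:etl-alerts",
        Subject=f"ETL failure: {job_name}",
        Message=error,
    )


def validate_row_count() -> str:
    # Run a validation query against data landed in S3 via a placeholder Athena table.
    resp = athena.start_query_execution(
        QueryString=(
            "SELECT COUNT(*) FROM curated.orders_enriched "
            "WHERE order_date = DATE '2024-01-31'"
        ),
        QueryExecutionContext={"Database": "curated"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    return resp["QueryExecutionId"]
```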
Posted 2 weeks ago
2.0 - 7.0 years
2 - 7 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and driving data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing. Be a key team member that assists in design and development of the data pipeline. Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks. Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs. Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Collaborate and communicate effectively with product teams. Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions. Identify and resolve complex data-related challenges. Adhere to standard methodologies for coding, testing, and designing reusable code/components. Explore new tools and technologies that will help to improve ETL platform performance. Participate in sprint planning meetings and provide estimations on technical implementation. What we expect of you: We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Doctorate degree OR Master's degree and 4 to 6 years of Computer Science, IT, or related field experience OR Bachelor's degree and 6 to 8 years of Computer Science, IT, or related field experience OR Diploma and 10 to 12 years of Computer Science, IT, or related field experience. Must-Have Skills: Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience in using Databricks for building ETL pipelines and handling big data processing. Experience with data warehousing platforms such as Amazon Redshift or Snowflake. Strong knowledge of SQL and experience with relational (e.g., PostgreSQL, MySQL) databases. Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets.
Experienced with software engineering best practices, including but not limited to version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps. Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA). Good-to-Have Skills: Experience with cloud platforms such as AWS, particularly in data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena). Strong understanding of data modeling, data warehousing, and data integration concepts. Understanding of machine learning pipelines and frameworks for ML/AI models. Professional Certifications: AWS Certified Data Engineer (preferred); Databricks Certified (preferred). Soft Skills: Excellent critical-thinking and problem-solving skills. Strong communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills.
Posted 2 weeks ago
4.0 - 6.0 years
4 - 6 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and driving data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing. Be a key team member that assists in design and development of the data pipeline. Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks. Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs. Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Collaborate and communicate effectively with product teams. Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions. Identify and resolve complex data-related challenges. Adhere to standard methodologies for coding, testing, and designing reusable code/components. Explore new tools and technologies that will help to improve ETL platform performance. Participate in sprint planning meetings and provide estimations on technical implementation. What we expect of you: We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Doctorate degree OR Master's degree and 4 to 6 years of Computer Science, IT, or related field experience OR Bachelor's degree and 6 to 8 years of Computer Science, IT, or related field experience OR Diploma and 10 to 12 years of Computer Science, IT, or related field experience. Must-Have Skills: Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience in using Databricks for building ETL pipelines and handling big data processing. Experience with data warehousing platforms such as Amazon Redshift or Snowflake. Strong knowledge of SQL and experience with relational (e.g., PostgreSQL, MySQL) databases. Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets.
Experienced with software engineering best practices, including but not limited to version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps. Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA). Good-to-Have Skills: Experience with cloud platforms such as AWS, particularly in data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena). Strong understanding of data modeling, data warehousing, and data integration concepts. Understanding of machine learning pipelines and frameworks for ML/AI models. Professional Certifications: AWS Certified Data Engineer (preferred); Databricks Certified (preferred). Soft Skills: Excellent critical-thinking and problem-solving skills. Strong communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills.
Posted 2 weeks ago
In recent years, the demand for professionals with expertise in glue technologies has been on the rise in India. Glue jobs involve working with tools and platforms that help connect various systems and applications together seamlessly. This article aims to provide an overview of the glue job market in India, including top hiring locations, average salary ranges, career progression, related skills, and interview questions for aspiring job seekers.
Here are 5 major cities in India actively hiring for glue roles: 1. Bangalore 2. Pune 3. Hyderabad 4. Chennai 5. Mumbai
The estimated salary range for glue professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals with several years of experience can earn between INR 12-18 lakhs per annum.
In the field of glue technologies, a typical career progression may include roles such as: - Junior Developer - Senior Developer - Tech Lead - Architect
Apart from expertise in glue technologies, professionals in this field are often expected to have or develop skills in: - Data integration - ETL (Extract, Transform, Load) processes - Database management - Programming languages (e.g., Python, Java)
Here are 25 interview questions for glue roles:
- What is Glue in the context of data integration? (basic)
- Explain the difference between ETL and ELT. (basic)
- How would you handle data quality issues in a Glue job? (medium)
- Can you explain how Glue works with Apache Spark? (medium)
- What is the significance of schema evolution in Glue? (medium)
- How do you optimize Glue jobs for performance? (medium)
- Describe a scenario where you had to troubleshoot a failed Glue job. (medium)
- What is a bookmark in Glue and how is it used? (medium)
- How does Glue handle schema inference? (medium)
- Have you worked with AWS Glue DataBrew? If so, explain your experience. (medium)
- Explain how Glue handles schema evolution. (advanced)
- How does Glue support job bookmarks for incremental processing? (advanced)
- What are the differences between Glue ETL and Glue DataBrew? (advanced)
- How do you handle nested JSON structures in Glue transformations? (advanced)
- Explain a complex Glue job you have designed and implemented. (advanced)
- How does Glue handle dynamic frame operations? (advanced)
- What is the role of a Glue DynamicFrame in data transformation? (advanced)
- How do you handle schema changes in Glue jobs? (advanced)
- Explain how Glue can be integrated with other AWS services. (advanced)
- What are the limitations of Glue that you have encountered in your projects? (advanced)
- How do you monitor and debug Glue jobs in production environments? (advanced)
- Describe your experience with Glue job scheduling and orchestration. (advanced)
- How do you ensure security in Glue jobs that handle sensitive data? (advanced)
- Explain the concept of lazy evaluation in Glue. (advanced)
- How do you handle dependencies between Glue jobs in a workflow? (advanced)
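To illustrate two of the recurring topics above (DynamicFrames and job bookmarks), here is a minimal AWS Glue PySpark job sketch; the catalog database, table, and output path are placeholders, and it assumes job bookmarks are enabled on the Glue job:

```python
import sys

from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job boilerplate; transformation_ctx values are what bookmarks track.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a DynamicFrame from the Data Catalog; with bookmarks enabled,
# only data not yet processed in previous runs is read.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_catalog",
    table_name="raw_orders",
    transformation_ctx="source",
)

# DynamicFrames carry a flexible schema; ApplyMapping renames/casts fields.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[("order_id", "string", "order_id", "string"),
              ("amount", "string", "order_amount", "double")],
    transformation_ctx="mapped",
)

glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/orders/"},
    format="parquet",
    transformation_ctx="sink",
)

job.commit()  # commits the bookmark state for incremental processing
```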
As you prepare for interviews and explore opportunities in the glue job market in India, remember to showcase your expertise in glue technologies, related skills, and problem-solving abilities. With the right preparation and confidence, you can land a rewarding career in this dynamic and growing field. Good luck!