5 - 10 years
50 - 55 Lacs
Bengaluru
Hybrid
WHAT YOU'LL DO
As a member of the team, you will be responsible for developing, testing, and deploying data-driven software products in AWS Cloud. You will work with a team consisting of a data scientist, an enterprise architect, data engineers, and business users to enhance product feature sets.

Qualification & Skills:

Mandatory:
- Knowledge and experience in audit, expense, and firm-level data elements
- Knowledge and experience in audit and compliance products/processes
- End-to-end product development lifecycle knowledge/exposure
- Strong in AWS Cloud and associated services such as Elastic Beanstalk, SageMaker, EFS, S3, IAM, Glue, Lambda, SQS, SNS, KMS, encryption, and Secrets Manager
- Strong experience in Snowflake database operations
- Strong in SQL and the Python programming language
- Strong experience in a web development framework (Django)
- Strong experience in React and associated frameworks (Next.js, Tailwind, etc.)
- Experience in CI/CD pipelines and DevOps methodology
- Experience in SonarQube integration and best practices
- Implementation of security best practices for web applications and cloud infrastructure
- Knowledge of Wiz.io for security protocols on the AWS Cloud platform

Nice to Have:
- Knowledge of data architecture, data modeling, best practices, and security policies in the data management space
- Basic data science knowledge preferred
- Experience in KNIME/Tableau/Power BI

Experience & Education:
- Between 5 and 15 years of IT experience
- Bachelor's/Master's degree from an accredited college/university in a business-related or technology-related field
Posted 2 months ago
5 - 7 years
0 - 0 Lacs
Bengaluru
Work from Office
We are seeking a skilled and experienced Data Engineer Lead to join our team. The ideal candidate will have expertise in Apache Spark, PySpark, Python, and AWS services (particularly AWS Glue). You will be responsible for designing, building, and optimizing ETL processes and data workflows in the cloud, specifically on the AWS platform. Your work will focus on leveraging Spark-based frameworks, Python, and AWS services to efficiently process and manage large datasets.

Experience Range: 5 to 7 years

Key Responsibilities:
- Spark & PySpark Development: Design and implement scalable data processing pipelines using Apache Spark and PySpark to support large-scale data transformations.
- ETL Pipeline Development: Build, maintain, and optimize ETL processes for seamless data extraction, transformation, and loading across various data sources and destinations.
- AWS Glue Integration: Utilize AWS Glue to create, run, and monitor serverless ETL jobs for data transformations and integrations in the cloud.
- Python Scripting: Develop efficient, reusable Python scripts to support data manipulation, analysis, and transformation within the Spark and Glue environments.
- Data Pipeline Optimization: Ensure that all data workflows are optimized for performance, scalability, and cost-efficiency on the AWS Cloud platform.
- Collaboration: Work closely with data analysts, data scientists, and other engineering teams to create reliable data solutions that support business analytics and decision-making.
- Documentation & Best Practices: Maintain clear documentation of processes, workflows, and code while adhering to best practices in data engineering, cloud architecture, and ETL design.

Required Skills:
- Expertise in Apache Spark and PySpark for large-scale data processing and transformation.
- Hands-on experience with AWS Glue for building and managing ETL workflows in the cloud.
- Strong programming skills in Python, with experience in data manipulation, automation, and integration with Spark and Glue.
- In-depth knowledge of ETL principles and data pipeline design, including optimization techniques.
- Proficiency in working with AWS services such as S3, Glue, Lambda, and Redshift.
- Strong skills in writing optimized SQL queries, with a focus on performance tuning.
- Ability to translate complex business requirements into practical technical solutions.
- Familiarity with Apache Airflow for orchestrating data workflows.
- Knowledge of data warehousing concepts and cloud-native analytics tools.

Required Skills: AWS Glue, PySpark, Python.
Posted 2 months ago
3 - 5 years
5 - 9 Lacs
Mumbai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Python (Programming Language)
Good-to-have skills: Oracle Procedural Language Extensions to SQL (PL/SQL), AWS Architecture
Minimum 3 year(s) of experience is required
Educational Qualification: Any technical graduation

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using Python. Your typical day will involve working with Oracle Procedural Language Extensions to SQL (PL/SQL), AWS Architecture, and other related technologies to develop and maintain applications.

Key Responsibilities:
- Design and implement end-to-end solutions with Python in an AWS environment.
- Develop Node.js and Python code in an AWS environment.
- Create an inspiring team environment with an open communication culture. (For Leads)
- Monitor team performance and report on metrics. (For Leads)
- Discover training needs and provide coaching. (For Leads)
- Architect pilots and proof-of-concept efforts to spur innovation.
- Work in all stages of the development lifecycle.
- Automate manual data object creation and test cases. (For Leads)
- Ask smart questions, collaborate, team up, take risks, and champion new ideas. (For Leads)

Job Description/Skills:
- Strong hands-on experience in solutioning AWS components.
- Experience in CI/CD and DevOps pipelines.
- Experience in DevOps tools and development models.
- Experience working with AWS components (Step Functions, Lambda, EventBridge, S3, SNS, SQS, RDS, Glue, IAM, VPC, Secrets Manager, CloudWatch, DynamoDB).
- Experience with different version-control workflows using Git/Bitbucket.
- Experience in code coverage and fixing Sonar issues.
- Good knowledge of CI/CD pipelines using AWS/OpenShift.
- Good to have: front-end UI experience with React JS/Angular.
- Minimum 1 year of experience in Agile methodology.
- Deep understanding of database concepts and design for SQL (primarily) and NoSQL (secondarily): schema design, optimization, scalability, etc.
- Good experience with Git software version control and a good understanding of code branching strategies and organization for code reuse.
- Willing to work in B shift.

Qualification: Any technical graduation
Posted 2 months ago
3 - 5 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Python (Programming Language)
Good-to-have skills: Oracle Procedural Language Extensions to SQL (PL/SQL)
Minimum 3 year(s) of experience is required
Educational Qualification: Any graduation

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using Python. Your typical day will involve working with Oracle Procedural Language Extensions to SQL (PL/SQL) and collaborating with cross-functional teams to deliver impactful data-driven solutions.

Key Responsibilities:
- Design and implement end-to-end solutions with Python in an AWS environment.
- Develop Node.js and Python code in an AWS environment.
- Create an inspiring team environment with an open communication culture. (For Leads)
- Monitor team performance and report on metrics. (For Leads)
- Discover training needs and provide coaching. (For Leads)
- Architect pilots and proof-of-concept efforts to spur innovation.
- Work in all stages of the development lifecycle.
- Automate manual data object creation and test cases. (For Leads)
- Ask smart questions, collaborate, team up, take risks, and champion new ideas. (For Leads)

Job Description/Skills:
- Strong hands-on experience in solutioning AWS components.
- Experience in CI/CD and DevOps pipelines.
- Experience in DevOps tools and development models.
- Experience working with AWS components (Step Functions, Lambda, EventBridge, S3, SNS, SQS, RDS, Glue, IAM, VPC, Secrets Manager, CloudWatch, DynamoDB).
- Experience with different version-control workflows using Git/Bitbucket.
- Experience in code coverage and fixing Sonar issues.
- Good knowledge of CI/CD pipelines using AWS/OpenShift.
- Good to have: front-end UI experience with React JS/Angular.
- Minimum 1 year of experience in Agile methodology.
- Deep understanding of database concepts and design for SQL (primarily) and NoSQL (secondarily): schema design, optimization, scalability, etc.
- Good experience with Git software version control and a good understanding of code branching strategies and organization for code reuse.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Python.
- The ideal candidate will possess a strong educational background in computer science or a related field.
- This position is based at our Hyderabad office.
- The resource must be willing to work in B shift.

Qualification: Any graduation
Posted 2 months ago
3 - 5 years
5 - 8 Lacs
Hyderabad
Work from Office
Position Summary:
The Data Engineering Senior Analyst demonstrates expertise in data engineering technologies with a focus on engineering, innovation, strategic influence, and a product mindset. This individual will act as a key contributor on the team to design, build, test, and deliver large-scale software applications, systems, platforms, services, or technologies in the data engineering space, and will have the opportunity to work directly with partner IT and business teams, owning and driving major deliverables across all aspects of software delivery.

The candidate will play a key role in automating processes on Databricks and AWS, collaborating with business and technology partners to gather requirements, develop, and implement. The individual must have strong analytical and technical skills coupled with the ability to positively influence the delivery of data engineering products. The applicant will be working in a team that demands an innovation-, cloud-first, self-service-first, and automation-first mindset coupled with technical excellence, and will work with internal and external stakeholders and customers to build solutions as part of Enterprise Data Engineering, so very strong technical and communication skills are required.

- Delivery: Intermediate delivery skills, including the ability to deliver work at a steady, predictable pace to achieve commitments, decompose work assignments into small batch releases, and contribute to tradeoff and negotiation discussions.
- Domain Expertise: A demonstrated track record of domain expertise, including the ability to understand the technical concepts necessary to do the job effectively, demonstrate willingness, cooperation, and concern for business issues, and possess in-depth knowledge of the immediate systems worked on.
- Problem Solving: Proven problem-solving skills, including debugging skills that let you determine the source of issues in unfamiliar code or systems, the ability to recognize and solve repetitive problems rather than working around them, recognize mistakes and use them as learning opportunities, and break down large problems into smaller, more manageable ones.

Job Description & Responsibilities:
The candidate will be responsible for delivering business needs end to end, from requirements through development into production. Through a hands-on engineering approach in the Databricks environment, this individual will deliver data engineering toolchains, platform capabilities, and reusable patterns. The applicant will follow software engineering best practices with an automation-first approach and a continuous learning and improvement mindset, will ensure adherence to enterprise architecture direction and architectural standards, and should be able to collaborate in a high-performing team environment, with an ability to influence and be influenced by others.

Experience Required:
- 3 to 5 years of experience in software engineering, building data engineering pipelines, middleware and API development, and automation
- More than 3 years of experience in Databricks within an AWS environment
- Data engineering experience

Experience Desired:
- Expertise in Agile software development principles and patterns
- Expertise in building streaming, batch, and event-driven architectures and data pipelines

Primary Skills:
- Cloud-based security principles and protocols such as OAuth2, JWT, data encryption, hashing data, secret management, etc.
- Expertise in big data technologies such as Spark, Hadoop, Databricks, Snowflake, EMR, and Glue
- Good understanding of Kafka, Kafka Streams, Spark Structured Streaming, and configuration-driven data transformation and curation
- Expertise in building cloud-native microservices, containers, Kubernetes, and platform-as-a-service technologies such as OpenShift and Cloud Foundry
- Experience in multi-cloud software-as-a-service products such as Databricks and Snowflake
- Experience in Infrastructure-as-Code (IaC) tools such as Terraform and AWS CloudFormation
- Experience in messaging systems such as Apache ActiveMQ, WebSphere MQ, Apache Artemis, Kafka, and AWS SNS
- Experience in API and microservices stacks such as Spring Boot and Quarkus
- Expertise in cloud technologies such as AWS Glue, Lambda, S3, Elasticsearch, API Gateway, and CloudFront
- Experience with one or more of the following programming and scripting languages: Python, Scala, JVM-based languages, or JavaScript, and the ability to pick up new languages
- Experience in building CI/CD pipelines using Jenkins and GitHub Actions
- Strong expertise with source code management and its best practices
- Proficient in self-testing of applications, unit testing, use of mock frameworks, and test-driven development (TDD)
- Knowledge of the Behavior-Driven Development (BDD) approach

Additional Skills:
- Ability to perform detailed analysis of business problems and technical environments
- Strong oral and written communication skills
- Ability to think strategically, implement iteratively, and estimate the financial impact of design/architecture alternatives
- Continuous focus on ongoing learning and development
Posted 2 months ago
3 - 5 years
5 - 9 Lacs
Mumbai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: The ideal candidate will work in a team environment that demands technical excellence, whose members are expected to hold each other accountable for the overall success of the end product. The focus for this team is on the delivery of innovative solutions to complex problems, but also with a mind to drive simplicity in refining and supporting the solution by others.

Job Description & Responsibilities:
- Be accountable for delivery of business functionality.
- Work on the AWS cloud to migrate/re-engineer data and applications from on premise to cloud.
- Be responsible for engineering solutions conformant to enterprise standards, architecture, and technologies.
- Provide technical expertise through a hands-on approach, developing solutions that automate testing between systems.
- Perform peer code reviews, merge requests, and production releases.
- Implement design/functionality using Agile principles.
- Have a proven track record of quality software development and an ability to innovate outside of traditional architecture/software patterns when needed.
- Have a desire to collaborate in a high-performing team environment, and an ability to influence and be influenced by others.
- Have a quality mindset, not just for code quality but also to ensure ongoing data quality by monitoring data to identify problems before they have business impact.
- Be entrepreneurial and business minded; ask smart questions, take risks, and champion new ideas.
- Take ownership and accountability.

Experience Required:
- 3 to 5 years of experience in application program development

Experience Desired:
- Knowledge and/or experience with healthcare information domains.
- Documented experience in a business intelligence or analytic development role on a variety of large-scale projects.
- Documented experience working with databases larger than 5 TB and excellent data analysis skills.
- Experience with TDD/BDD.
- Experience working with Spark and real-time analytic frameworks.

Education and Training Required:
- Bachelor's degree in Engineering or Computer Science

Primary Skills:
- Python, Databricks, Teradata, SQL, UNIX, ETL, data structures, Looker, Tableau, Git, Jenkins, RESTful & GraphQL APIs.
- AWS services such as Glue, EMR, Lambda, Step Functions, CloudTrail, CloudWatch, SNS, SQS, S3, VPC, EC2, RDS, IAM.

Additional Skills:
- Ability to rapidly prototype and storyboard/wireframe development as part of application design.
- Write referenceable and modular code.
- Willingness to continuously learn and share learnings with others.
- Ability to communicate design processes, ideas, and solutions clearly and effectively to teams and clients.
- Ability to manipulate and transform large datasets efficiently.
- Excellent troubleshooting skills to root-cause complex issues.

Qualification: 15 years full-time education
Posted 2 months ago
3 - 5 years
6 - 8 Lacs
Hyderabad
Work from Office
Position Summary:
Cigna, a leading health services company, is looking for an exceptional engineer in our Data & Analytics Engineering organization. The Full Stack Engineer is responsible for the delivery of a business need, from understanding the requirements through deploying the software into production. This role requires you to be fluent in some of the critical technologies, proficient in others, and to have a hunger to learn on the job and add value to the business. Critical attributes of a Full Stack Engineer, among others, are ownership, eagerness to learn, and an open mindset. In addition to delivery, the Full Stack Engineer should have an automation-first and continuous-improvement mindset, driving the adoption of CI/CD tools and supporting the improvement of the tool sets and processes.

Job Description & Responsibilities:
- Design and architect the solution independently.
- Take ownership and accountability.
- Write referenceable and modular code.
- Be fluent in particular areas and proficient in many areas.
- Have a passion to learn.
- Have a quality mindset, not just for code quality but also to ensure ongoing data quality by monitoring data to identify problems before they have business impact.
- Take risks and champion new ideas.

Experience Required:
- 3+ years of experience in the listed skills in a Data Engineering role
- 3+ years of Python scripting experience
- 3+ years of data management and SQL expertise; Teradata and Snowflake experience strongly preferred
- 3+ years being part of Agile (Scrum) teams

Experience Desired:
- Experience with version management tools; Git preferred
- Experience with BDD and TDD development methodologies
- Experience working in agile CI/CD environments; Jenkins experience preferred
- Knowledge and/or experience with healthcare information domains preferred

Education and Training Required:
- Bachelor's degree (or equivalent) required

Primary Skills:
- Expertise with big data technologies: Hadoop, HiveQL, Spark (Scala/Python)
- Expertise with cloud technologies: AWS (S3, Glue, Terraform, Lambda, Aurora, Redshift, EMR)

Additional Skills:
- Experience working on analytical models and their deployment/production enablement via data and analytics pipelines
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue
Good-to-have skills: NA
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the entire application development process and ensuring its successful implementation. This role requires strong leadership skills and the ability to effectively communicate with stakeholders and team members.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Lead the design, development, and implementation of applications.
- Act as the primary point of contact for all application-related matters.
- Collaborate with stakeholders to gather requirements and understand business needs.
- Provide technical guidance and mentorship to the development team.
- Ensure the successful delivery of high-quality applications.
- Identify and mitigate risks and issues throughout the development process.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in AWS Glue.
- Strong understanding of cloud computing concepts and architecture.
- Experience with AWS services such as S3, Lambda, and Glue.
- Hands-on experience with ETL (Extract, Transform, Load) processes.
- Familiarity with data warehousing and data modeling concepts.
- Good-to-Have Skills: Experience with AWS Redshift.
- Knowledge of SQL and database management systems.
- Experience with data integration and data migration projects.

Additional Information:
- The candidate should have a minimum of 2 years of experience in AWS Glue.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Qualifications: 15 years full-time education
Posted 2 months ago
6 - 8 years
10 - 12 Lacs
Hyderabad
Work from Office
Position Summary:
The S.E. Lead Analyst demonstrates expertise in data engineering technologies with a focus on engineering, innovation, strategic influence, and a product mindset. This individual will act as a key contributor on the team to design, build, test, and deliver large-scale software applications, systems, platforms, services, or technologies in the data engineering space, and will have the opportunity to work directly with partner IT and business teams, owning and driving major deliverables across all aspects of software delivery.

The candidate will play a key role in automating processes on Databricks and AWS, collaborating with business and technology partners to gather requirements, develop, and implement. The individual must have strong analytical and technical skills coupled with the ability to positively influence the delivery of data engineering products. The applicant will be working in a team that demands an innovation-, cloud-first, self-service-first, and automation-first mindset coupled with technical excellence, and will work with internal and external stakeholders and customers to build solutions as part of Enterprise Data Engineering, so very strong technical and communication skills are required.

Job Description & Responsibilities:
The candidate will be responsible for delivering business needs end to end, from requirements through development into production. Through a hands-on engineering approach in the Databricks environment, this individual will deliver data engineering toolchains, platform capabilities, and reusable patterns. The applicant will follow software engineering best practices with an automation-first approach and a continuous learning and improvement mindset, will ensure adherence to enterprise architecture direction and architectural standards, and should be able to collaborate in a high-performing team environment, with an ability to influence and be influenced by others.

Experience Required:
- 6-8 years of experience in software engineering, building data engineering pipelines, middleware and API development, and automation
- More than 3 years of experience in Databricks within an AWS environment

Experience Desired:
- Expertise in Agile software development principles and patterns
- Expertise in building streaming, batch, and event-driven architectures and data pipelines

Primary Skills:
- Cloud-based security principles and protocols such as OAuth2, JWT, data encryption, hashing data, secret management, etc.
- Expertise in big data technologies such as Spark, Hadoop, Databricks, Snowflake, EMR, and Glue
- Good understanding of Kafka, Kafka Streams, Spark Structured Streaming, and configuration-driven data transformation and curation
- Expertise in building cloud-native microservices, containers, Kubernetes, and platform-as-a-service technologies such as OpenShift and Cloud Foundry
- Experience in multi-cloud software-as-a-service products such as Databricks and Snowflake
- Experience in Infrastructure-as-Code (IaC) tools such as Terraform and AWS CloudFormation
- Experience in messaging systems such as Apache ActiveMQ, WebSphere MQ, Apache Artemis, Kafka, and AWS SNS
- Experience in API and microservices stacks such as Spring Boot and Quarkus
- Expertise in cloud technologies such as AWS Glue, Lambda, S3, Elasticsearch, API Gateway, and CloudFront
- Experience with one or more of the following programming and scripting languages: Python, Scala, JVM-based languages, or JavaScript, and the ability to pick up new languages
- Experience in building CI/CD pipelines using Jenkins and GitHub Actions
- Strong expertise with source code management and its best practices
- Proficient in self-testing of applications, unit testing, use of mock frameworks, and test-driven development (TDD)
- Knowledge of the Behavior-Driven Development (BDD) approach

Additional Skills:
- Ability to perform detailed analysis of business problems and technical environments
- Strong oral and written communication skills
Posted 2 months ago
5 - 7 years
8 - 10 Lacs
Hyderabad
Work from Office
Position Overview:
The Data Platform and Analytics Services (DPaAS) team in Finance IT is looking for a DevOps Senior Analyst to contribute to, and provide guidance on, the shared cloud infrastructure/data platform for Corporate Applications and US Market Solutions. This is a key growth area for the Finance organization and will lay a strong foundation for the cloud-based data platform, enabling accurate and timely insights so stakeholders can make informed and strategic decisions.

The DevOps Analyst will be responsible for building a shared framework of cloud services, along with related tools and processes, enabling the build-out of new data platforms or the enhancement of existing ones by leveraging AWS and/or Oracle Cloud. The ideal candidate will have experience in the design, development, and automation of scalable cloud infrastructure supporting data workloads, and must possess a combination of systems, technology, and architecture experience optimizing cost, reliability, security, performance, and operational efficiency in order to drive innovation in both DevOps technology and processes.

Responsibilities:
- Collaborate with Solution/Data Architects to provision and automate cloud infrastructure for hosting data applications using Terraform.
- Create, maintain, and enhance pipelines for Continuous Integration (CI) and Continuous Deployment (CD) of infrastructure and application code.
- Implement Cigna's security standards and controls governing cloud-based systems by partnering with the information protection/security team.
- Adhere to Cigna's cloud compliance requirements for AWS accounts.
- Monitor and log important network, system, and application activity utilizing industry-standard tools; troubleshoot issues based on alerts and logs.
- Administer Linux- and Windows-based systems.
- Develop KPIs that provide in-depth visibility into system health.
- Establish interfaces with SAML/SSO providers in Cigna.
- Ensure high availability, scalability, and security of production systems.
- Maintain awareness of industry best practices in DevOps and evaluate their application to the data platform.

Qualifications:

Required Skills:
- A broad and deep technical understanding of the technologies in this field, including but not limited to IaaC, RaaS, and PaaC.
- Experience creating integration and deployment pipelines for data platforms is required.
- Experience in AWS compute (EC2, Lambda), networking (VPC, subnets, firewalls, etc.), storage (S3, EBS, EFS), security (IAM), encryption (KMS, TLS), data and analytics (Redshift, Glue, RDS), AI/ML (SageMaker), and containers (ECS, EKS) is needed for this role.
- Experience in open-source tools/technologies such as Airflow, Jenkins, Git, GitHub Actions, Terraform, Ansible, Prometheus, Grafana, Python, etc.
- AWS certifications (for example, AWS Certified DevOps Engineer) preferred.

Desired Skills:
- Excellent written and verbal communication skills to communicate effectively across teams and roles.
- Excellent analytical/troubleshooting skills and a willingness to learn and apply innovative technologies.
- Ability to work collaboratively in a fast-paced, agile environment.
- Demonstrable ability to deliver projects on time, with high quality, and within budget.

Required Experience & Education:
- Bachelor of Science in Computer Science, Software Engineering, IT, or a related technical discipline, or an equivalent combination of training and experience.
- 5+ years of hands-on technical expertise in DevOps using the AWS cloud.

Work Shift: 1 PM to 10 PM IST; door-to-door pickup and drop.
Posted 2 months ago
5 - 10 years
7 - 12 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Python (Programming Language)
Good-to-have skills: Oracle Procedural Language Extensions to SQL (PL/SQL), AWS Architecture
Minimum 5 year(s) of experience is required
Educational Qualification: Any technical graduation

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using Python. Your typical day will involve working with Oracle Procedural Language Extensions to SQL (PL/SQL) and AWS Architecture to develop and maintain applications.

Key Responsibilities:
- Design and implement end-to-end solutions with Python in an AWS environment.
- Develop Node.js and Python code in an AWS environment.
- Create an inspiring team environment with an open communication culture. (For Leads)
- Monitor team performance and report on metrics. (For Leads)
- Discover training needs and provide coaching. (For Leads)
- Architect pilots and proof-of-concept efforts to spur innovation.
- Work in all stages of the development lifecycle.
- Automate manual data object creation and test cases. (For Leads)
- Ask smart questions, collaborate, team up, take risks, and champion new ideas. (For Leads)

Job Description/Skills:
- Strong hands-on experience in solutioning AWS components.
- Experience in CI/CD and DevOps pipelines.
- Experience in DevOps tools and development models.
- Experience working with AWS components (Step Functions, Lambda, EventBridge, S3, SNS, SQS, RDS, Glue, IAM, VPC, Secrets Manager, CloudWatch, DynamoDB).
- Experience with different version-control workflows using Git/Bitbucket.
- Experience in code coverage and fixing Sonar issues.
- Good knowledge of CI/CD pipelines using AWS/OpenShift.
- Good to have: front-end UI experience with React JS/Angular.
- Minimum 1 year of experience in Agile methodology.
- Deep understanding of database concepts and design for SQL (primarily) and NoSQL (secondarily): schema design, optimization, scalability, etc.
- Good experience with Git software version control and a good understanding of code branching strategies and organization for code reuse.
- Willing to work in B shift.

Qualifications: Any technical graduation
Posted 2 months ago
3 - 7 years
10 - 20 Lacs
Pune
Work from Office
Job Description:
We are looking for data engineers who have the right attitude, aptitude, skills, empathy, compassion, and hunger for learning, to build products in the data analytics space. We value a passion for shipping high-quality data products, interest in the data products space, and curiosity about the bigger picture of building a company, product development, and its people.

Roles and Responsibilities:
- Develop and manage robust ETL pipelines using Apache Spark (Scala).
- Understand Spark concepts, performance optimization techniques, and governance tools.
- Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform, and load data from various systems to the Enterprise Data Warehouse/Data Lake/Data Mesh.
- Collaborate cross-functionally to design effective data solutions.
- Implement data workflows utilizing AWS Step Functions for efficient orchestration.
- Leverage AWS Glue and Crawler for seamless data cataloging and automation.
- Monitor, troubleshoot, and optimize pipeline performance and data quality.
- Maintain high coding standards and produce thorough documentation.
- Contribute to high-level (HLD) and low-level (LLD) design discussions.

Technical Skills:
- Minimum 3 years of progressive experience building solutions in big data environments.
- A strong ability to build robust and resilient data pipelines that are scalable, fault tolerant, and reliable in terms of data movement.
- 3+ years of hands-on expertise in Python, Spark, and Kafka.
- Strong command of AWS services like EMR, Redshift, Step Functions, AWS Glue, and AWS Crawler.
- Strong hands-on capabilities with SQL and NoSQL technologies.
- Sound understanding of data warehousing, modeling, and ETL concepts.
- Familiarity with High-Level Design (HLD) and Low-Level Design (LLD) principles.
- Excellent written and verbal communication skills.
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Tamil Nadu
Work from Office
Description:

Responsibilities:

Must have:
1. Strong expertise in SQL, Python, and PySpark.
2. Good knowledge of data warehousing techniques.
3. Good knowledge of AWS big data services and Snowflake.

- Design, develop, and maintain scalable data pipelines and architectures for data processing and integration.
- Implement data streaming solutions to handle real-time data ingestion and processing.
- Utilize Python and PySpark to develop and optimize data workflows.
- Leverage AWS services such as S3, Redshift, Glue, Kinesis, and Lambda for data storage, processing, and analytics.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Ensure data quality, integrity, and security across all data pipelines.
- Monitor and troubleshoot data pipeline performance and reliability.
- Mentor junior data engineers and provide technical guidance.
- Stay updated with the latest trends and technologies in data engineering and streaming.
- Explore and implement Generative AI (GenAI) solutions where applicable.

Qualifications:
- Bachelor's degree in computer science or engineering.
- 8+ years of experience in an ETL development role.
- Experience working with AWS, PySpark, and real-time streaming pipelines.
- Ability to communicate effectively.
- Strong process documentation skills.
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Maharashtra
Work from Office
Description:

Skills Required:
- Strong experience working in cloud-based engineering teams, preferably using AWS.
- Solid understanding of dimensional modelling.
- Thorough understanding of Python and SQL for data exploration and transformation, ideally within Spark environments (PySpark, Spark SQL) such as within Glue jobs.
- Strong written and oral communication and presentation skills, with the ability to translate business needs to system and software/data requirements.
- Strong analysis and analytical skills with attention to detail within complex systems and datasets, particularly regarding the relationships between them.
- Solid understanding of version control systems, specifically Git.
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Bengaluru
Work from Office
Data Support Engineer
Location: Bangalore
Experience: 6+ years
Rate: 30 LPA
AMVikas POC: Swati Patil

Key Responsibilities:

Database Development & Support:
- Write and optimize SQL queries, stored procedures, and views for data retrieval and transformation.
- Develop and maintain data pipelines to support business intelligence and analytics requirements.
- Support SQL Server and Amazon Redshift environments for data storage, transformation, and analytics.
- Ensure data integrity, security, and quality across all database solutions.

Operational Support:
- Monitor ETL logs and troubleshoot data pipeline issues to minimize downtime.
- Perform data validation and reconciliation to ensure data accuracy.
- Maintain Excel reports and updates as part of regular operational tasks.

Development & Automation:
- Utilize Python for automation, data processing, and workflow enhancements.
- Work with AWS services (e.g., S3, Redshift, Glue) to implement cloud-based data solutions.
- Assist in maintaining and optimizing legacy PHP code for database interactions (preferred).

Experience & Qualifications:
- Minimum 2 years of experience in database development, support, or data engineering roles.
- Strong SQL skills with experience in query optimization, stored procedures, and data provisioning.
- Hands-on experience with relational databases (SQL Server) and cloud data warehouses (Redshift).
- Python programming skills for automation and data transformation.
- AWS expertise in services like S3, Redshift, and Glue (preferred).
- Knowledge of Databricks and big data processing is a plus.
- Experience with data validation and reconciliation processes.
- Exposure to CI/CD, version control, and data governance best practices.
- Knowledge of PHP for database-related development and maintenance (preferred but not mandatory).

Preferred Skills:
- Experience in business intelligence and analytics environments.
- Ability to analyze data and provide insights and recommendations.
- Understanding of ETL processes and data pipeline monitoring.
- Strong troubleshooting skills for database and ETL issues.
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Pune
Work from Office
AWS Data Engineer (8-11 years)
Skills: Python, S3, RDS, Glue, Lambda, IAM, SNS, SQL, QuickSight
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Dadra and Nagar Haveli, Chandigarh
Work from Office
Data Engineer

Skills Required:
- Strong proficiency in PySpark, Scala, and Python
- Experience with AWS Glue

Experience Required: Minimum 5 years of relevant experience
Location: Available across all UST locations
Notice Period: Immediate joiners (candidates available to join by 31st January 2025)
SO - 22978624
Location - Chandigarh, Dadra & Nagar Haveli, Daman, Diu, Goa, Haveli, Jammu, Lakshadweep, Nagar, New Delhi, Puducherry, Sikkim
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Hyderabad
Work from Office
AWS Glue Developer
Location: Hyderabad/Pune/Kolkata
Experience: 6+ years
Budget: 17-26 LPA
Mandatory skills: AWS Glue, AWS Step Functions, PySpark, Python
Notice Period: Immediate
Grade: C1/C2

Technical skills:
- PySpark, Spark SQL, SQL, and Glue
- AWS cloud experience
- Good understanding of dimensional modelling
- Good understanding of DevOps, CloudOps, DataOps, and CI/CD, with an SRE mindset
- Understanding of Lakehouse and DW architecture
- Strong analysis and analytical skills
- Understanding of version control systems, specifically Git
- Strong in software engineering: APIs, microservices, etc.

Soft skills:
- Written and oral communication skills
- Ability to translate business needs to system and software/data requirements
Posted 2 months ago
6 - 10 years
15 - 30 Lacs
Chennai, Hyderabad, Kolkata
Work from Office
About Client: Hiring for one of our multinational corporations!

Job Description

Job Title: Snowflake Developer
Qualification: Graduate
Relevant Experience: 6 to 8 years

Must-Have Skills:
- Snowflake
- Python
- SQL

Roles and Responsibilities:
- Design, develop, and optimize Snowflake-based data solutions
- Write and maintain Python scripts for data processing and automation
- Work with cross-functional teams to implement scalable data pipelines
- Ensure data security and performance tuning in Snowflake
- Debug and troubleshoot database and data processing issues

Location: Kolkata, Hyderabad, Chennai, Mumbai
Notice Period: Up to 60 days
Mode of Work: On-site

Thanks & Regards,
Nushiba Taniya M
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, India.
Direct Number: 08067432408 | Nushiba@blackwhite.in | www.blackwhite.in
Posted 2 months ago
10 - 17 years
25 - 40 Lacs
Pune
Hybrid
AWS Data Architect - data engineering, data warehousing, AWS services
Posted 2 months ago
10 - 17 years
25 - 40 Lacs
Bengaluru
Hybrid
AWS Data Architect - data engineering, data warehousing, AWS services
Posted 2 months ago
10 - 20 years
25 - 40 Lacs
Bangalore Rural
Hybrid
AWS Redshift, AWS EMR, AWS S3, AWS Glue, AWS DMS, AWS Lambda, SNS, SQS, AWS Kinesis, IAM, VPC, etc.; migrating on-premise data warehouses to the AWS cloud; metadata management, governance, data quality, MDM, lineage, data catalog, etc.
Posted 2 months ago
10 - 20 years
25 - 40 Lacs
Pune
Hybrid
AWS Redshift, AWS EMR, AWS S3, AWS Glue, AWS DMS, AWS Lambda, SNS, SQS, AWS Kinesis, IAM, VPC, etc.; migrating on-premise data warehouses to the AWS cloud; metadata management, governance, data quality, MDM, lineage, data catalog, etc.
Posted 2 months ago
7 - 12 years
18 - 25 Lacs
Hyderabad
Work from Office
Job Description

Job Title: Java AWS Developer
Category: Software Development/Engineering
Main location: Hyderabad
Employment Type: Full Time
Qualification: Bachelor's degree in Computer Science or related field
Key skills: AWS Developer, AWS Data Engineer, AWS API Gateway, AWS RDS, AWS network, AWS Glue, AWS Lambda

We are seeking a highly skilled AWS Developer with 8-12 years of hands-on experience across a range of AWS services, including API creation, network architecture, and security. The ideal candidate will be proficient in AWS Lambda, Glue, API Gateway, Lattice, RDS, and Databricks, with a proven ability to design and deploy scalable serverless applications. This role requires expertise in data processing, data warehousing, and API development, with a strong focus on security, optimization, and teamwork.

Responsibilities:
- Design, develop, and deploy scalable, secure applications using AWS cloud-native technologies.
- Use AWS services like Lambda, Kinesis, Redshift, and API Gateway to build high-performance solutions.
- Architect serverless applications using AWS Lambda for event-driven computing.
- Build and maintain real-time or batched data pipelines.
- Design and optimize data warehouses with Amazon Redshift for large-scale data storage and analysis.
- Implement event-driven architectures for efficient communication between various AWS services.
- Write and optimize PySpark code for data processes to transform and load data.
- Ensure security and compliance with AWS best practices, including VPC, IAM, and encryption.
- Identify performance, scalability, and cost-saving opportunities within AWS environments.
- Create and maintain technical documentation for AWS architecture and processes.
- Maintain a good understanding and working knowledge of Git CI/CD pipelines for automated testing and deployment.
- Troubleshoot and resolve issues with AWS services to minimize production downtime.

Must-Have Skills:
- Proficiency in Java (minimum 7 years) and a wide range of AWS services, with a focus on Lambda, Kinesis, and Redshift (minimum 2.5 years).
- Strong skills in designing, implementing, and optimizing data warehousing solutions using Amazon Redshift.
Posted 2 months ago
2 - 4 years
5 - 10 Lacs
Chennai, Delhi NCR, Mumbai (All Areas)
Work from Office
Primary Skills: SQL, ETL, Hadoop, PySpark, Python, Glue, Lambda, basic AWS services
Good-to-Have Skills: AWS, Redshift
Posted 3 months ago
In recent years, demand for professionals with expertise in glue technologies has been rising in India. Glue jobs involve working with tools and platforms that connect various systems and applications seamlessly. This article provides an overview of the glue job market in India, including top hiring locations, average salary ranges, career progression, related skills, and interview questions for aspiring job seekers.
Here are 5 major cities in India actively hiring for glue roles:
1. Bangalore
2. Pune
3. Hyderabad
4. Chennai
5. Mumbai
The estimated salary range for glue professionals in India varies by experience level: entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn between INR 12-18 lakhs per annum.
In the field of glue technologies, a typical career progression may include roles such as:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect
Apart from expertise in glue technologies, professionals in this field are often expected to have or develop skills in (a short ETL sketch follows this list):
- Data integration
- ETL (Extract, Transform, Load) processes
- Database management
- Programming languages (e.g., Python, Java)
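Since ETL comes up in nearly every glue-role interview, here is a minimal, self-contained sketch of the extract-transform-load pattern in plain Python, using only the standard library. The CSV file name and the order schema are hypothetical placeholders, not part of any real system.

```python
import csv
import sqlite3

# Extract: read raw rows from a CSV export (hypothetical file and schema).
def extract(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Transform: normalize fields and drop rows that fail basic quality checks.
def transform(rows):
    cleaned = []
    for row in rows:
        try:
            cleaned.append((row["order_id"],
                            row["customer"].strip().title(),
                            float(row["amount"])))
        except (KeyError, ValueError):
            continue  # skip malformed rows rather than failing the whole load
    return cleaned

# Load: write the cleaned rows into a target table.
def load(rows, db_path="warehouse.db"):
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS orders "
                "(order_id TEXT, customer TEXT, amount REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("raw_orders.csv")))
```

Production glue jobs apply the same three-stage shape, just with Spark DataFrames and cloud storage in place of csv and sqlite3.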
Here are 25 interview questions for glue roles (several of the Glue-specific topics below, such as bookmarks and DynamicFrames, are illustrated in the script sketch after this list):
- What is Glue in the context of data integration? (basic)
- Explain the difference between ETL and ELT. (basic)
- How would you handle data quality issues in a glue job? (medium)
- Can you explain how Glue works with Apache Spark? (medium)
- What is the significance of schema evolution in Glue? (medium)
- How do you optimize Glue jobs for performance? (medium)
- Describe a scenario where you had to troubleshoot a failed Glue job. (medium)
- What is a bookmark in Glue and how is it used? (medium)
- How does Glue handle schema inference? (medium)
- Have you worked with AWS Glue DataBrew? If so, explain your experience. (medium)
- Explain how Glue handles schema evolution. (advanced)
- How does Glue support job bookmarks for incremental processing? (advanced)
- What are the differences between Glue ETL and Glue DataBrew? (advanced)
- How do you handle nested JSON structures in Glue transformations? (advanced)
- Explain a complex Glue job you have designed and implemented. (advanced)
- How does Glue handle dynamic frame operations? (advanced)
- What is the role of a Glue DynamicFrame in data transformation? (advanced)
- How do you handle schema changes in Glue jobs? (advanced)
- Explain how Glue can be integrated with other AWS services. (advanced)
- What are the limitations of Glue that you have encountered in your projects? (advanced)
- How do you monitor and debug Glue jobs in production environments? (advanced)
- Describe your experience with Glue job scheduling and orchestration. (advanced)
- How do you ensure security in Glue jobs that handle sensitive data? (advanced)
- Explain the concept of lazy evaluation in Glue. (advanced)
- How do you handle dependencies between Glue jobs in a workflow? (advanced)
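To ground the bookmark and DynamicFrame questions above, here is a minimal sketch of an AWS Glue PySpark job script. It assumes it runs inside a Glue job environment, where the awsglue libraries are provided; the catalog database, table, and S3 path are hypothetical placeholders.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job boilerplate: resolve the job name and initialize the job
# object, which is what tracks job-bookmark state between runs.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Data Catalog as a DynamicFrame. The transformation_ctx is the
# key that job bookmarks use to remember what has already been processed.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",          # hypothetical catalog database
    table_name="raw_orders",      # hypothetical catalog table
    transformation_ctx="source",
)

# DynamicFrame operations are lazily evaluated, like the Spark plans beneath
# them: drop a junk column and resolve an ambiguous column type.
cleaned = source.drop_fields(["_corrupt_record"]).resolveChoice(
    specs=[("amount", "cast:double")]
)

# Write the result to S3 as Parquet; the sink also carries a transformation_ctx.
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
    transformation_ctx="sink",
)

# Committing the job persists the bookmark state, enabling incremental
# processing on the next run.
job.commit()
```

Note that bookmarks only take effect when the job is configured with them enabled (the --job-bookmark-option job-bookmark-enable job parameter).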
As you prepare for interviews and explore opportunities in the glue job market in India, remember to showcase your expertise in glue technologies, related skills, and problem-solving abilities. With the right preparation and confidence, you can land a rewarding career in this dynamic and growing field. Good luck!