4 - 9 years
16 - 27 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities: 1. Strong experience as an AWS Data Engineer 2. Experience in Python/PySpark 3. Experience in EMR, Glue, Athena, Redshift, Lambda
Posted 1 month ago
4 - 6 years
10 - 14 Lacs
Bengaluru
Work from Office
Job Description: We are looking for a self-motivated, highly skilled and experienced AI/ML Engineer to join our growing team. You will be responsible for developing and deploying cutting-edge machine learning models to solve real-world problems. Your responsibilities will include data preparation, model training, evaluation, and deployment, as well as collaborating with data scientists and software engineers to ensure our AI solutions are effective and scalable. As a Machine Learning Engineer, you will develop and optimize pipelines for both inference and training. Your expertise with Amazon SageMaker will be crucial: you will build, train, and deploy machine learning and foundation models at scale on managed infrastructure. Experience Level: ~4 years. Key Responsibilities: Utilize AI solutions and tools provided by AWS to build segmentation models based on customer behavior and usage patterns. Automatically generate periodic reports. Develop functionality for defining reusable segmentation criteria tailored to marketing objectives. Required Skill Set: Hands-on experience with AWS S3, Lambda, Glue, SageMaker, Athena, QuickSight, etc. Python programming, a conceptual understanding of ML algorithms and deep learning techniques, and prior experience with AWS are required. Understanding of serverless architectures and event-driven processing flows. Prior experience working with AI solutions and tools provided by AWS is a must. Qualifications: Bachelor's or Master's degree in Computer Science or a related field. Prior industry experience with machine learning frameworks or projects is a must.
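For context on the deployment side of such a role, here is a minimal, hedged sketch of calling a model already hosted on a SageMaker endpoint from Python with boto3. The endpoint name and feature payload are hypothetical placeholders, not part of the posting.

```python
import json

import boto3

# Hypothetical endpoint name; replace with the endpoint deployed from your
# own SageMaker training job.
ENDPOINT_NAME = "customer-segmentation-endpoint"

runtime = boto3.client("sagemaker-runtime")

def classify_customer(features: list) -> dict:
    """Send one customer's usage features to a deployed SageMaker endpoint
    and return the model's segment prediction."""
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"instances": [features]}),
    )
    return json.loads(response["Body"].read())

# Example features: monthly spend, sessions per week, days since last activity
print(classify_customer([129.50, 4.0, 2.0]))
```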
Posted 1 month ago
2 - 6 years
12 - 16 Lacs
Pune
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Design and Develop Data Solutions: Design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift. Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems. Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services. Data Integration: Integrate data from multiple sources including relational databases, third-party APIs, and internal systems to create a unified data ecosystem. Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance. Automation and Optimization: Automate data pipeline processes to ensure efficiency Preferred technical and professional experience Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for Company products and Platform and customer-facing
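As an illustration of the Glue-based pipeline work these responsibilities describe, below is a minimal AWS Glue job script in PySpark. The catalog database, table, column mappings, and S3 path are hypothetical; a real job would use the tables defined by your crawlers.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from a (hypothetical) Data Catalog table populated by a crawler
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Project and rename columns on the way to the warehouse schema
cleaned = ApplyMapping.apply(
    frame=orders,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "double", "order_amount", "double"),
        ("ts", "string", "order_ts", "timestamp"),
    ],
)

# Land curated Parquet in S3, ready for a Redshift COPY or Athena queries
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)
job.commit()
```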
Posted 1 month ago
2 - 5 years
4 - 8 Lacs
Pune
Work from Office
About The Role The candidate must possess knowledge relevant to the functional area, and act as a subject matter expert in providing advice in the area of expertise, and also focus on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive, and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage career aspirations of direct reports. Communication skills are key here, to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives. Process Manager Roles and responsibilities: Designing and implementing scalable, reliable, and maintainable data architectures on AWS. Developing data pipelines to extract, transform, and load (ETL) data from various sources into AWS environments. Creating and optimizing data models and schemas for performance and scalability using AWS services like Redshift, Glue, Athena, etc. Integrating AWS data solutions with existing systems and third-party services. Monitoring and optimizing the performance of AWS data solutions, ensuring efficient query execution and data retrieval. Implementing data security and encryption best practices in AWS environments. Documenting data engineering processes, maintaining data pipeline infrastructure, and providing support as needed. Working closely with cross-functional teams including data scientists, analysts, and stakeholders to understand data requirements and deliver solutions. Technical and Functional Skills: Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments. Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc. Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java. Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake / Amazon Redshift. Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines. Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts. Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS. Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform). Ability to analyze complex technical problems and propose effective solutions. Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.
Posted 1 month ago
1 - 4 years
2 - 6 Lacs
Pune
Work from Office
About The Role The candidate must possess knowledge relevant to the functional area, and act as a subject matter expert in providing advice in the area of expertise, and also focus on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive, and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage career aspirations of direct reports. Communication skills are key here, to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives. Process Manager Roles and responsibilities: Designing and implementing scalable, reliable, and maintainable data architectures on AWS. Developing data pipelines to extract, transform, and load (ETL) data from various sources into AWS environments. Creating and optimizing data models and schemas for performance and scalability using AWS services like Redshift, Glue, Athena, etc. Integrating AWS data solutions with existing systems and third-party services. Monitoring and optimizing the performance of AWS data solutions, ensuring efficient query execution and data retrieval. Implementing data security and encryption best practices in AWS environments. Documenting data engineering processes, maintaining data pipeline infrastructure, and providing support as needed. Working closely with cross-functional teams including data scientists, analysts, and stakeholders to understand data requirements and deliver solutions. Technical and Functional Skills: Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments. Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc. Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java. Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake / Amazon Redshift. Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines. Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts. Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS. Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform). Ability to analyze complex technical problems and propose effective solutions. Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.
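Since the skill list above names Apache Airflow alongside AWS Glue, here is a hedged sketch of a daily Airflow DAG that triggers a Glue job through boto3. It assumes Airflow 2.4+ (for the `schedule` argument) and a hypothetical Glue job named `orders-etl`.

```python
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

def start_glue_job(**_):
    """Kick off a (hypothetical) Glue job and return its run id so the
    run can be traced from the Airflow logs."""
    glue = boto3.client("glue")
    run = glue.start_job_run(JobName="orders-etl")
    return run["JobRunId"]

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    trigger_etl = PythonOperator(
        task_id="trigger_glue_etl",
        python_callable=start_glue_job,
    )
```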
Posted 1 month ago
2 - 5 years
4 - 8 Lacs
Pune
Work from Office
About The Role Process Manager - AWS Data Engineer Mumbai/Pune | Full-time (FT) | Technology Services Shift Timings - EMEA (1pm-9pm) | Management Level - PM | Travel Requirements - NA The ideal candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The role enables one to identify discrepancies and propose optimal solutions by using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors. Process Manager Roles and responsibilities: Understand client requirements and provide effective and efficient solutions in AWS using Snowflake. Assemble large, complex sets of data that meet non-functional and functional business requirements. Use Snowflake / Redshift architecture and design to create data pipelines and consolidate data in the data lake and data warehouse. Demonstrated strength and experience in data modeling, ETL development and data warehousing concepts. Understand data pipelines and modern ways of automating data pipelines in the cloud. Test and clearly document implementations, so others can easily understand the requirements, implementation, and test conditions. Perform data quality testing and assurance as a part of designing, building and implementing scalable data solutions in SQL. Technical and Functional Skills: AWS Services: Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc. Programming Languages: Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java. Data Warehousing: Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake / Amazon Redshift. ETL Tools: Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines. Database Management: Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts. Big Data Technologies: Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS. Version Control: Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform). Problem-solving Skills: Ability to analyze complex technical problems and propose effective solutions. Communication Skills: Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders. Education and Experience: Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments. About eClerx eClerx is a global leader in productized services, bringing together people, technology and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry.
Our vision is to be the innovation partner of choice for technology, data analytics and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience. About eClerx Technology eClerx's Technology Group collaboratively delivers Analytics, RPA, AI, and Machine Learning digital technologies that enable our consultants to help businesses thrive in a connected world. Our consultants and specialists partner with our global clients and colleagues to build and implement digital solutions through a broad spectrum of activities. To know more about us, visit https://eclerx.com eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.
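One way the Snowflake consolidation work mentioned in this posting is commonly done from Python is a bulk COPY from an S3 external stage via the Snowflake connector. This is a sketch only: the account, warehouse, table, and stage names are placeholders, and the stage is assumed to already exist.

```python
import snowflake.connector

# Connection parameters are placeholders; in practice they come from a
# secrets manager, never from source code.
conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Bulk-load Parquet files staged in S3 through a pre-created external stage
    cur.execute("""
        COPY INTO raw_orders
        FROM @s3_orders_stage
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```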
Posted 1 month ago
10 - 20 years
20 - 30 Lacs
Hyderabad
Remote
Note: Looking for immediate joiners; timings 5:30 pm - 1:30 am IST (Remote). Project Overview: It is one of the workstreams of Project Acuity. The Client Data Platform includes a centralized web application for internal platform users across the Recruitment Business to support marketing and operational use cases. Building a database at the patient level will provide significant benefit to the Client's future reporting capabilities and engagement of external stakeholders. Role Scope / Deliverables: We are looking for an experienced AWS Data Engineer to join our dynamic team, responsible for developing, managing, and optimizing data architectures. The ideal candidate will have extensive experience in integrating large-scale datasets and building scalable, automated data pipelines. The candidate should also have experience with AWS ETL services (such as AWS Glue, Lambda, and Data Pipeline) to handle data processing and integration tasks effectively. Must Have Skills: Proficiency in programming languages such as Python, Scala, or similar. Strong experience in data classification, including the identification of PII data entities. Ability to leverage AWS services (e.g., SageMaker, Comprehend, Entity Resolution) to solve complex data-related challenges. Strong analytical and problem-solving skills, with the ability to innovate and develop new approaches to data engineering. Experience with AWS ETL services (such as AWS Glue, Lambda, and Data Pipeline) to handle data processing and integration tasks effectively. Experience with core AWS services including AWS IAM, VPC, EC2, S3, RDS, Lambda, CloudWatch, CloudTrail. Nice to Have Skills: Experience with data privacy and compliance requirements, especially related to PII data. Familiarity with advanced data indexing techniques, vector databases, and other technologies that improve the quality of outputs.
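For the PII identification requirement above, Amazon Comprehend offers a ready-made entity detector. A minimal sketch, with the sample text purely illustrative:

```python
import boto3

comprehend = boto3.client("comprehend")

def find_pii(text: str):
    """Return PII entities (type, confidence, matched span) found in text."""
    resp = comprehend.detect_pii_entities(Text=text, LanguageCode="en")
    return [
        (e["Type"], round(e["Score"], 3), text[e["BeginOffset"]:e["EndOffset"]])
        for e in resp["Entities"]
    ]

sample = "Patient John Doe, reachable at john.doe@example.com, enrolled on 2024-01-15."
for entity in find_pii(sample):
    print(entity)  # e.g. ('NAME', 0.999, 'John Doe'), ('EMAIL', 0.999, ...)
```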
Posted 1 month ago
11 - 20 years
20 - 35 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested. Relevant Experience: 11 - 20 Yrs Location: Pan India Job Description: Minimum 2 years of hands-on experience as a Solution Architect (AWS Databricks). If interested, please forward your updated resume to sankarspstaffings@gmail.com With Regards, Sankar G Sr. Executive - IT Recruitment
Posted 1 month ago
12 - 18 years
20 - 35 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Role & responsibilities Job Description: Cloud Data/Information Architect. Core skill set: implementing cloud data pipelines. Tools: AWS, Databricks, Snowflake, Python, Fivetran. Requirements: Candidate must be experienced working in projects involving AWS, Databricks, Python, and AWS-native data architecture and services like S3, Lambda, Glue, EMR, Databricks, Spark. Experience with handling the AWS Cloud platform. Responsibilities: Identify and define foundational business data domains and data domain elements. Identify and collaborate with data product owners and stewards in business circles to capture data definitions. Drive data source/lineage reporting and reference data needs identification. Recommend data extraction and replication patterns. Experience in data migration from big data to AWS Cloud on S3, Snowflake, Redshift. Understands where to obtain the information needed to make appropriate decisions. Demonstrates the ability to break down a problem into manageable pieces and implement effective, timely solutions. Identifies the problem versus the symptom. Manages problems that require the involvement of others to solve. Reaches sound decisions quickly. Carefully evaluates alternative risks and solutions before taking action. Optimizes the use of all available resources. Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices and procedures of the corporation, department and business unit. Skills: Hands-on experience with AWS and Databricks, especially S3, Snowflake, Python. Experience in shell scripting. Exceptionally strong analytical and problem-solving skills. Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses. Strong experience with relational databases and data access methods, especially SQL. Excellent collaboration and cross-functional leadership skills. Excellent communication skills, both written and verbal. Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment. Ability to leverage data assets to respond to complex questions that require timely answers. Has working knowledge of migrating relational and dimensional databases to the AWS Cloud platform.
Posted 1 month ago
8 - 12 years
20 - 25 Lacs
Gandhinagar
Remote
Requirement: 8+ years of professional experience as a data engineer and 2+ years of professional experience as a senior data engineer Must have strong working experience in Python and its various data analysis packages (Pandas / NumPy) Must have a strong understanding of prevalent cloud ecosystems and experience in one of the cloud platforms AWS / Azure / GCP Must have strong working experience in one of the leading MPP databases: Snowflake / Amazon Redshift / Azure Synapse / Google BigQuery Must have strong working experience in one of the leading data orchestration tools in the cloud: Azure Data Factory / AWS Glue / Apache Airflow Must have experience working with Agile methodologies, Test Driven Development, and implementing CI/CD pipelines using one of the leading services: GitLab / Azure DevOps / Jenkins / AWS CodePipeline / Google Cloud Build Must have Data Governance / Data Management / Data Quality project implementation experience Must have experience in big data processing using Spark Must have strong experience with SQL databases (SQL Server, Oracle, Postgres etc.) Must have stakeholder management experience and very good communication skills Must have working experience on end-to-end project delivery including requirement gathering, design, development, testing, deployment, and warranty support Must have working experience with various testing levels, such as unit testing, integration testing and system testing Working experience with large, heterogeneous datasets in building and optimizing data pipelines and pipeline architectures Nice to have Skills: Working experience in Databricks notebooks and managing Databricks clusters Experience in a data modelling tool such as Erwin or ER Studio Experience in one of the data architectures, such as Data Mesh or Data Fabric Has handled real-time or near-real-time data Experience in one of the leading reporting & analysis tools, such as Power BI, Qlik, Tableau or Amazon QuickSight Working experience with API integration General insurance / banking / finance domain understanding
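To ground the Pandas/NumPy and data-quality requirements, here is a small, hedged sketch of the kind of pre-load quality checks a pipeline might run; the column names and sample batch are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def quality_report(df: pd.DataFrame, key: str) -> dict:
    """Basic data-quality checks of the kind a pipeline might run
    before loading a batch into the warehouse."""
    return {
        "rows": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "null_pct": (df.isna().mean() * 100).round(2).to_dict(),
        "negative_amounts": int((df["amount"] < 0).sum()),
    }

# Hypothetical batch with one duplicate key, one negative amount, one null
batch = pd.DataFrame({
    "order_id": ["A1", "A2", "A2", "A3"],
    "amount": [100.0, -5.0, 250.0, np.nan],
})
print(quality_report(batch, key="order_id"))
```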
Posted 1 month ago
3 - 5 years
7 - 11 Lacs
Gurugram
Remote
GroundTruth is looking for a DevOps Engineer who can join us within 30 days. You will: Increase velocity of engineering teams by creating/deploying new stacks, services, and automations Work on projects to improve tooling, efficiency, and standardize/automate approaches (DRY) for commonly-used stacks/services Manage user access to services/systems via tools such as AWS IAM, Terraform, and SaltStack Participate in on-call rotation to handle critical and/or service-impacting issues Seek pragmatic opportunities to improve our infrastructure, processes, and operational activities Plan, provision, operate, and monitor cloud infrastructure for multiple areas of the business that you support. Design and assist with development and integration of monitoring dashboards, alerting solutions, and DevOps tools. Collaborate with Software Engineering to plan feature releases and to monitor and support applications including cost analysis and controls. Respond to system, application, security, and customer incidents, conducting cause and impact analysis. Participate in on-call support rotation You have: This is our ideal wish list, but most people don't check every box on every job description. So, if you meet most of the criteria below, are excited about the opportunity, and are willing to learn, we'd love to hear from you. Experience working in a DevOps role supporting engineering teams A 4-year degree in Computer Science or a related field and 3+ years of experience in software engineering, OR 6+ years of experience in software development with no degree Experience working with multiple AWS technologies including IAM, EC2, ECS, S3, RDS, EMR, Glue, or similar Experience working for a geographically distributed company Knowledge of CI/CD tools and integration along with container and other microservice-related technologies Proficiency with GitHub, GitHub Actions, AWS CLI, and troubleshooting web services and distributed systems Experience in one or more of the following: Python, Bash/Shell, Go, Terraform (or other IaC tools) Experience with automation tools (SaltStack, Chef, Ansible) Experience with IaC tools (e.g. Terraform) Experience working with cloud (AWS, Azure, GCP), preferably with multi-region tenancy Experience with Linux administration Experience with shell scripting/cron Nice to have: Python3 coding experience (or similar) Automation of cloud deployments/infra management Experience with containerization (Docker, Kubernetes, etc.) Experience with networking setup (on-prem or virtual) Experience with monitoring/alerting tools (e.g. CloudWatch alarms, Graphite, Prometheus, etc.) What we offer: At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love. Parental leave - Maternity and Paternity Flexible Time Off (Earned Leaves, Sick Leaves, Birthday Leave, Bereavement Leave & Company Holidays) In-office daily catered lunch Fully stocked snacks/beverages Health cover for any hospitalization, covering both nuclear family and parents Tele-med for free doctor consultation, discounts on health checkups and medicines Wellness/Gym reimbursement Pet expense reimbursement Childcare expenses and reimbursements Employee referral program Education reimbursement program Skill development program Cell phone reimbursement (Mobile Subsidy program) Internet reimbursement/postpaid cell phone bill/or both
Birthday treat reimbursement Employee Provident Fund Scheme offering different tax-saving options such as Voluntary Provident Fund and employee and employer contribution up to 12% of basic salary Creche reimbursement Co-working space reimbursement National Pension System employer match Meal card for tax benefit Special benefits on salary account Interested candidates can share an updated resume at laxmi.pal@groundtruth.com, or if you are an immediate joiner with relevant experience, please connect on 9220900537
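As a small illustration of the monitoring/alerting work this role describes, below is a hedged boto3 sketch that creates a CloudWatch CPU alarm. The instance ID and SNS topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on sustained high CPU for a (hypothetical) EC2 instance; the SNS
# topic ARN stands in for the on-call notification channel.
cloudwatch.put_metric_alarm(
    AlarmName="web-prod-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                # 5-minute windows
    EvaluationPeriods=3,       # must breach 3 consecutive windows
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-alerts"],
)
```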
Posted 1 month ago
4 - 9 years
12 - 16 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities Urgent hiring for one of the reputed MNCs. Immediate joiners only. Female candidates only. Exp: 4-9 years. Bangalore / Hyderabad / Pune. As a Python Developer with AWS, you will be responsible for developing cloud-based applications, building data pipelines, and integrating with various AWS services. You will work closely with DevOps, Data Engineering, and Product teams to design and deploy solutions that are scalable, resilient, and efficient in an AWS cloud environment. Key Responsibilities: Python Development: Design, develop, and maintain applications and services using Python in a cloud environment. AWS Cloud Services: Leverage AWS services such as EC2, S3, Lambda, RDS, DynamoDB, and API Gateway to build scalable solutions. Data Pipelines: Develop and maintain data pipelines, including integrating data from various sources into AWS-based storage solutions. API Integration: Design and integrate RESTful APIs for application communication and data exchange. Cloud Optimization: Monitor and optimize cloud resources for cost efficiency, performance, and security. Automation: Automate workflows and deployment processes using AWS Lambda, CloudFormation, and other automation tools. Security & Compliance: Implement security best practices (e.g., IAM roles, encryption) to protect data and maintain compliance within the cloud environment. Collaboration: Work with DevOps, Cloud Engineers, and other developers to ensure seamless deployment and integration of applications. Continuous Improvement: Participate in the continuous improvement of development processes and deployment practices. Required Qualifications: Python Expertise: Strong experience in Python programming, including using libraries like Pandas, NumPy, Boto3 (AWS SDK for Python), and frameworks like Flask or Django. AWS Knowledge: Hands-on experience with AWS services such as S3, EC2, Lambda, RDS, DynamoDB, CloudFormation, and API Gateway. Cloud Infrastructure: Experience in designing, deploying, and maintaining cloud-based applications using AWS. API Development: Experience in designing and developing RESTful APIs, integrating with external services, and managing data exchanges. Automation & Scripting: Experience with automation tools and scripts (e.g., using AWS Lambda, Boto3, CloudFormation). Version Control: Proficiency with version control tools such as Git. CI/CD Pipelines: Experience building and maintaining CI/CD pipelines for cloud-based applications. Preferred candidate profile: Familiarity with serverless architectures using AWS Lambda and other AWS serverless services. AWS Certification (e.g., AWS Certified Developer Associate, AWS Certified Solutions Architect Associate) is a plus. Knowledge of containerization tools like Docker and orchestration platforms such as Kubernetes. Experience with Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
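A minimal sketch of the Lambda-plus-DynamoDB pattern this posting centers on: an API Gateway-invoked handler persisting a record. The table name and event shape are assumptions for illustration.

```python
import json
from decimal import Decimal

import boto3

# Hypothetical table name; in a real deployment it would come from an
# environment variable set by the IaC stack.
TABLE = boto3.resource("dynamodb").Table("orders")

def handler(event, context):
    """API Gateway -> Lambda -> DynamoDB: persist one order from the request body."""
    order = json.loads(event["body"])
    TABLE.put_item(Item={
        "order_id": order["order_id"],
        "amount": Decimal(str(order["amount"])),  # boto3 stores numbers as Decimal
    })
    return {"statusCode": 201, "body": json.dumps({"saved": order["order_id"]})}
```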
Posted 1 month ago
6 - 11 years
15 - 30 Lacs
Bengaluru, Hyderabad, Gurgaon
Work from Office
We're Hiring: Sr. AWS Data Engineer – GSPANN Technologies Locations: Bangalore, Pune, Hyderabad, Gurugram Experience: 6+ Years | Immediate Joiners Only Looking for experts in: AWS Services: Glue, Redshift, S3, Lambda, Athena Big Data: Spark, Hadoop, Kafka Languages: Python, SQL, Scala ETL & Data Engineering Apply now: heena.ruchwani@gspann.com #AWSDataEngineer #HiringNow #DataEngineering #GSPANN
Posted 1 month ago
6 - 9 years
8 - 13 Lacs
Chennai, Mumbai
Work from Office
About the Role: Grade Level (for internal use): 10 S&P Dow Jones Indices The Role: S&P Dow Jones Indices, a global leader in providing investable and benchmark indices to the financial markets, is looking for a Java Application Developer to join our technology team. The Location: Mumbai/Hyderabad/Chennai The Team: You will be part of a global technology team comprising Dev, QA and BA teams, and will be responsible for analysis, design, development and testing. The Impact: You will be working on one of the core technology platforms responsible for the end-of-day calculation as well as dissemination of index values. What's in it for you: You will have the opportunity to work on enhancements to the existing index calculation system as well as implement new methodologies as required. Responsibilities: Design and development of Java applications for SPDJI web sites and their feeder systems. Participate in multiple software development processes including Coding, Testing, Debugging & Documentation. Develop software applications based on clear business specifications. Work on new initiatives and support existing Index applications. Perform Application & System Performance tuning and troubleshoot performance issues. Develop web-based applications and build rich front-end user interfaces. Build applications with object-oriented concepts and apply design patterns. Integrate in-house applications with various vendor software platforms. Set up development environment / sandbox for application development. Check in application code changes into the source repository. Perform unit testing of application code and fix errors. Interface with databases to extract information and build reports. Effectively interact with customers, business users and IT staff. What we're looking for: Basic Qualification: Bachelor's degree in Computer Science, Information Systems or Engineering is required, or in lieu, a demonstrated equivalence in work experience. 6 to 9 years of IT experience in application development and support. Strong experience with Java, J2EE, JMS & EJBs Advanced SQL & basic PL/SQL programming Basic networking knowledge / Unix scripting Exposure to UI technologies like React JS Basic understanding of AWS cloud (EC2, EMR, Lambda, S3, Glue, etc.) Excellent communication and interpersonal skills are essential, with strong verbal and writing proficiencies. Preferred Qualification: Experience working with large datasets in Equity, Commodities, Forex, Futures and Options asset classes. Experience with Index/Benchmarks or Asset Management or Trading platforms. Basic knowledge of User Interface design & development using jQuery, HTML5 & CSS.
Posted 2 months ago
2 - 5 years
4 - 7 Lacs
Pune
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Design and Develop Data Solutions: Design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift. Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems. Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services. Data Integration: Integrate data from multiple sources including relational databases, third-party APIs, and internal systems to create a unified data ecosystem. Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance. Automation and Optimization: Automate data pipeline processes to ensure efficiency Preferred technical and professional experience Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for Company products and Platform and customer-facing
Posted 2 months ago
2 - 5 years
4 - 7 Lacs
Pune
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Design and Develop Data Solutions: Design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift. Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems. Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services. Data Integration: Integrate data from multiple sources including relational databases, third-party APIs, and internal systems to create a unified data ecosystem. Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance. Automation and Optimization: Automate data pipeline processes to ensure efficiency Preferred technical and professional experience Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for Company products and Platform and customer-facing
Posted 2 months ago
4 - 6 years
6 - 8 Lacs
Pune
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Design and Develop Data Solutions: Design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift. Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems. Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services. Data Integration: Integrate data from multiple sources including relational databases, third-party APIs, and internal systems to create a unified data ecosystem. Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance. Automation and Optimization: Automate data pipeline processes to ensure efficiency Preferred technical and professional experience Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for Company products and Platform and customer-facing
Posted 2 months ago
6 - 10 years
20 - 32 Lacs
Chennai, Hyderabad, Noida
Hybrid
Role & responsibilities Primary Skill: IICS & AWS Glue 1) Develop IICS/Informatica jobs based on requirements. 2) 5+ years in Glue development 3) Manage L3 issues for existing IICS/Informatica applications 4) Enhance existing IICS/Informatica applications for performance or business improvement 5) Create Tableau Dashboards that interact with various data sources based on requirements Preferred candidate profile: Immediate joiners Location: Hyderabad, Chennai, Noida, Pune
Posted 2 months ago
2 - 6 years
12 - 16 Lacs
Pune
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Design and Develop Data Solutions: Design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift. Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems. Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services. Data Integration: Integrate data from multiple sources including relational databases, third-party APIs, and internal systems to create a unified data ecosystem. Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance. Automation and Optimization: Automate data pipeline processes to ensure efficiency Preferred technical and professional experience Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for Company products and Platform and customer-facing
Posted 2 months ago
5 - 8 years
13 - 23 Lacs
Pune, Hyderabad
Hybrid
Greetings of the Day !! We at Tech Mahindra are hiring for skilled Python Developers. Below is the detailed job description: Job Title: Senior Python Developer Experience: 5 to 8 Years Location: Hyderabad and Pune JD (items marked "Need" are required): Python - Need AWS - Need Serverless (specific services: Lambda, Glue, Athena) - Need IaC/CDK - Need Unit testing - Need Documentation skills (need to document how things were done for others) - Need Data Integration Concepts - Need Data Validation Performance (runtime) Error handling/recovery Mindset: Learning Agility: Quickly adapts to new or changing demands Remains open to new ideas and approaches to work Persistence: Works independently when needed to achieve results Demonstrates persistence in the face of roadblocks Stays focused under pressure Service Orientation: Works to identify the underlying causes of complex problems Works to identify ideal solutions Communicates complex ideas effectively to others
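For the Athena requirement in this JD, a common pattern is starting a query with boto3 and polling for completion. A sketch under assumed names (the database, table, and results bucket are placeholders):

```python
import time

import boto3

athena = boto3.client("athena")

def run_athena_query(sql: str, database: str, output: str) -> str:
    """Start an Athena query and poll until it finishes; return the execution id."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output},
    )["QueryExecutionId"]
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            if state != "SUCCEEDED":
                raise RuntimeError(f"Query {qid} ended in state {state}")
            return qid
        time.sleep(2)  # simple polling; production code would back off

qid = run_athena_query(
    "SELECT order_id, order_amount FROM orders LIMIT 10",
    database="sales_db",
    output="s3://example-athena-results/",
)
```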
Posted 2 months ago
2 - 6 years
4 - 8 Lacs
Pune
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development, design of application, provide regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvements by testing the build solution and working under an agile framework. Discover and implement the latest technologies trends to maximize and build creative solutions Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Design and Develop Data Solutions, Design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift. Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems. Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services. Data Integration: Integrate data from multiple sources including relational databases, third-party APIs, and internal systems to create a unified data ecosystem. Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance. Automation and Optimization: Automate data pipeline processes to ensure efficiency Preferred technical and professional experience Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams including application development, enterprise architecture, testing services, network engineering, Good to have detection and prevention tools for Company products and Platform and customer-facing
Posted 2 months ago
3 - 5 years
6 - 15 Lacs
Pune
Work from Office
Sr. Software Engineer with advanced Python for product development in ML & Generative AI. Hands-on with FastAPI servers in a production environment. AI Engineers to design and develop a high-quality Generative AI platform on AWS.
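A minimal FastAPI skeleton of the sort this posting implies, with a health check and a stubbed generation route; the route names and payload shape are illustrative, and the model call is left as a placeholder.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="genai-platform")

class Prompt(BaseModel):
    text: str
    max_tokens: int = 256

@app.get("/health")
def health() -> dict:
    """Liveness probe for the load balancer."""
    return {"status": "ok"}

@app.post("/generate")
def generate(prompt: Prompt) -> dict:
    # Placeholder: a real service would call the model backend here
    # (e.g. a SageMaker or Bedrock endpoint).
    return {"echo": prompt.text, "max_tokens": prompt.max_tokens}

# Run locally with: uvicorn app:app --reload
```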
Posted 2 months ago
8 - 12 years
27 - 32 Lacs
Kochi
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for Source to Target and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Developed PySpark code for AWS Glue jobs and for EMR. Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR, MapR distribution. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (just like a rules engine). Developed Hadoop streaming jobs using Python for integrating Python API supported applications. Developed Python code to gather data from HBase and designed the solution to implement using PySpark. Apache Spark DataFrames/RDDs were used to apply business transformations, and Hive Context objects were utilized to perform read/write operations. Re-wrote some Hive queries in Spark SQL to reduce the overall batch time Preferred technical and professional experience Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala
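On the "rewrite Hive queries in Spark SQL" point, the sketch below shows the general shape of such a port: the same SQL runs through a Hive-enabled SparkSession, with caching for reuse across downstream steps. Table and path names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("hive-to-sparksql")
    .enableHiveSupport()
    .getOrCreate()
)

# The same aggregation a HiveQL batch job might run, executed through
# Spark SQL so it benefits from Spark's in-memory execution.
daily_totals = spark.sql("""
    SELECT order_date, SUM(order_amount) AS total_amount
    FROM sales_db.orders
    GROUP BY order_date
""")

# Cache when several downstream steps reuse the result - a common source
# of batch-time savings over re-running separate Hive queries.
daily_totals.cache()
daily_totals.write.mode("overwrite").parquet("s3://example-bucket/daily_totals/")
```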
Posted 2 months ago
5 - 10 years
5 - 15 Lacs
Bengaluru, Bangalore Rural
Hybrid
Python development, backend experience. Strong knowledge of AWS services (Glue, Lambda, DynamoDB, S3, PySpark). Excellent debugging skills to resolve production issues. Experience with MySQL, NoSQL databases.
Posted 2 months ago
4 - 9 years
12 - 16 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework using Python or Scala and Big Data technologies for various use cases built on the platform Experience in developing streaming pipelines Experience working with Hadoop / AWS ecosystem components to implement scalable solutions to meet the ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total 5-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on AWS; Exposure to streaming solutions and message brokers like Kafka technologies. Experience in AWS EMR / AWS Glue / Databricks, Amazon Redshift, DynamoDB Good to excellent SQL skills Preferred technical and professional experience Certification in AWS and Databricks or Cloudera Spark Certified developers
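For the streaming-pipeline experience mentioned above, here is a hedged PySpark Structured Streaming sketch reading JSON orders from Kafka into Parquet on S3. Broker, topic, schema, and paths are assumptions, and the job needs the Spark Kafka connector package supplied at submit time.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Hypothetical message schema for the orders topic
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read a (hypothetical) orders topic; the broker address is a placeholder
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka values arrive as bytes; decode and parse the JSON payload
parsed = (
    raw.select(from_json(col("value").cast("string"), schema).alias("o"))
    .select("o.*")
)

# Land micro-batches as Parquet on S3; checkpointing makes the stream restartable
query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3://example-bucket/orders_stream/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
    .start()
)
query.awaitTermination()
```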
Posted 2 months ago
In recent years, the demand for professionals with expertise in glue technologies has been on the rise in India. Glue jobs involve working with tools and platforms that help connect various systems and applications together seamlessly. This article aims to provide an overview of the glue job market in India, including top hiring locations, average salary ranges, career progression, related skills, and interview questions for aspiring job seekers.
Here are 5 major cities in India actively hiring for glue roles: 1. Bangalore 2. Pune 3. Hyderabad 4. Chennai 5. Mumbai
The estimated salary range for glue professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals with several years of experience can earn between INR 12-18 lakhs per annum.
In the field of glue technologies, a typical career progression may include roles such as: - Junior Developer - Senior Developer - Tech Lead - Architect
Apart from expertise in glue technologies, professionals in this field are often expected to have or develop skills in: - Data integration - ETL (Extract, Transform, Load) processes - Database management - Programming languages (e.g., Python, Java)
Here are 25 interview questions for glue roles: - What is Glue in the context of data integration? (basic) - Explain the difference between ETL and ELT. (basic) - How would you handle data quality issues in a glue job? (medium) - Can you explain how Glue works with Apache Spark? (medium) - What is the significance of schema evolution in Glue? (medium) - How do you optimize Glue jobs for performance? (medium) - Describe a scenario where you had to troubleshoot a failed Glue job. (medium) - What is a bookmark in Glue and how is it used? (medium) - How does Glue handle schema inference? (medium) - Have you worked with AWS Glue DataBrew? If so, explain your experience. (medium) - Explain how Glue handles schema evolution. (advanced) - How does Glue support job bookmarks for incremental processing? (advanced) - What are the differences between Glue ETL and Glue DataBrew? (advanced) - How do you handle nested JSON structures in Glue transformations? (advanced) - Explain a complex Glue job you have designed and implemented. (advanced) - How does Glue handle dynamic frame operations? (advanced) - What is the role of a Glue DynamicFrame in data transformation? (advanced) - How do you handle schema changes in Glue jobs? (advanced) - Explain how Glue can be integrated with other AWS services. (advanced) - What are the limitations of Glue that you have encountered in your projects? (advanced) - How do you monitor and debug Glue jobs in production environments? (advanced) - Describe your experience with Glue job scheduling and orchestration. (advanced) - How do you ensure security in Glue jobs that handle sensitive data? (advanced) - Explain the concept of lazy evaluation in Glue. (advanced) - How do you handle dependencies between Glue jobs in a workflow? (advanced)
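Several of these questions (bookmarks, incremental processing, DynamicFrames) come together in one small pattern, sketched below: a Glue read/write pair keyed with `transformation_ctx` so job bookmarks can track progress. It assumes the standard Glue job bootstrap (a `glue_context`, as in the Glue job skeleton earlier on this page) and hypothetical table and bucket names.

```python
# Inside a Glue job script: job bookmarks record which source data a run has
# already processed, so each run picks up only new files/partitions.
# Bookmarks apply only when (a) the job has bookmarks enabled and
# (b) each stateful read/write passes a stable transformation_ctx.
incremental = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",
    table_name="raw_orders",
    transformation_ctx="read_raw_orders",  # key the bookmark state is stored under
)

glue_context.write_dynamic_frame.from_options(
    frame=incremental,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
    transformation_ctx="write_orders",
)
```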
As you prepare for interviews and explore opportunities in the glue job market in India, remember to showcase your expertise in glue technologies, related skills, and problem-solving abilities. With the right preparation and confidence, you can land a rewarding career in this dynamic and growing field. Good luck!