2 - 7 years
3 - 6 Lacs
Mumbai
Work from Office
1. Good knowledge of Amazon S3.
2. Data lake concepts and performance optimization in data lakes.
3. Data warehouse concepts and Amazon Redshift.
4. Athena and Redshift Spectrum.
5. Strong understanding of Glue concepts and the Glue Data Catalog; experienced in implementing end-to-end ETL solutions using AWS Glue with a variety of source and target systems.
6. Must be very strong in PySpark; able to implement all standard and complex ETL transformations using PySpark, and to apply performance optimization techniques using Spark and Spark SQL.
7. Good knowledge of SQL is a must; should be able to implement all standard data transformations using SQL, and to analyze data stored in the Redshift data warehouse and data lakes.
8. Good understanding of Athena and Redshift Spectrum.
9. Understanding of RDS.
10. Understanding of Database Migration Service (DMS) and experience migrating from diverse databases.
11. Understanding of writing Lambda functions and layers for connecting to various services.
12. Understanding of CloudWatch, CloudWatch Events, and EventBridge, as well as orchestration tools in AWS such as Step Functions and Apache Airflow.
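The Glue and PySpark skills listed above lend themselves to a short illustration. Below is a minimal sketch of a Glue PySpark ETL job: read from the Glue Data Catalog, apply a standard ApplyMapping transformation, and write Parquet to S3. It assumes a standard Glue job environment; the database, table, bucket, and column names are hypothetical.

```python
# A minimal AWS Glue PySpark job: read from the Glue Data Catalog,
# apply a standard transformation, and write Parquet to S3.
# Database, table, bucket, and column names are hypothetical.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source: a catalogued table (e.g. crawled from S3 or RDS).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Standard transformation: rename/cast columns via ApplyMapping.
mapped = ApplyMapping.apply(
    frame=orders,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
        ("order_ts", "string", "order_ts", "timestamp"),
    ],
)

# Target: Parquet on S3, queryable via Athena/Redshift Spectrum
# once catalogued.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-datalake/curated/orders/"},
    format="parquet",
)
job.commit()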
Posted 3 months ago
4 - 6 years
6 - 10 Lacs
Pune
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to build creative solutions.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Design and develop data solutions: design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift.
- Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems.
- Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services.
- Data integration: integrate data from multiple sources, including relational databases, third-party APIs, and internal systems, to create a unified data ecosystem; work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance.
- Automation and optimization: automate data pipeline processes to ensure efficiency.
Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
Posted 3 months ago
12 - 17 years
35 - 60 Lacs
Chennai, Bengaluru
Hybrid
At ZoomInfo, we encourage creativity, value innovation, demand teamwork, expect accountability, and cherish results. We value your take-charge, take-initiative, get-stuff-done attitude and will help you unlock your growth potential. One great choice can change everything. Thrive with us at ZoomInfo.
ZoomInfo is a rapidly growing data-driven company, and as such we understand the importance of a comprehensive and solid data solution to support decision making in our organization. Our vision is to have a consistent, democratized, and accessible single source of truth for all company data analytics and reporting. Our goal is to improve decision-making processes by having the right information available when it is needed. As a Principal Software Engineer in our Data Platform infrastructure team, you'll have a key role in building and designing the strategy of our Enterprise Data Engineering group.
What You'll Do:
- Design and build a highly scalable data platform to support data pipelines for diversified and complex data flows.
- Track and identify relevant new technologies in the market and push their implementation into our pipelines through research and POC activities.
- Deliver scalable, reliable, and reusable data solutions.
- Lead, build, and continuously improve our data gathering, modeling, and reporting capabilities and self-service data platforms.
- Work closely with Data Engineers, Data Analysts, Data Scientists, Product Owners, and Domain Experts to identify data needs.
- Develop processes and tools to monitor, analyze, maintain, and improve data operation, performance, and usability.
What You Bring:
- A relevant Bachelor's degree or other equivalent Software Engineering background.
- 12+ years of experience as an infrastructure / data platform / big data software engineer.
- Experience with AWS/GCP cloud services such as GCS/S3, Lambda/Cloud Functions, EMR/Dataproc, Glue/Dataflow, Athena.
- IaC design and hands-on experience.
- Familiarity designing CI/CD pipelines with Jenkins, GitHub Actions, or similar tools.
- Experience in designing, building, and maintaining enterprise systems in a big data environment on public cloud.
- Strong SQL abilities and hands-on experience with SQL, performing analysis and performance optimizations.
- Hands-on experience in Python or an equivalent programming language.
- Experience administering data warehouse solutions (such as BigQuery, Redshift, or Snowflake).
- Experience with data modeling, data catalog concepts, data formats, and data pipeline/ETL design, implementation, and maintenance.
- Experience with Airflow and dbt - an advantage.
- Experience with Kubernetes using GKE or EKS - an advantage.
- Experience with development practices such as Agile and TDD - an advantage.
Posted 3 months ago
3 - 6 years
8 - 15 Lacs
Chennai
Remote
Dear Candidate,
Greetings of the day!
My name is Amutha Valli, and I'm reaching out to you regarding an exciting opportunity with TechMango. You can connect with me by email at amutha.m@techmango.net or on LinkedIn: https://www.linkedin.com/in/amutha-valli-32611b289/
Job Title: BI Engineer
Location: Remote
Experience: 3+ years
Mandatory Skills: AWS, BI, QuickSight, Python, ETL, SQL
Job Description: We are seeking a highly skilled and experienced BI Engineer for AWS to join our team and play a crucial role in designing and implementing data analytics solutions on the Amazon Web Services (AWS) platform. The ideal candidate will have a deep understanding of data analytics technologies and AWS services, and a track record of architecting scalable and efficient data solutions to address complex business challenges.
Responsibilities:
- Design, develop, and maintain BI solutions on AWS, including data pipelines, data warehouses, and reporting dashboards.
- Collaborate with business stakeholders to understand BI requirements and translate them into technical solutions.
- Assist in developing and maintaining ETL processes to extract, transform, and load data from various sources into AWS services such as S3, Redshift, or RDS.
- Design and implement scalable data models optimized for reporting and analytics needs.
- Create and optimize SQL queries and scripts for data manipulation and transformation.
Requirements:
- Familiarity with programming/scripting languages like Python and SQL, and experience with data manipulation and transformation.
- Must have experience in QuickSight.
- Excellent problem-solving skills with the ability to design and implement creative solutions for complex data challenges.
- Strong communication skills to effectively collaborate with technical and non-technical teams.
- Experience working with Agile methodologies and version control systems.
- Proven ability to work independently and manage multiple priorities effectively.
Posted 3 months ago
5 - 10 years
18 - 30 Lacs
Hyderabad
Work from Office
Job Overview
As a Data Engineer, you will be responsible for designing, developing, and optimizing data pipelines and workflows that handle large volumes of data. You will work closely with our data science and analytics teams to ensure data is readily available for analysis and reporting. This role requires expertise in AWS services, particularly Kinesis, Glue, Lambda, Step Functions, and Redshift. Additionally, the ideal candidate should have strong SQL skills and experience with one of the following programming languages, with a preference for Node.js.
Role & responsibilities:
- Design, develop, and maintain data pipelines: create scalable and efficient data pipelines using AWS services like Kinesis, Glue, Lambda, Step Functions, and Redshift.
- Data integration: integrate data from various sources into a centralized data warehouse, ensuring data consistency, quality, and security.
- ETL processes: develop and manage ETL processes using AWS Glue and other relevant tools to transform and load data into Redshift or other databases.
- Real-time data processing: implement real-time data processing solutions using AWS Kinesis and Lambda to handle streaming data (see the sketch after this listing).
- Automation: automate data workflows and processes using AWS Step Functions and other orchestration tools.
- Performance optimization: optimize SQL queries and database performance for efficient data retrieval and reporting.
- Collaboration: work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver data solutions that meet their needs.
- Data quality and governance: ensure high standards of data quality and implement data governance best practices across all data pipelines.
- Documentation: maintain a clear and comprehensive data dictionary along with documentation of data pipelines, architectures, and processes.
Preferred candidate profile:
- 4+ years of experience in data engineering or a related field.
- Hands-on experience with AWS services, specifically Kinesis, Glue, Lambda, Step Functions, and Redshift.
- Expert-level SQL knowledge, with a proven track record of writing complex queries and optimizing database performance.
- Proficiency in a programming language such as Python, Scala, or Node.js.
- Strong understanding of ETL processes, data warehousing concepts, and real-time data processing.
- Experience with data modeling, schema design, and database optimization.
- Ability to work with large datasets and troubleshoot data issues.
- Familiarity with data governance, data quality, and security best practices.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills, with the ability to explain technical concepts to non-technical stakeholders.
- Ability to work independently and as part of a team in a fast-paced environment.
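To make the real-time Kinesis + Lambda bullet above concrete, here is a minimal sketch of a Python Lambda handler consuming a Kinesis stream (Python rather than the listing's preferred Node.js, to keep one language across this page's examples). The event shape is the standard Kinesis event source mapping; the field names and the downstream step are hypothetical.

```python
# A minimal sketch of real-time processing with Kinesis + Lambda:
# decode Kinesis records and apply a basic quality gate before
# loading downstream. Field names are hypothetical.
import base64
import json

def handler(event, context):
    """Triggered by a Kinesis event source mapping."""
    valid_records = []
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded.
        payload = base64.b64decode(record["kinesis"]["data"])
        order = json.loads(payload)
        # Basic quality gate before loading into Redshift/S3.
        if order.get("amount", 0) > 0:
            valid_records.append(order)
    # In a real pipeline these records would be written to S3 via
    # Kinesis Data Firehose or batched into Redshift with COPY.
    print(f"processed {len(valid_records)} valid records")
    return {"valid": len(valid_records)}
```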
Posted 3 months ago
12 - 22 years
30 - 45 Lacs
Pune, Bengaluru, Hyderabad
Work from Office
Job opportunity | Big Data Architect | MNC
Required years of experience: 12+ years
Job locations: Bangalore, Hyderabad, Trivandrum, Gurugram, Kochi, Pune
Shift: some overlap with UK timings (2-11 PM IST)
Required Skills & Qualifications:
- 5+ years of experience in Big Data Engineering.
- Strong expertise in AWS services like DMS, Kinesis, Athena, Glue, Lambda, S3, EMR.
- Hands-on experience with Spark optimizations for performance improvements.
- Proficiency in SQL and Oracle query tuning for high-performance data retrieval.
- Experience in Big Data frameworks (Hadoop, Spark, Hive, Presto, Athena).
- Good understanding of Kafka/Debezium for data streaming.
- Exposure to CI/CD automation and AWS DevOps tools.
- Strong problem-solving and troubleshooting skills.
Posted 3 months ago
7 - 12 years
10 - 15 Lacs
Hyderabad
Work from Office
US shift (until 11 PM IST)
Primary skills:
- Data architecture for all AWS data services
- ETL processes to load data into the data warehouse
- Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation
- Writing SQL queries to support data analysis and reporting
- Reports and dashboards to visualize data
We are seeking a developer with good experience in Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation, and other AWS serverless resources, who can optimize data models for performance and efficiency and write SQL queries to support data analysis and reporting.
Responsibilities:
- Design, implement, and maintain the data architecture for all AWS data services
- Work with stakeholders to identify business needs and requirements for data-related projects
- Design and implement ETL processes to load data into the data warehouse
Posted 3 months ago
8 - 13 years
4 - 8 Lacs
Bengaluru
Work from Office
We are seeking a developer with good experience in Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation, and other AWS serverless resources, who can optimize data models for performance and efficiency and write SQL queries to support data analysis and reporting.
Responsibilities:
- Design, implement, and maintain the data architecture for all AWS data services
- Work with stakeholders to identify business needs and requirements for data-related projects
- Design and implement ETL processes to load data into the data warehouse
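Both of the listings above pair Athena with Python. As an illustration, here is a minimal sketch of running an Athena query from Python with boto3 and polling for completion; the region, database, table, and results bucket are hypothetical.

```python
# A minimal sketch of running an Athena query from Python with
# boto3 and polling for the result. Database, table, and output
# location are hypothetical.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

qid = athena.start_query_execution(
    QueryString="SELECT order_id, amount FROM raw_orders LIMIT 10",
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes; production code would add backoff
# and a timeout.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    # The first row returned is the column header row.
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```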
Posted 3 months ago
8 - 13 years
30 - 45 Lacs
Hyderabad
Work from Office
As a Data Engineer in the Data Infrastructure and Strategy group, you will play a key role in transforming the way Operations Finance teams access and analyse data. You will work to advance the 3-Year Data Infrastructure Modernisation strategy and play a key role in adopting and expanding a unified data access platform and scalable governance and observability frameworks that follow modern data architecture and cloud-first designs. Your responsibilities will include supporting and migrating data analytics use cases of a targeted customer group and implementing new features within the central platform: component design, implementation using NAWS services following best engineering practices, user acceptance testing, launch, adoption, and post-launch support. You will work on system design and integrate new components into the established architecture. You will engage in cross-team collaboration by building reusable design patterns and components and adopting those built by others. You will contribute to "buy vs. build" decisions by evaluating the latest product and feature releases for NAWS and internal products, performing gap analysis, and defining the feasibility of their adoption and the list of blockers. The ideal candidate possesses a track record of creating efficient AWS-based data solutions; data models for both relational databases and the Glue/Athena/EMR stack; and developing solution documentation, project plans, user guides, and other project documentation. We are looking for individual contributors inspired to become data systems architects. A track record of production-level deliverables leveraging GenAI is a big plus.
Key job responsibilities
* Elevate and optimize existing solutions while driving strategic migration. Conduct thorough impact assessments to identify opportunities for transformative re-architecture or migration to central platforms. Your insights will shape the technology roadmap, ensuring we make progress towards deprecation goals while providing the best customer service;
* Design, review, and implement data solutions that support WW Operations Finance standardisation and automation initiatives using AWS technologies and internally built tools, including Spark/EMR, Redshift, Athena, DynamoDB, Lambda, S3, Glue, Lake Formation, etc.;
* Support data solution adoption by both finance and technical teams; identify and remove adoption blockers;
* Ensure speed of delivery and high quality: iteratively improve the development process and adopt mechanisms for optimisation of development and support;
* Contribute to engineering excellence by reviewing designs and code created by others;
* Contribute to delivery execution, planning, operational excellence, retrospectives, problem identification, and solution proposals;
* Collaborate with finance analysts, engineers, product and program managers, and external teams to influence and optimize the value of delivery in the data platform;
* Create technical and customer-facing documentation on the products within the platform.
A day in the life
You work with the Engineering, Product, BI, and Operations teams to elevate existing data platforms and implement best-in-class data solutions for the Operations Finance organization. You solve unstructured customer pain points with technical solutions, and you are focused on users' productivity when working with the data. You participate in discussions with stakeholders to provide updates on project progress, gather feedback, and align on priorities. Utilizing AWS CDK and various AWS services, you design, execute, and deploy solutions (a sketch follows below). Your broader focus is on system architecture rather than individual pipelines. You regularly review your designs with a Principal Engineer and incorporate the insights gathered. Conscious of your impact on customers and infrastructure, you establish efficient development and change management processes to guarantee the speed, quality, and scalability of the delivered solution.
About the team
Operations Finance Standardization and Automation improves customer experience and business outcomes across Amazon Operations Finance through innovative technical solutions, standardization and automation of processes, and use of modern data analytics technologies.
Basic qualifications
- MS or BS in Computer Science, Electrical Engineering, or similar fields;
- Strong AWS engineering background: 3+ years of demonstrated track record designing and operating data solutions in Native AWS. The right person will be highly technical and analytical, with the ability to drive technical execution towards organizational goals;
- Exceptional triaging and bug-fixing skills, with the ability to assess risks and implement fixes without customer impact;
- Strong data modelling experience: 3+ years of data modelling practice is required. Expertise in designing both analytical and operational data models is a must. The candidate needs to demonstrate working knowledge of trade-offs in data model designs and platform-specific considerations, with concentration in Redshift, MySQL, EMR/Spark, and Athena;
- Excellent knowledge of modern data architecture concepts - data lakes and data lakehouses - as well as governance practices;
- Strong documentation skills and a proven ability to adapt a document to its audience. The ability to communicate information on levels ranging from executive summaries and strategy addendums to detailed design specifications is critical to success;
- Excellent communication skills, both written and oral, with the ability to communicate technical complexity to a wide range of stakeholders.
Preferred qualifications
- Data governance frameworks experience;
- Compliance frameworks experience, SOX preferred;
- Familiarity or production-level experience with AI-based AWS offerings (Bedrock) is a plus.
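Following the forward reference above, here is a minimal AWS CDK (v2, Python) sketch defining two of the data-platform building blocks this listing mentions: a versioned S3 data-lake bucket and a Glue database. The stack and resource names are hypothetical.

```python
# A minimal AWS CDK v2 (Python) stack: an encrypted, versioned
# S3 data-lake bucket plus a Glue database. Names are hypothetical.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3, aws_glue as glue
from constructs import Construct

class DataPlatformStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Versioned, encrypted bucket for the data lake.
        s3.Bucket(
            self, "DataLakeBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
        )

        # Glue database registered via the L1 CfnDatabase construct.
        glue.CfnDatabase(
            self, "AnalyticsDatabase",
            catalog_id=self.account,
            database_input=glue.CfnDatabase.DatabaseInputProperty(
                name="analytics_db",
            ),
        )

app = cdk.App()
DataPlatformStack(app, "DataPlatformStack")
app.synth()
```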
Posted 3 months ago
3 - 8 years
5 - 10 Lacs
Chennai
Work from Office
As a Production Support Engineer at Chola MS General Insurance, you will be responsible for supporting, maintaining, documenting, expanding, and optimizing our data lake, data warehouse, data pipelines, and data products.
Required candidate profile:
- A minimum of 6+ years in Data Engineering / Data Analytics platforms
- Conduct root-cause analysis as and when needed and propose a corrective action plan
- Follow the established set of processes while handling issues
Posted 3 months ago
6 - 8 years
15 - 20 Lacs
Chennai, Pune
Work from Office
Role: Senior Cloud Data Engineer
Location: Pune/Chennai
Experience: 6 to 8 years
What awaits you / Job profile:
- You will design, develop, and optimize large-scale data pipelines, taking ownership of critical components of the data architecture and ensuring performance, security, and compliance.
- Design and implement scalable data pipelines for batch and real-time processing.
- Optimize data storage and computing resources to improve cost and performance.
- Ensure data security and compliance with industry regulations.
- Collaborate with data scientists, analysts, and application teams to align data storage strategies.
- Lead technical discussions with stakeholders to deliver the best possible solutions.
- Automate data workflows and develop reusable frameworks.
- Monitor and troubleshoot ETL pipelines, jobs, and cloud services.
What you should bring along:
- 6+ years of experience in AWS cloud services and data engineering.
- Strong expertise in data modeling, ETL pipeline design, and SQL query optimization.
- Prior experience working with streaming solutions like Kafka (see the sketch after this listing).
- Excellent knowledge of Terraform and GitHub Actions.
- Experience leading a feature team.
Must-have technical skills:
- Python, PySpark, Hive, Unix
- AWS (S3, Lambda, Glue, Athena, RDS, Step Functions, SNS, SQS, API Gateway)
- MySQL, Oracle, NoSQL, and experience writing complex queries
Good-to-have technical skills:
- Cloud Architect certification
- Terraform
- Git, GitHub Actions, Jenkins
- Delta Lake, Iceberg
- Experience in AI-powered data solutions
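For the Kafka streaming experience referenced above, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic with PySpark and lands raw records on S3. The broker address, topic, and paths are hypothetical, and the spark-sql-kafka connector package must be on the Spark classpath.

```python
# A minimal Spark Structured Streaming job consuming from Kafka
# and landing records on S3. Broker, topic, and paths are
# hypothetical; requires the spark-sql-kafka package.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka keys/values arrive as binary; cast to string for parsing.
parsed = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3://example-lake/landing/orders/")
    .option("checkpointLocation", "s3://example-lake/checkpoints/orders/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```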
Posted 3 months ago
4 - 9 years
30 - 35 Lacs
Bengaluru
Hybrid
We are seeking an AWS Data Engineer with 4+ years of experience to design, build, and maintain scalable data pipelines and cloud-based data solutions using AWS services such as Glue, Lambda, S3, Redshift, and DynamoDB.
Posted 3 months ago
4 - 9 years
20 - 25 Lacs
Bengaluru
Work from Office
Hands-on experience with AWS, PySpark, SQL, and AWS services: Compute (EC2, Lambda), Storage (S3), Database, Orchestration (Apache Airflow, Step Functions), ETL (Glue, EMR, Athena, Redshift), Infra, Data Migration (AWS DataSync, AWS DMS).
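Since the listing above names Apache Airflow and Step Functions for orchestration, here is a minimal Airflow DAG sketch that triggers a Glue job via the Amazon provider package. The DAG id, job name, and region are hypothetical, and it assumes apache-airflow-providers-amazon is installed.

```python
# A minimal Airflow DAG that runs an existing AWS Glue job daily.
# DAG id, job name, and region are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="nightly_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_glue_job = GlueJobOperator(
        task_id="run_orders_etl",
        job_name="orders-etl",          # existing Glue job (hypothetical)
        region_name="us-east-1",
        wait_for_completion=True,       # block until the job finishes
    )
```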
Posted 3 months ago
3 - 8 years
10 - 13 Lacs
Mumbai
Work from Office
Manage and optimize data pipelines for a medallion architecture (Landing, Bronze, Silver, Gold) using AWS S3.
Interested candidates can share their CV at urmi.veera@cygnusad.co.in or call 85910 61941.
Posted 3 months ago
5 - 10 years
13 - 16 Lacs
Bengaluru
Work from Office
Manage and optimize data pipelines within a medallion architecture (Landing, Bronze, Silver, Gold) leveraging AWS S3. Develop and execute data transformation workflows using Python scripts. Knowledge of PostgreSQL and database management principles is required.
Required candidate profile: Experience with other AWS services (e.g., Lambda, Glue, Redshift). Deep understanding of the AWS ecosystem, including S3, IAM, and related services.
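To illustrate the medallion promotion described above, here is a minimal PySpark sketch moving data from the Bronze layer to the Silver layer on S3 - deduplicating, casting types, and filtering bad records. The bucket, paths, and columns are hypothetical.

```python
# A minimal PySpark sketch of promoting data between medallion
# layers (Bronze -> Silver) on S3. Bucket, paths, and columns
# are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Bronze: raw ingested records, stored as-is.
bronze = spark.read.parquet("s3://example-lake/bronze/orders/")

# Silver: cleaned and conformed - deduplicate, enforce types,
# drop records that fail basic quality checks.
silver = (
    bronze.dropDuplicates(["order_id"])
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)
)

silver.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-lake/silver/orders/"
)
```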
Posted 3 months ago
5 - 10 years
18 - 30 Lacs
Pune
Hybrid
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.
Job Position (Title): Big Data Engineers / Leads
Experience Required: 4 to 10 years
Locations: Bangalore, Hyderabad, Pune
Technical Skill Requirements: Big Data, any cloud (AWS/Azure/GCP), SQL, PySpark, Spark, Python, Hive, Airflow
Please share the following details along with your profile: full name, email ID, contact number, total years of experience, relevant experience (Big Data, cloud), self-rating on SQL, any other technology, notice period, CTC, expected CTC, current company, current location, preferred location, any offers in hand (if yes, please mention), and interview availability (face-to-face or virtual).
Posted 3 months ago
5 - 10 years
18 - 30 Lacs
Bengaluru
Hybrid
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.
Job Position (Title): Big Data Engineers / Leads
Experience Required: 4 to 10 years
Locations: Bangalore, Hyderabad, Pune
Technical Skill Requirements: Big Data, any cloud (AWS/Azure/GCP), SQL, PySpark, Spark, Python, Hive, Airflow
Please share the following details along with your profile: full name, email ID, contact number, total years of experience, relevant experience (Big Data, cloud), self-rating on SQL, any other technology, notice period, CTC, expected CTC, current company, current location, preferred location, any offers in hand (if yes, please mention), and interview availability (face-to-face or virtual).
Posted 3 months ago
5 - 10 years
18 - 30 Lacs
Hyderabad
Hybrid
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.
Job Position (Title): Big Data Engineers / Leads
Experience Required: 4 to 10 years
Locations: Bangalore, Hyderabad, Pune
Technical Skill Requirements: Big Data, any cloud (AWS/Azure/GCP), SQL, PySpark, Spark, Python, Hive, Airflow
Please share the following details along with your profile: full name, email ID, contact number, total years of experience, relevant experience (Big Data, cloud), self-rating on SQL, any other technology, notice period, CTC, expected CTC, current company, current location, preferred location, any offers in hand (if yes, please mention), and interview availability (face-to-face or virtual).
Posted 3 months ago
4 - 9 years
18 - 30 Lacs
Pune
Hybrid
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.
Job Position (Title): Big Data Engineers / Leads
Experience Required: 4 to 10 years
Location: Bangalore (CV Raman Nagar, Baghmane Road)
Technical Skill Requirements: Big Data, any cloud (AWS/Azure/GCP), SQL, PySpark, Spark, Python, Hive, Airflow
Please share the following details along with your profile: full name, email ID, contact number, total years of experience, relevant experience (Big Data, cloud), self-rating on SQL, any other technology, notice period, CTC, expected CTC, current company, current location, preferred location, any offers in hand (if yes, please mention), and interview availability (face-to-face or virtual).
Posted 3 months ago
4 - 9 years
18 - 30 Lacs
Bengaluru
Hybrid
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.
Job Position (Title): Big Data Engineers / Leads
Experience Required: 4 to 10 years
Location: Bangalore (CV Raman Nagar, Baghmane Road)
Technical Skill Requirements: Big Data, any cloud (AWS/Azure/GCP), SQL, PySpark, Spark, Python, Hive, Airflow
Please share the following details along with your profile: full name, email ID, contact number, total years of experience, relevant experience (Big Data, cloud), self-rating on SQL, any other technology, notice period, CTC, expected CTC, current company, current location, preferred location, any offers in hand (if yes, please mention), and interview availability (face-to-face or virtual).
Posted 3 months ago
4 - 9 years
18 - 30 Lacs
Hyderabad
Hybrid
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.
Job Position (Title): Big Data Engineers / Leads
Experience Required: 4 to 10 years
Location: Bangalore (CV Raman Nagar, Baghmane Road)
Technical Skill Requirements: Big Data, any cloud (AWS/Azure/GCP), SQL, PySpark, Spark, Python, Hive, Airflow
Please share the following details along with your profile: full name, email ID, contact number, total years of experience, relevant experience (Big Data, cloud), self-rating on SQL, any other technology, notice period, CTC, expected CTC, current company, current location, preferred location, any offers in hand (if yes, please mention), and interview availability (face-to-face or virtual).
Posted 3 months ago
5 - 10 years
11 - 21 Lacs
Pune
Work from Office
Exciting Career Opportunities at GSPANN Technologies!
Are you a seasoned professional with expertise in Big Data, Spark, SQL, Redshift, Python/PySpark, Hive, and AWS (S3, EMR)?
Experience Required: 4+ years
Send your resume to: heena.ruchwani@gspann.com
Join us and be a part of our dynamic team!
Posted 3 months ago
5 - 10 years
11 - 21 Lacs
Gurgaon
Work from Office
Exciting Career Opportunities at GSPANN Technologies!
Are you a seasoned professional with expertise in Big Data, Spark, SQL, Redshift, Python/PySpark, Hive, and AWS (S3, EMR)?
Experience Required: 4+ years
Send your resume to: heena.ruchwani@gspann.com
Join us and be a part of our dynamic team!
Posted 3 months ago
5 - 10 years
8 - 13 Lacs
Gurgaon
Work from Office
The Role and Responsibilities
We have open positions ranging from Data Engineer to Lead Data Engineer, providing talented and motivated professionals with excellent career and growth opportunities. We seek individuals with relevant prior experience in quantitatively intense areas to join our team. You'll be working with varied and diverse teams to deliver unique and unprecedented solutions across all industries. In the data engineering track, you will be primarily responsible for developing and monitoring high-performance applications that can rapidly deploy the latest machine learning frameworks and other advanced analytical techniques at scale. This role requires you to be a proactive learner and quickly pick up new technologies whenever required. Most of the projects require handling big data, so you will work on related technologies extensively. You will work closely with other team members to support project delivery and ensure client satisfaction.
Your responsibilities will include:
- Working alongside Oliver Wyman consulting teams and partners, engaging directly with clients to understand their business challenges
- Exploring large-scale data and designing, developing, and maintaining data/software pipelines and ETL processes for internal and external stakeholders
- Explaining, refining, and developing the necessary architecture to guide stakeholders through the journey of model building
- Advocating the application of best practices in data engineering, code hygiene, and code reviews
- Leading the development of proprietary data engineering assets, ML algorithms, and analytical tools on varied projects
- Creating and maintaining documentation to support stakeholders, and runbooks for operational excellence
- Working with partners and principals to shape proposals that showcase our data engineering and analytics capabilities
- Travelling to clients' locations across the globe when required, understanding their problems, and delivering appropriate solutions in collaboration with them
- Keeping up with emerging state-of-the-art data engineering techniques in your domain
Your Attributes, Experience & Qualifications:
- Bachelor's or master's degree in a computational or quantitative discipline from a top academic program (Computer Science, Informatics, Data Science, or related)
- Exposure to building cloud-ready applications
- Exposure to test-driven development and integration
- Pragmatic and methodical approach to solutions and delivery with a focus on impact
- Independent worker with the ability to manage workload and meet deadlines in a fast-paced environment
- Collaborative team player
- Excellent verbal and written communication skills and command of English
- Willingness to travel
- Respect for confidentiality
Technical Background:
- Prior experience in designing and deploying large-scale technical solutions
- Fluency in modern programming languages (Python is mandatory; R, SAS desired)
- Experience with AWS/Azure/Google Cloud, including familiarity with services such as S3, EC2, Lambda, Glue
- Strong SQL skills and experience with relational databases such as MySQL, PostgreSQL, or Oracle
- Experience with big data tools like Hadoop, Spark, Kafka
- Demonstrated knowledge of data structures and algorithms
- Familiarity with version control systems like GitHub or Bitbucket
- Familiarity with modern storage and computational frameworks
- Basic understanding of agile methodologies such as CI/CD, Applicant Resiliency, and Security
Valued but not required:
- Compelling side projects or contributions to the Open-Source community
- Prior experience with machine learning frameworks (e.g., Scikit-Learn, TensorFlow, Keras/Theano, Torch, Caffe, MxNet)
- Familiarity with containerization technologies, such as Docker and Kubernetes
- Experience with UI development using frameworks such as Angular, VUE, or React
- Experience with NoSQL databases such as MongoDB or Cassandra
- Experience presenting at data science conferences and connections within the data science community
- Interest/background in Financial Services in particular, as well as other sectors where Oliver Wyman has a strategic presence
Posted 3 months ago
5 - 10 years
7 - 11 Lacs
Bengaluru
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: AWS Glue
Good-to-have skills: NA
Posted 3 months ago
In recent years, demand for professionals with expertise in AWS Glue has been on the rise in India. Glue jobs involve building and operating ETL pipelines and integrations that connect various systems and data stores. This article provides an overview of the Glue job market in India, including top hiring locations, average salary ranges, career progression, related skills, and interview questions for aspiring job seekers.
Here are five major cities in India actively hiring for Glue roles:
1. Bangalore
2. Pune
3. Hyderabad
4. Chennai
5. Mumbai
The estimated salary range for Glue professionals in India varies by experience level. Entry-level professionals can expect around INR 4-6 lakhs per annum, while experienced professionals can earn between INR 12-18 lakhs per annum.
In the field of Glue technologies, a typical career progression may include roles such as:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect
Apart from expertise in Glue itself, professionals in this field are often expected to have or develop skills in:
- Data integration
- ETL (Extract, Transform, Load) processes
- Database management
- Programming languages (e.g., Python, Java)
Here are 25 interview questions for Glue roles:
- What is Glue in the context of data integration? (basic)
- Explain the difference between ETL and ELT. (basic)
- How would you handle data quality issues in a Glue job? (medium)
- Can you explain how Glue works with Apache Spark? (medium)
- What is the significance of schema evolution in Glue? (medium)
- How do you optimize Glue jobs for performance? (medium)
- Describe a scenario where you had to troubleshoot a failed Glue job. (medium)
- What is a bookmark in Glue and how is it used? (medium)
- How does Glue handle schema inference? (medium)
- Have you worked with AWS Glue DataBrew? If so, explain your experience. (medium)
- Explain how Glue handles schema evolution. (advanced)
- How does Glue support job bookmarks for incremental processing? (advanced)
- What are the differences between Glue ETL and Glue DataBrew? (advanced)
- How do you handle nested JSON structures in Glue transformations? (advanced)
- Explain a complex Glue job you have designed and implemented. (advanced)
- How does Glue handle dynamic frame operations? (advanced)
- What is the role of a Glue DynamicFrame in data transformation? (advanced)
- How do you handle schema changes in Glue jobs? (advanced)
- Explain how Glue can be integrated with other AWS services. (advanced)
- What are the limitations of Glue that you have encountered in your projects? (advanced)
- How do you monitor and debug Glue jobs in production environments? (advanced)
- Describe your experience with Glue job scheduling and orchestration. (advanced)
- How do you ensure security in Glue jobs that handle sensitive data? (advanced)
- Explain the concept of lazy evaluation in Glue. (advanced)
- How do you handle dependencies between Glue jobs in a workflow? (advanced)
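As a companion to the job-bookmark and DynamicFrame questions above, here is a minimal Glue PySpark sketch: transformation_ctx names make the reads and writes bookmarkable (so reruns process only new files, once bookmarks are enabled on the job), and resolveChoice casts a column whose type Glue inferred ambiguously. All database, table, path, and column names are hypothetical.

```python
# A minimal Glue job illustrating job bookmarks (transformation_ctx)
# and DynamicFrame.resolveChoice for ambiguous inferred types.
# All names are hypothetical; bookmarks must be enabled on the job.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# transformation_ctx is the bookmark key: on reruns, Glue skips
# data already processed under this context name.
events = glue_context.create_dynamic_frame.from_catalog(
    database="logs_db",
    table_name="raw_events",
    transformation_ctx="read_raw_events",
)

# Resolve a column Glue inferred as an ambiguous choice type.
cleaned = events.resolveChoice(specs=[("user_id", "cast:string")])

glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-lake/clean/events/"},
    format="parquet",
    transformation_ctx="write_clean_events",
)
# Committing the job advances the bookmark state.
job.commit()
```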
As you prepare for interviews and explore opportunities in the Glue job market in India, remember to showcase your expertise in Glue, related skills, and problem-solving abilities. With the right preparation and confidence, you can land a rewarding career in this dynamic and growing field. Good luck!