2 - 6 years
12 - 16 Lacs
Pune
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize value and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Design and develop data solutions: design and implement efficient data processing pipelines using AWS services such as AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift (see the sketch below).
- Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems.
- Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services.
- Data integration: integrate data from multiple sources, including relational databases, third-party APIs, and internal systems, to create a unified data ecosystem. Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance.
- Automation and optimization: automate data pipeline processes to ensure efficiency.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
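For illustration, here is a minimal sketch of the kind of Glue ETL job this role describes: read raw JSON from S3, clean it with PySpark, and write partitioned Parquet back to S3. The bucket names, column names, and schema are assumptions for the sketch, not details from the posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap: the runtime passes JOB_NAME as an argument.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read raw JSON events from S3 (bucket and columns are hypothetical).
raw = spark.read.json("s3://example-raw-bucket/events/")

# Transform: drop malformed rows and derive a partition column.
clean = (
    raw.dropna(subset=["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write curated Parquet back to S3, partitioned for cheap date filters.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/events/"
)

job.commit()
```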
Posted 1 month ago
4 - 9 years
12 - 16 Lacs
Hyderabad
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in developing data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases, leveraging the Spark framework with Python or Scala and big data technologies built on the platform.
- Experience in developing streaming pipelines (see the sketch below).
- Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 5-7+ years of total experience in data management (DW, DL, data platform, lakehouse) and data engineering.
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS; exposure to streaming solutions and message brokers such as Kafka.
- Experience with AWS EMR, AWS Glue, Databricks, Amazon Redshift, and DynamoDB.
- Good to excellent SQL skills.

Preferred technical and professional experience:
- Certification in AWS, and Databricks or Cloudera Spark certified developer.
- AWS S3, Redshift, and EMR for data storage and distributed processing.
- AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes.
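As a hedged sketch of the streaming work described above, the following Spark Structured Streaming job reads a Kafka topic and lands it in a data lake. The broker, topic, schema, and paths are all assumptions, and the job requires the spark-sql-kafka connector package on the classpath.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

# Read a Kafka topic as an unbounded DataFrame (broker and topic are hypothetical).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka delivers key/value as bytes; cast the value and parse the JSON payload.
schema = "order_id STRING, amount DOUBLE, ts TIMESTAMP"
parsed = events.select(
    F.from_json(F.col("value").cast("string"), schema).alias("o")
).select("o.*")

# Append micro-batches to the lake, with a checkpoint for failure recovery.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3a://example-lake/orders/")
    .option("checkpointLocation", "s3a://example-lake/_checkpoints/orders/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```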
Posted 1 month ago
8 - 13 years
12 - 22 Lacs
Gurugram
Work from Office
Data & Information Architecture Lead | 8 to 15 years | Gurgaon

Summary: An excellent opportunity for Data Architect professionals with expertise in data engineering, analytics, AWS, and databases. Location: Gurgaon.

Your future employer: A leading financial services provider specializing in delivering innovative and tailored solutions to meet the diverse needs of its clients, offering a wide range of services including investment management, risk analysis, and financial consulting.

Responsibilities:
- Design and optimize the architecture of an end-to-end data fabric, inclusive of the data lake, data stores, and EDW, in alignment with EA guidelines and standards for cataloging and maintaining data repositories.
- Undertake detailed analysis of information management requirements across all systems, platforms, and applications to guide the development of information management standards.
- Lead the design of the information architecture across multiple data types, working closely with business partners/consumers, the MIS team, the AI/ML team, and other departments to design, deliver, and govern future-proof data assets and solutions.
- Design and ensure delivery excellence for (a) large and complex data transformation programs, (b) small and nimble data initiatives to realize quick gains, and (c) engagements with OEMs and partners to bring in the best tools and delivery methods.
- Drive data domain modeling, data engineering, and data resiliency design standards across the microservices and analytics application fabric for autonomy, agility, and scale.

Requirements:
- Deep understanding of the data and information architecture discipline, processes, concepts, and best practices.
- Hands-on expertise in building and implementing data architecture for large enterprises.
- Proven architecture modeling skills; strong analytics and reporting experience.
- Strong data design, management, and maintenance experience.
- Strong experience with data modeling tools.
- Extensive experience with cloud-native lake technologies, e.g., AWS-native lake solutions.
Posted 1 month ago
3 - 8 years
4 - 9 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL/ELT processes.
- Develop and optimize data models for analytics and operational purposes in cloud-based data warehouses (e.g., Snowflake, Redshift, BigQuery).
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver reliable datasets.
- Implement data quality checks, monitoring, and alerting for pipelines.
- Work with structured and unstructured data across various sources (APIs, databases, streaming).
- Ensure data security, compliance, and governance practices are followed.
- Write clean, efficient, and testable code using Python, SQL, or Scala.
- Support the development of data catalogs and documentation.
- Participate in code reviews and contribute to best practices in data engineering.

Preferred candidate profile:
- 3-9 years of hands-on experience in data engineering or a similar role.
- Strong proficiency in SQL, Python, and PySpark.
- Experience with a data pipeline orchestration tool such as Apache Airflow, Prefect, or Luigi (any one; see the sketch below).
- Familiarity with a cloud platform such as AWS, Azure, or GCP (e.g., S3, Lambda, Glue, BigQuery, Dataflow) (any one).
- Experience with a big data tool such as Spark, Kafka, Hive, or Hadoop (any one).
- Strong understanding of relational and non-relational databases.
- Exposure to CI/CD practices and tools (e.g., Git, Jenkins, Docker).
- Excellent problem-solving and communication skills.
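As a minimal illustration of the orchestration pattern this profile asks for, here is a skeletal Airflow DAG wiring an extract-transform-load sequence. The DAG name, schedule, and task bodies are hypothetical stubs.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**_):
    # Pull raw records from a source system (stubbed for the sketch).
    print("extracting")


def transform(**_):
    # Apply cleaning and business rules (stubbed for the sketch).
    print("transforming")


def load(**_):
    # Write curated data to the warehouse (stubbed for the sketch).
    print("loading")


with DAG(
    dag_id="daily_sales_etl",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ spelling of schedule_interval
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)

    extract_t >> transform_t >> load_t
```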
Posted 1 month ago
4 - 8 years
10 - 15 Lacs
Pune
Remote
Position: AWS Data Engineer

About bluCognition: bluCognition is an AI/ML-based start-up specializing in developing data products that leverage alternative data sources and providing servicing support to clients in the financial services sector. Founded in 2017 by well-known senior professionals from the financial services industry, the company is headquartered in the US, with its delivery centre based in Pune. We build all our solutions leveraging the latest technology stack in AI, ML, and NLP, combined with decades of experience in risk management at some of the largest financial services firms in the world. Our clients are some of the biggest and most progressive names in the financial services industry. We are entering a significant growth phase and are looking for individuals with an entrepreneurial mindset who want to join us on this exciting journey. https://www.blucognition.com

The Role: We are seeking an experienced AWS Data Engineer to design, build, and manage scalable data pipelines and cloud-based solutions. In this role, you will work closely with data scientists, analysts, and software engineers to develop systems that support data-driven decision-making.

Key Responsibilities:
1) Design, implement, and maintain robust, scalable, and efficient data pipelines using AWS services.
2) Develop ETL/ELT processes and automate data workflows for real-time and batch data ingestion.
3) Optimize data storage solutions (e.g., S3, Redshift, RDS, DynamoDB) for performance and cost-efficiency.
4) Build and maintain data lakes and data warehouses following best practices for security, governance, and compliance.
5) Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
6) Monitor, troubleshoot, and improve the reliability and quality of data systems.
7) Implement data quality checks, logging, and error handling in data pipelines (see the sketch below).
8) Use Infrastructure as Code (IaC) tools like AWS CloudFormation or Terraform for environment management.
9) Stay up to date with the latest developments in AWS services and big data technologies.

Required Qualifications:
1) Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
2) 4+ years of experience working as a data engineer or in a similar role.
3) Strong experience with AWS services such as AWS Glue, AWS Lambda, Amazon S3, Amazon Redshift, Amazon RDS, Amazon EMR, and AWS Step Functions.
4) Proficiency in SQL and Python.
5) Solid understanding of data modeling, ETL processes, and data warehouse architecture.
6) Experience with orchestration tools like Apache Airflow or AWS Managed Workflows.
7) Knowledge of security best practices for cloud environments (IAM, KMS, VPC, etc.).
8) Experience with monitoring and logging tools (CloudWatch, X-Ray, etc.).

Preferred Qualifications:
1) Good to have: AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect certification.
2) Experience with real-time data streaming technologies like Kinesis or Kafka.
3) Familiarity with DevOps practices and CI/CD pipelines.
4) Knowledge of machine learning data preparation and MLOps workflows.

Soft Skills:
1) Excellent problem-solving and analytical skills.
2) Strong communication skills with both technical and non-technical stakeholders.
3) Ability to work independently and collaboratively in a team environment.
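For responsibility 7, here is a hedged sketch of batch-level quality gates with logging and error handling in a PySpark pipeline. The column names and rules are invented for illustration only.

```python
import logging

from pyspark.sql import DataFrame, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")


def run_quality_checks(df: DataFrame) -> DataFrame:
    """Fail fast on basic expectations before loading a batch downstream."""
    total = df.count()
    if total == 0:
        raise ValueError("Empty batch: refusing to load zero rows")

    # Hypothetical rule: every row needs a customer_id.
    null_ids = df.filter(F.col("customer_id").isNull()).count()
    if null_ids:
        raise ValueError(f"{null_ids}/{total} rows are missing customer_id")

    # Hypothetical rule: (customer_id, event_ts) should be unique; dedupe and warn.
    deduped = df.dropDuplicates(["customer_id", "event_ts"])
    dupes = total - deduped.count()
    if dupes:
        log.warning("Dropped %d duplicate rows", dupes)

    log.info("Quality checks passed for %d rows", total - dupes)
    return deduped
```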
Posted 1 month ago
6 - 11 years
0 Lacs
Hyderabad
Hybrid
Job Title: AWS Data Engineer
Hire Type: Full-time
Location: Hyderabad
Experience: Minimum 6+ years (immediate joiners only)

Job Description:
- 6+ years of experience in data engineering, specifically in cloud environments like AWS.
- Proficiency in PySpark for distributed data processing and transformation.
- Solid experience with AWS Glue for ETL jobs and managing data workflows.
- Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
- Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.

Nice to have:
- Proficiency in Python and PySpark for data processing and transformation tasks.
- Deep understanding of ETL concepts and best practices.
- Familiarity with AWS Glue (ETL jobs, Data Catalog, and Crawlers; see the sketch below).
- Experience building and maintaining data pipelines with AWS Data Pipeline or similar orchestration tools.
- Familiarity with AWS S3 for data storage and management, including file formats (CSV, Parquet, Avro).
- Strong knowledge of SQL for querying and manipulating relational and semi-structured data.
- Experience with data warehousing and big data technologies, specifically within AWS.
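Since the nice-to-haves call out the Glue Data Catalog and Crawlers, here is a hedged sketch of reading a catalog table as a DynamicFrame and writing curated output back to S3. The database, table, and paths are hypothetical.

```python
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read a table registered by a Crawler in the Glue Data Catalog
# (database and table names are hypothetical).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",
    table_name="raw_orders",
)

# DynamicFrames tolerate schema drift; convert to a Spark DataFrame
# when SQL-style filtering is more convenient.
recent = orders.toDF().where("order_date >= '2024-01-01'")

# Write curated, partitioned Parquet back to S3.
out = DynamicFrame.fromDF(recent, glue_context, "recent_orders")
glue_context.write_dynamic_frame.from_options(
    frame=out,
    connection_type="s3",
    connection_options={
        "path": "s3://example-curated/orders/",
        "partitionKeys": ["order_date"],
    },
    format="parquet",
)
```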
Posted 1 month ago
6 - 11 years
30 - 35 Lacs
Indore, Hyderabad, Delhi / NCR
Work from Office
Responsibilities:
- Support enhancements to the MDM platform
- Track system performance
- Troubleshoot issues
- Resolve production issues

Required candidate profile:
- 5+ years in Python and advanced SQL, including profiling and refactoring (see the sketch below)
- Experience with REST APIs
- Hands-on Azure Databricks and ADF
- Experience with Markit EDM or Semarchy
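As a small illustration of the data profiling this profile mentions, here is a pandas sketch that summarizes each column of an extract. The table name in the usage comment is a hypothetical MDM table, not one named in the posting.

```python
import pandas as pd


def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Per-column profile: dtype, null rate, distinct count, one sample value."""
    rows = []
    for col in df.columns:
        s = df[col]
        rows.append({
            "column": col,
            "dtype": str(s.dtype),
            "null_pct": round(s.isna().mean() * 100, 2),
            "distinct": s.nunique(dropna=True),
            "sample": s.dropna().iloc[0] if s.notna().any() else None,
        })
    return pd.DataFrame(rows)


# Hypothetical usage against an MDM extract pulled over a DB connection:
# report = profile(pd.read_sql("SELECT * FROM mdm.golden_customer", conn))
# print(report.to_string(index=False))
```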
Posted 1 month ago
3 - 6 years
10 - 20 Lacs
Gurugram
Work from Office
About ProcDNA: ProcDNA is a global consulting firm. We fuse design thinking with cutting-edge tech to create game-changing Commercial Analytics and Technology solutions for our clients. We're a passionate team of 275+ across 6 offices, all growing and learning together since our launch during the pandemic. Here, you won't be stuck in a cubicle - you'll be out in the open water, shaping the future with brilliant minds. At ProcDNA, innovation isn't just encouraged, it's ingrained in our DNA.

What we are looking for: As the Associate Engagement Lead, you'll leverage data to unravel complexities and devise strategic solutions that deliver tangible results for our clients. We are seeking an individual who not only possesses the requisite expertise but also thrives in the dynamic landscape of a fast-paced global firm.

What you'll do:
- Design and implement complex, scalable enterprise data processing and BI reporting solutions.
- Design, build, and optimize ETL pipelines and underlying code to enhance data warehouse systems.
- Work towards optimizing the overall costs incurred by system infrastructure, operations, change management, etc.
- Deliver end-to-end data solutions across multiple infrastructures and applications.
- Coach, mentor, and manage a team of junior associates to help them plan tasks effectively and more.
- Demonstrate overall client stakeholder and project management skills (drive client meetings, create realistic project timelines, plan and manage individual and team tasks).
- Assist senior leadership in business development proposals focused on technology by providing SME support.
- Build strong partnerships with other teams to create valuable solutions.
- Stay up to date with the latest industry trends.

Must have:
- 3-5 years of experience in designing/building data warehouses and BI reporting with a B.Tech/B.E background.
- Prior experience managing client stakeholders and junior team members.
- A background in managing Life Science clients is mandatory.
- Proficiency in big data processing and cloud technologies like AWS, Azure, Databricks, PySpark, Hadoop, etc.; proficiency in Informatica is a plus.
- Extensive hands-on experience with cloud data warehouses like Redshift, Azure, Snowflake, etc.; proficiency in SQL, data modelling, and designing ETL pipelines is a must.
- Intermediate to expert-level proficiency in Python.
- Proficiency in Tableau, PowerBI, or Qlik is a must.
- Should have worked on large datasets and complex data modelling projects.
- Prior experience in business development activities is mandatory.
- Domain knowledge of the pharma/healthcare landscape is mandatory.
Posted 1 month ago
5 - 10 years
20 - 35 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
EPAM has a presence across 40+ countries globally, with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, and the Middle East, with development centers in India (Hyderabad, Pune, and Bangalore).

Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai
Work Mode: Hybrid (2-3 days in office per week)

Job Description:
- 5-14 years of experience in Big Data and related data technologies
- Expert-level understanding of distributed computing principles
- Expert-level knowledge of and experience in Apache Spark
- Hands-on programming with Python
- Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop
- Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
- Good understanding of Big Data querying tools such as Hive and Impala
- Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP systems, and files
- Good understanding of SQL queries, joins, stored procedures, and relational schemas
- Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
- Knowledge of ETL techniques and frameworks
- Performance tuning of Spark jobs (see the sketch below)
- Experience with native cloud data services (AWS/Azure)
- Ability to lead a team efficiently
- Experience designing and implementing Big Data solutions
- Practitioner of AGILE methodology

WE OFFER:
- Opportunity to work on technical challenges that may impact across geographies
- Vast opportunities for self-development: online university, global knowledge-sharing, and learning through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Possibility to relocate to any EPAM office for short- and long-term projects
- Focused individual development
- Benefit package: health and medical benefits, retirement benefits, paid time off, flexible benefits
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
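As a small illustration of the Spark performance tuning named above, this sketch uses a broadcast join to avoid shuffling a large fact table, then aligns output partitioning with the write key. Paths, table names, and columns are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Hypothetical inputs: a large fact table and a small dimension table.
facts = spark.read.parquet("s3a://example-lake/transactions/")
stores = spark.read.parquet("s3a://example-lake/dim_store/")

# Broadcasting the small dimension ships it to every executor, so the large
# fact table is never shuffled for the join.
enriched = facts.join(F.broadcast(stores), on="store_id", how="left")

# Repartition on the write key so output files align with the partition
# layout that downstream readers will prune on.
(
    enriched.repartition("txn_date")
    .write.mode("overwrite")
    .partitionBy("txn_date")
    .parquet("s3a://example-lake/enriched_transactions/")
)
```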
Posted 1 month ago
2 - 5 years
4 - 8 Lacs
Pune
Work from Office
About The Role: The candidate must possess knowledge relevant to the functional area, act as a subject matter expert in providing advice in the area of expertise, and focus on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality, and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage the career aspirations of direct reports. Communication skills are key here: to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives.

Process Manager roles and responsibilities:
- Designing and implementing scalable, reliable, and maintainable data architectures on AWS.
- Developing data pipelines to extract, transform, and load (ETL) data from various sources into AWS environments.
- Creating and optimizing data models and schemas for performance and scalability using AWS services like Redshift, Glue, Athena, etc.
- Integrating AWS data solutions with existing systems and third-party services.
- Monitoring and optimizing the performance of AWS data solutions, ensuring efficient query execution and data retrieval (see the Athena sketch below).
- Implementing data security and encryption best practices in AWS environments.
- Documenting data engineering processes, maintaining data pipeline infrastructure, and providing support as needed.
- Working closely with cross-functional teams, including data scientists, analysts, and stakeholders, to understand data requirements and deliver solutions.

Technical and Functional Skills:
- Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments.
- Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc.
- Proficiency in programming languages commonly used in data engineering, such as Python, SQL, Scala, or Java.
- Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift.
- Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines.
- Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts.
- Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS.
- Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform).
- Ability to analyze complex technical problems and propose effective solutions.
- Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.
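For the query-execution monitoring responsibility above, here is a hedged boto3 sketch that submits an Athena query and polls for completion. The database, query, result bucket, and region are hypothetical.

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Kick off a query; database, table, and result bucket are hypothetical.
qid = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) AS n FROM sales.orders GROUP BY region",
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state (production code would add
# exponential backoff and a timeout).
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    result = athena.get_query_results(QueryExecutionId=qid)
    for row in result["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```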
Posted 1 month ago
1 - 4 years
2 - 6 Lacs
Pune
Work from Office
About The Role: The candidate must possess knowledge relevant to the functional area, act as a subject matter expert in providing advice in the area of expertise, and focus on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality, and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage the career aspirations of direct reports. Communication skills are key here: to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives.

Process Manager roles and responsibilities:
- Designing and implementing scalable, reliable, and maintainable data architectures on AWS.
- Developing data pipelines to extract, transform, and load (ETL) data from various sources into AWS environments.
- Creating and optimizing data models and schemas for performance and scalability using AWS services like Redshift, Glue, Athena, etc.
- Integrating AWS data solutions with existing systems and third-party services.
- Monitoring and optimizing the performance of AWS data solutions, ensuring efficient query execution and data retrieval.
- Implementing data security and encryption best practices in AWS environments.
- Documenting data engineering processes, maintaining data pipeline infrastructure, and providing support as needed.
- Working closely with cross-functional teams, including data scientists, analysts, and stakeholders, to understand data requirements and deliver solutions.

Technical and Functional Skills:
- Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments.
- Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc.
- Proficiency in programming languages commonly used in data engineering, such as Python, SQL, Scala, or Java.
- Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift.
- Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines.
- Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts.
- Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS.
- Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform).
- Ability to analyze complex technical problems and propose effective solutions.
- Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.
Posted 1 month ago
2 - 5 years
4 - 8 Lacs
Pune
Work from Office
About The Role: Process Manager - AWS Data Engineer
Mumbai/Pune | Full-time (FT) | Technology Services
Shift Timings: EMEA (1pm-9pm) | Management Level: PM | Travel Requirements: NA

The ideal candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The role requires identifying discrepancies and proposing optimal solutions using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.

Process Manager roles and responsibilities:
- Understand clients' requirements and provide effective and efficient solutions in AWS using Snowflake.
- Assemble large, complex sets of data that meet non-functional and functional business requirements.
- Architect and design data pipelines using Snowflake/Redshift and consolidate data in the data lake and data warehouse.
- Demonstrated strength and experience in data modeling, ETL development, and data warehousing concepts.
- Understand data pipelines and modern ways of automating them using cloud-based tooling.
- Test and clearly document implementations, so others can easily understand the requirements, implementation, and test conditions.
- Perform data quality testing and assurance as part of designing, building, and implementing scalable data solutions in SQL (see the Snowflake sketch below).

Technical and Functional Skills:
- AWS Services: Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc.
- Programming Languages: Proficiency in programming languages commonly used in data engineering, such as Python, SQL, Scala, or Java.
- Data Warehousing: Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift.
- ETL Tools: Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines.
- Database Management: Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts.
- Big Data Technologies: Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS.
- Version Control: Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform).
- Problem-solving Skills: Ability to analyze complex technical problems and propose effective solutions.
- Communication Skills: Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.
- Education and Experience: Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments.

About eClerx: eClerx is a global leader in productized services, bringing together people, technology, and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics, and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience.

About eClerx Technology: eClerx's Technology Group collaboratively delivers Analytics, RPA, AI, and Machine Learning digital technologies that enable our consultants to help businesses thrive in a connected world. Our consultants and specialists partner with our global clients and colleagues to build and implement digital solutions through a broad spectrum of activities. To know more about us, visit https://eclerx.com

eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.
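As a hedged sketch of the Snowflake loading and SQL-based quality testing this role describes, the following uses the Snowflake Python connector to bulk-load a staging table and run a post-load check. The account, stage, table, and credentials are placeholders.

```python
import snowflake.connector

# All connection parameters are placeholders; in practice they come from a
# secrets manager, never from source code.
conn = snowflake.connector.connect(
    account="example_account",
    user="ETL_USER",
    password="<from-secrets-store>",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()

    # Bulk-load files from an external stage into a staging table.
    cur.execute("""
        COPY INTO staging.orders
        FROM @raw_stage/orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)

    # Post-load data quality gate before promoting the batch.
    cur.execute("SELECT COUNT(*) FROM staging.orders WHERE order_id IS NULL")
    (null_ids,) = cur.fetchone()
    if null_ids:
        raise ValueError(f"{null_ids} rows loaded without an order_id")
finally:
    conn.close()
```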
Posted 1 month ago
7 - 12 years
30 - 40 Lacs
Pune, Bengaluru, Delhi / NCR
Hybrid
Responsibilities:
- Support enhancements to the MDM platform
- Develop pipelines using Snowflake, Python, SQL, and Airflow (see the sketch below)
- Track system performance
- Troubleshoot issues
- Resolve production issues

Required candidate profile:
- 5+ years of hands-on, expert-level experience with Snowflake, Python, and orchestration tools like Airflow
- Good understanding of the investment domain
- Experience with dbt; cloud experience (AWS, Azure); DevOps
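Since the profile combines Airflow, Snowflake, and dbt, here is a hedged sketch of orchestrating a dbt build-and-test cycle from an Airflow DAG. The DAG id, project path, target name, and schedule are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Orchestrate dbt from Airflow; the project path, target name, and schedule
# are hypothetical.
with DAG(
    dag_id="dbt_snowflake_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/investments --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/investments --target prod",
    )

    # Tests run only after a successful build and gate downstream consumers.
    dbt_run >> dbt_test
```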
Posted 1 month ago
4 - 8 years
0 - 2 Lacs
Chennai, Bengaluru
Work from Office
We are seeking a highly motivated and skilled Lead Engineer to join our team for a critical migration project. This role will focus on migrating data and services from on-premise or legacy systems to cloud platforms (preferably AWS). The ideal candidate will have a solid background in software engineering, cloud technologies, and hands-on experience with data and application migration projects.

Key Responsibilities:
- Collaborate with cross-functional teams to gather requirements and define migration strategies.
- Develop and implement migration processes to move legacy applications and data to cloud platforms like AWS.
- Write scripts and automation to support data migration, system configuration, and cloud infrastructure provisioning (see the sketch below).
- Ensure the migration adheres to performance, security, and compliance standards.
- Identify potential issues, troubleshoot, and implement fixes during the migration process.
- Maintain documentation of migration processes and post-migration maintenance plans.
- Provide technical support post-migration to ensure smooth operation of the migrated systems.
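As a minimal sketch of the migration automation described above, this boto3 script copies objects between S3 buckets server-side and verifies size parity. The bucket names and prefix are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket names; the copy happens server-side inside S3, so the
# data never transits the migration host.
SRC, DST = "legacy-data-bucket", "new-data-lake-bucket"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SRC, Prefix="exports/"):
    for obj in page.get("Contents", []):
        key = obj["Key"]

        # copy_object handles objects up to 5 GB; larger objects need a
        # multipart copy (e.g., boto3's managed s3.copy()).
        s3.copy_object(
            Bucket=DST,
            Key=key,
            CopySource={"Bucket": SRC, "Key": key},
        )

        # Verify size parity before treating the object as migrated.
        dst_size = s3.head_object(Bucket=DST, Key=key)["ContentLength"]
        if dst_size != obj["Size"]:
            raise RuntimeError(f"Size mismatch after copy for {key}")
```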
Posted 1 month ago
4 - 8 years
15 - 25 Lacs
Chennai, Bengaluru, Hyderabad
Hybrid
We are looking for a hands-on AWS Data Engineer for a permanent role.
Experience: 4 to 8 years
Location: Hyderabad/Chennai/Noida/Pune/Bangalore
Notice Period: Immediate

Skills:
- Expertise in data warehousing and ETL design and implementation
- Hands-on experience with a programming language like Python
- Good understanding of Spark architecture along with its internals
- Hands-on experience using AWS services like Glue (PySpark), Lambda, S3, and Athena
- Experience with Snowflake is good to have
- Hands-on experience implementing different loading strategies like SCD1 and SCD2, table/partition refresh, insert update, and swap partitions (see the SCD2 sketch below)
- Experience in parallel loading and dependency orchestration
- Awareness of scheduling and orchestration tools
- Experience with RDBMS systems and concepts
- Expertise in writing complex SQL queries and developing database components, including creating views, stored procedures, triggers, etc.
- Create test cases and perform unit testing of ETL jobs
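As a hedged illustration of the SCD2 loading strategy named above, this PySpark sketch expires superseded dimension rows and opens new versions for changed keys. The paths and the assumed schemas noted in the comments are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Assumed schemas (hypothetical): the dimension carries
# (customer_id, address, start_date, end_date, is_current) and the daily
# snapshot carries (customer_id, address).
dim = spark.read.parquet("s3a://example-lake/dim_customer/")
snap = spark.read.parquet("s3a://example-lake/stg_customer/")

current = dim.filter(F.col("is_current"))

# Keys whose tracked attribute changed since the last load.
changed_keys = (
    current.alias("d")
    .join(snap.alias("s"), "customer_id")
    .where(F.col("d.address") != F.col("s.address"))
    .select("customer_id")
)

# SCD2 step 1: close out the superseded versions.
expired = (
    current.join(changed_keys, "customer_id", "left_semi")
    .withColumn("is_current", F.lit(False))
    .withColumn("end_date", F.current_date())
)

# SCD2 step 2: open a new version for each changed key (brand-new keys
# would follow this same insert path).
fresh = (
    snap.join(changed_keys, "customer_id", "left_semi")
    .withColumn("start_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
    .withColumn("is_current", F.lit(True))
)

# Untouched current rows and prior history pass through unchanged.
unchanged = current.join(changed_keys, "customer_id", "left_anti")
history = dim.filter(~F.col("is_current"))

result = history.unionByName(unchanged).unionByName(expired).unionByName(fresh)
result.write.mode("overwrite").parquet("s3a://example-lake/dim_customer_v2/")
```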
Posted 1 month ago
12 - 17 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue
Good-to-have skills: Data Engineering, AWS BigData, PySpark
Minimum 12 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring successful project delivery.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Expected to provide solutions to problems that apply across multiple teams
- Lead the application development process
- Coordinate with stakeholders to gather requirements
- Ensure timely project delivery

Professional & Technical Skills:
- Must-have skills: proficiency in AWS Glue, Data Engineering, PySpark, AWS BigData
- Strong understanding of cloud computing principles
- Experience in designing and implementing data pipelines
- Knowledge of ETL processes and data transformation
- Familiarity with data warehousing concepts

Additional Information:
- The candidate should have a minimum of 12 years of experience in AWS Glue
- This position is based at our Bengaluru office
- 15 years of full-time education is required
Posted 1 month ago
6 - 11 years
15 - 30 Lacs
Bengaluru, Hyderabad, Gurgaon
Work from Office
We're Hiring: Sr. AWS Data Engineer – GSPANN Technologies
Locations: Bangalore, Pune, Hyderabad, Gurugram
Experience: 6+ Years | Immediate Joiners Only

Looking for experts in:
- AWS Services: Glue, Redshift, S3, Lambda, Athena
- Big Data: Spark, Hadoop, Kafka
- Languages: Python, SQL, Scala
- ETL & Data Engineering

Apply now: heena.ruchwani@gspann.com
#AWSDataEngineer #HiringNow #DataEngineering #GSPANN
Posted 1 month ago