5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the implementation of data platform components.
- Ensure data platform scalability and performance.
- Conduct regular data platform audits.
- Stay updated on emerging data platform technologies.

Professional & Technical Skills:
- Must have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of cloud-based data platforms.
- Experience with data integration and data modeling.
- Hands-on experience with data pipeline orchestration tools.
- Knowledge of data security and compliance standards.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Qualification: 15 years full time education
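As a rough illustration of the day-to-day work this role describes, the sketch below shows a minimal PySpark job of the kind typically run on Databricks: read a table, aggregate it, and write the result back as a Delta table. The table names and columns are hypothetical, and Delta availability is assumed (it is the default format on Databricks).

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On a Databricks cluster a SparkSession named `spark` already exists;
# getOrCreate() simply returns it when run there.
spark = SparkSession.builder.appName("orders-daily-rollup").getOrCreate()

# Hypothetical source table registered in the metastore.
orders = spark.read.table("raw.orders")

# Simple aggregation: daily order totals per country.
daily_totals = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "country")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("order_count"))
)

# Write the result as a managed Delta table for downstream consumers.
(daily_totals.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("analytics.daily_order_totals"))
```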
Posted 1 month ago
8.0 - 13.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Hello Talented Techie! We provide support in Project Services and Transformation, Digital Solutions and Delivery Management. We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make optimal use of the possibilities of new technologies such as Business Process Management (BPM) and Robotics as enablers for efficient and effective processes.

We are looking for a Sr. AWS Cloud Architect.
- Architect and Design: Develop scalable and efficient data solutions using AWS services such as AWS Glue, Amazon Redshift, S3, Kinesis (Apache Kafka), DynamoDB, Lambda, AWS Glue (Streaming ETL) and EMR.
- Integration: Integrate real-time data from various Siemens organizations into our data lake, ensuring seamless data flow and processing.
- Data Lake Management: Design and manage a large-scale data lake using AWS services like S3, Glue, and Lake Formation.
- Data Transformation: Apply various data transformations to prepare data for analysis and reporting, ensuring data quality and consistency.
- Snowflake Integration: Implement and manage data pipelines to load data into Snowflake, utilizing Iceberg tables for optimal performance and flexibility.
- Performance Optimization: Optimize data processing pipelines for performance, scalability, and cost-efficiency.
- Security and Compliance: Ensure that all solutions adhere to security best practices and compliance requirements.
- Collaboration: Work closely with cross-functional teams, including data engineers, data scientists, and application developers, to deliver end-to-end solutions.
- Monitoring and Troubleshooting: Implement monitoring solutions to ensure the reliability and performance of data pipelines. Troubleshoot and resolve any issues that arise.

You'd describe yourself as:
- Experience: 8+ years of experience in data engineering or cloud solutioning, with a focus on AWS services.
- Technical Skills: Proficiency in AWS services such as AWS API, AWS Glue, Amazon Redshift, S3, Apache Kafka and Lake Formation. Experience with real-time data processing and streaming architectures.
- Big Data Querying Tools: Strong knowledge of big data querying tools (e.g., Hive, PySpark).
- Programming: Strong programming skills in languages such as Python, Java, or Scala for building and maintaining scalable systems.
- Problem-Solving: Excellent problem-solving skills and the ability to troubleshoot complex issues.
- Communication: Strong communication skills, with the ability to work effectively with both technical and non-technical stakeholders.
- Certifications: AWS certifications are a plus.

Create a better #TomorrowWithUs! This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. Find out more about Siemens careers at
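To make the "real-time data into the data lake" responsibility concrete, here is a minimal Spark Structured Streaming sketch that reads JSON events from a Kafka/MSK topic and lands them as partitioned Parquet in S3. Broker addresses, topic, schema, and bucket names are all placeholder assumptions, the spark-sql-kafka connector package must be on the classpath, and a Glue streaming ETL job would wrap equivalent logic rather than this standalone script.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events-stream-to-lake").getOrCreate()

# Hypothetical event schema; real payloads would be defined per source system.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("source", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

# Read the raw stream from Kafka/MSK (broker list and topic are placeholders).
raw = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "b-1.example-msk:9092")
    .option("subscribe", "example.events")
    .option("startingOffsets", "latest")
    .load())

# Parse the JSON payload and keep only the fields we need.
events = (raw
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*"))

# Land the stream in the data lake as partitioned Parquet with checkpointing.
query = (events.writeStream
    .format("parquet")
    .option("path", "s3://example-data-lake/raw/events/")
    .option("checkpointLocation", "s3://example-data-lake/_checkpoints/events/")
    .partitionBy("source")
    .trigger(processingTime="1 minute")
    .start())

query.awaitTermination()
```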
Posted 1 month ago
3.0 - 5.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Educational: Bachelor of Engineering, Bachelor of Technology, Bachelor of Science, Bachelor of Comp. Applications, Master of Engineering, Master of Technology, Master of Science, Master of Comp. Applications
Service Line: Engineering Services

Responsibilities:
A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to lead the engagement effort of providing high-quality and value-adding consulting solutions to customers at different stages - from problem definition to diagnosis to solution design, development and deployment. You will review the proposals prepared by consultants, provide guidance, and analyze the solutions defined for the client business problems to identify any potential risks and issues. You will identify change management requirements and propose a structured approach to the client for managing the change using multiple communication mechanisms. You will also coach and create a vision for the team, provide subject matter training for your focus areas, and motivate and inspire team members through effective and timely feedback and recognition for high performance. You would be a key contributor in unit-level and organizational initiatives with an objective of providing high-quality, value-adding consulting solutions to customers, adhering to the guidelines and processes of the organization. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Good knowledge of software configuration management systems
- Strong business acumen, strategy and cross-industry thought leadership
- Awareness of latest technologies and industry trends
- Logical thinking and problem-solving skills along with an ability to collaborate
- Knowledge of two or three industry domains
- Understanding of the financial processes for various types of projects and the various pricing models available
- Client interfacing skills
- Knowledge of SDLC and agile methodologies
- Project and team management

Technical and Professional:
Technology-Cloud Platform-AWS Database-AWS, Technology-Container Platform-Docker, Technology-Container Platform-Kubernetes

Preferred Skills:
Technology-Cloud Platform-AWS Database-AWS
Technology-Container Platform-Docker
Technology-Container Platform-Kubernetes
Posted 1 month ago
10.0 - 15.0 years
15 - 19 Lacs
Bengaluru
Work from Office
Experience:
- 8+ years of experience in data engineering, specifically in cloud environments like AWS.
- Proficiency in PySpark for distributed data processing and transformation.
- Solid experience with AWS Glue for ETL jobs and managing data workflows.
- Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
- Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.

Technical Skills:
- Proficiency in Python and PySpark for data processing and transformation tasks.
- Deep understanding of ETL concepts and best practices.
- Familiarity with AWS Glue (ETL jobs, Data Catalog, and Crawlers).
- Experience building and maintaining data pipelines with AWS Data Pipeline or similar orchestration tools.
- Familiarity with AWS S3 for data storage and management, including file formats (CSV, Parquet, Avro).
- Strong knowledge of SQL for querying and manipulating relational and semi-structured data.
- Experience with Data Warehousing and Big Data technologies, specifically within AWS.

Additional Skills:
- Experience with AWS Lambda for serverless data processing and orchestration.
- Understanding of AWS Redshift for data warehousing and analytics.
- Familiarity with Data Lakes, Amazon EMR, and Kinesis for streaming data processing.
- Knowledge of data governance practices, including data lineage and auditing.
- Familiarity with CI/CD pipelines and Git for version control.
- Experience with Docker and containerization for building and deploying applications.

- Design and Build Data Pipelines: Design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes.
- ETL Development: Develop and maintain Extract, Transform, and Load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets.
- Data Workflow Automation: Build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs.
- Data Integration: Work with different AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms.
- Optimization and Scaling: Optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services like Lambda, S3, and EC2.
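As a minimal sketch of the ETL development work listed above, the PySpark job below reads raw CSV from S3, applies basic cleansing, and writes partitioned Parquet. The bucket paths and column names are hypothetical; a real AWS Glue job would typically wrap this same Spark logic and receive the paths as job arguments.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("csv-to-parquet-etl").getOrCreate()

# Hypothetical locations; a Glue job would usually receive these as job arguments.
SOURCE = "s3://example-raw-bucket/sales/2024/"
TARGET = "s3://example-curated-bucket/sales_parquet/"

# Extract: read raw CSV files with a header row.
sales = (spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(SOURCE))

# Transform: basic de-duplication, typing and filtering.
cleaned = (sales
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull()))

# Load: write partitioned Parquet for downstream Athena / Redshift Spectrum queries.
(cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet(TARGET))
```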
Posted 1 month ago
3.0 - 8.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Key responsibilities include the following:
- Develop and maintain scalable data pipelines using PySpark; proven experience as a developer with expertise in PySpark is required.
- Good to have knowledge of Ab Initio.
- Experience with distributed computing and parallel processing.
- Proficiency in SQL and experience with database systems.
- Collaborate with data engineers and data scientists to understand and fulfil data processing needs.
- Optimize and troubleshoot existing PySpark applications for performance improvements.
- Write clean, efficient, and well-documented code following best practices.
- Participate in design and code reviews.
- Develop and implement ETL processes to extract, transform, and load data.
- Ensure data integrity and quality throughout the data lifecycle.
- Stay current with the latest industry trends and technologies in big data and cloud computing.
Posted 1 month ago
8.0 - 13.0 years
5 - 9 Lacs
Pune
Work from Office
Responsibilities / Qualifications:
- Candidate must have 5-6 years of IT working experience; at least 3 years of experience on an AWS Cloud environment is preferred.
- Ability to understand the existing system architecture and work towards the target architecture.
- Experience with data profiling activities, discovering data quality challenges and documenting them.
- Experience with development and implementation of a large-scale Data Lake and data analytics platform on the AWS Cloud platform.
- Develop and unit test data pipeline architecture for data ingestion processes using AWS native services.
- Experience with development on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Glue Data Catalog, Lake Formation, Apache Airflow, Lambda, etc.
- Experience with development of a data governance framework, including the management of data, operating model, data policies and standards.
- Experience with orchestration of workflows in an enterprise environment.
- Working experience with Agile methodology.
- Experience working with source code management tools such as AWS CodeCommit or GitHub.
- Experience working with Jenkins or any CI/CD pipelines using AWS services.
- Experience working with an onshore/offshore model and collaboratively working on deliverables.
- Good communication skills to interact with the onshore team.
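To illustrate the orchestration piece mentioned above, here is a minimal Airflow DAG that starts a hypothetical Glue job with boto3 and polls until it completes. The Glue job name, region, and schedule are assumptions; a real deployment might instead use the Amazon provider's Glue operator, but the boto3 calls shown here (start_job_run, get_job_run) are standard.

```python
import time
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

GLUE_JOB_NAME = "curate-sales-data"  # hypothetical Glue job name


def run_glue_job(**_):
    """Start the Glue job and block until it finishes, failing the task on error."""
    glue = boto3.client("glue", region_name="ap-south-1")
    run_id = glue.start_job_run(JobName=GLUE_JOB_NAME)["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=GLUE_JOB_NAME, RunId=run_id)["JobRun"]["JobRunState"]
        if state == "SUCCEEDED":
            return run_id
        if state in ("FAILED", "STOPPED", "TIMEOUT"):
            raise RuntimeError(f"Glue job ended in state {state}")
        time.sleep(30)


with DAG(
    dag_id="daily_sales_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_glue_job", python_callable=run_glue_job)
```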
Posted 1 month ago
7.0 - 9.0 years
3 - 7 Lacs
Chandigarh
Work from Office
Educational: Bachelor of Engineering, Bachelor of Technology, Bachelor of Science, Bachelor of Comp. Applications, Master of Engineering, Master of Technology, Master of Comp. Applications, Master of Science
Service Line: Engineering Services

Responsibilities:
A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions and facilitate deployment resulting in client delight. You will develop a proposal by owning parts of the proposal document and by giving inputs in solution design based on areas of expertise. You will plan the activities of configuration, configure the product as per the design, conduct conference room pilots, and assist in resolving any queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates which suit the customer's budgetary requirements and are in line with the organization's financial guidelines. You will actively lead small projects and contribute to unit-level and organizational initiatives with an objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Ability to develop value-creating strategies and models that enable clients to innovate, drive growth and increase their business profitability
- Good knowledge of software configuration management systems
- Awareness of latest technologies and industry trends
- Logical thinking and problem-solving skills along with an ability to collaborate
- Understanding of the financial processes for various types of projects and the various pricing models available
- Ability to assess the current processes, identify improvement areas and suggest technology solutions
- Knowledge of one or two industry domains
- Client interfacing skills
- Project and team management

Technical and Professional:
Primary skills: Groovy, Java, Postgres, AWS

Preferred Skills:
Technology-Cloud Platform-AWS Database
Technology-Java-Java - ALL
Technology-Ruby on Rails-MySql Postgres Oracle cucumber
Posted 1 month ago
8.0 - 13.0 years
11 - 16 Lacs
Bengaluru
Work from Office
Educational: Bachelor of Engineering, Bachelor of Technology
Service Line: Strategic Technology Group

Responsibilities:
Power Programmer is an important initiative within Global Delivery to develop a team of Full Stack Developers who will be working on complex engineering projects, platforms and marketplaces for our clients using emerging technologies. They will be ahead of the technology curve and will be constantly enabled and trained to be Polyglots. They are go-getters with a drive to solve end-customer challenges and will spend most of their time designing and coding.
- End-to-end contribution to technology-oriented development projects.
- Providing solutions with minimum system requirements and in Agile mode.
- Collaborate with Power Programmers, the Open Source community and Tech User groups.
- Custom development of new platforms and solutions.

Opportunities:
- Work on large-scale digital platforms and marketplaces.
- Work on complex engineering projects using cloud-native architecture.
- Work with innovative Fortune 500 companies on cutting-edge technologies.
- Co-create and develop new products and platforms for our clients.
- Contribute to open source and continuously upskill in latest technology areas.
- Incubating tech user groups.

Technical and Professional:
Must have: Java, Microservices, Spring Boot (with knowledge of cloud implementation - AWS, Azure or GCP, Kubernetes/Docker), designing skills (design patterns), data structures, Angular (or any front-end tech stack)

Preferred Skills:
Technology-Cloud Platform-AWS Database-AWS
Technology-Java-Core Java
Technology-Microservices-Microservices API Management
Technology-UI & Markup Language-Angular JS/Angular 1.x-Angular 2
Technology-Reactive Programming-react JS
Technology-Algorithms and Data Structures-Algorithms and Data Structures - ALL
Technology-Java-Springboot
Technology-Full stack-Java Full stack
Foundational-SDLC-Architecture Designing
Posted 1 month ago
6.0 - 11.0 years
3 - 7 Lacs
Pune
Work from Office
Experience: 7-9 yrs
- Experience in AWS services is a must, such as S3, Lambda, Airflow, Glue, Athena, Lake Formation, Step Functions, etc.
- Experience in programming in Java and Python.
- Experience performing data analysis (not data science) on AWS platforms.

Nice to have:
- Experience in Big Data technologies (Teradata, Snowflake, Spark, Redshift, Kafka, etc.).
- Experience with data management processes on AWS is a huge plus.
- Experience in implementing complex ETL transformations on AWS using Glue.
- Familiarity with relational database environments (Oracle, Teradata, etc.), leveraging databases, tables/views, stored procedures, agent jobs, etc.
Posted 1 month ago
8.0 - 13.0 years
3 - 7 Lacs
Hyderabad
Work from Office
P1-C3-STS
Seeking a developer with good experience in Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation and other AWS serverless resources.
- Can optimize data models for performance and efficiency.
- Able to write SQL queries to support data analysis and reporting.
- Design, implement, and maintain the data architecture for all AWS data services.
- Work with stakeholders to identify business needs and requirements for data-related projects.
- Design and implement ETL processes to load data into the data warehouse.
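As a minimal sketch of the Athena-plus-SQL work described above, the script below submits a query with boto3, polls for completion, and prints the result rows. The database, table, and results bucket are hypothetical placeholders.

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical database, table and result location.
QUERY = """
    SELECT event_date, COUNT(*) AS events
    FROM analytics_db.page_events
    WHERE event_date >= date '2024-01-01'
    GROUP BY event_date
    ORDER BY event_date
"""

execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state != "SUCCEEDED":
    raise RuntimeError(f"Athena query ended in state {state}")

rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
for row in rows:
    print([col.get("VarCharValue") for col in row["Data"]])
```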
Posted 1 month ago
3.0 - 7.0 years
4 - 8 Lacs
Hyderabad
Work from Office
- Talend: Designing, developing, and documenting existing Talend ETL processes, technical architecture, data pipelines, and performance scaling, using tools to integrate Talend data and ensure data quality in a big data environment.
- AWS / Snowflake: Design, develop, and maintain data models using SQL and Snowflake / AWS Redshift-specific features.
- Collaborate with stakeholders to understand the requirements of the data warehouse.
- Implement data security, privacy, and compliance measures.
- Perform data analysis, troubleshoot data issues, and provide technical support to end-users.
- Develop and maintain data warehouse and ETL processes, ensuring data quality and integrity.
- Stay current with new AWS/Snowflake services and features and recommend improvements to existing architecture.
- Design and implement scalable, secure, and cost-effective cloud solutions using AWS / Snowflake services.
- Collaborate with cross-functional teams to understand requirements and provide technical guidance.
Posted 1 month ago
8.0 - 13.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Key Responsibilities:
- Set up and maintain monitoring dashboards for ETL jobs using Datadog, including metrics, logs, and alerts.
- Monitor daily ETL workflows and proactively detect and resolve data pipeline failures or performance issues.
- Create Datadog monitors for job status (success/failure), job duration, resource utilization, and error trends.
- Work closely with Data Engineering teams to onboard new pipelines and ensure observability best practices.
- Integrate Datadog with other tools.
- Conduct root cause analysis of ETL failures and performance bottlenecks.
- Tune thresholds, baselines, and anomaly detection settings in Datadog to reduce false positives.
- Document incident handling procedures and contribute to improving overall ETL monitoring maturity.
- Participate in on-call rotations or scheduled support windows to manage ETL health.

Required Skills & Qualifications:
- 3+ years of experience in ETL/data pipeline monitoring, preferably in a cloud or hybrid environment.
- Proficiency in using Datadog for metrics, logging, alerting, and dashboards.
- Strong understanding of ETL concepts and tools (e.g., Airflow, Informatica, Talend, AWS Glue, or dbt).
- Familiarity with SQL and querying large datasets.
- Experience working with Python, Shell scripting, or Bash for automation and log parsing.
- Understanding of cloud platforms (AWS/GCP/Azure) and services like S3, Redshift, BigQuery, etc.
- Knowledge of CI/CD and DevOps principles related to data infrastructure monitoring.

Preferred Qualifications:
- Experience with distributed tracing and APM in Datadog.
- Prior experience monitoring Spark, Kafka, or streaming pipelines.
- Familiarity with ticketing tools (e.g., Jira, ServiceNow) and incident management workflows.
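As a rough sketch of how ETL runs can be made visible to Datadog monitors, the snippet below pushes a duration gauge, a failure count, and an error event using the official `datadog` Python package. The metric names, tags, and keys are assumptions chosen for illustration; real setups often ship these through the Datadog agent/StatsD instead.

```python
import time

from datadog import initialize, api

# Assumes the `datadog` Python package and valid API/app keys; values are placeholders.
initialize(api_key="<DD_API_KEY>", app_key="<DD_APP_KEY>")


def report_etl_run(job_name: str, duration_s: float, succeeded: bool) -> None:
    """Push job duration and status so dashboards and monitors can alert on them."""
    now = time.time()
    tags = [f"job:{job_name}", f"status:{'success' if succeeded else 'failure'}"]

    # Gauge for job duration; a monitor can alert when it exceeds a baseline.
    api.Metric.send(metric="etl.job.duration_seconds",
                    points=[(now, duration_s)], tags=tags, type="gauge")

    # Count of failures; a monitor on this series catches broken pipelines.
    api.Metric.send(metric="etl.job.failures",
                    points=[(now, 0 if succeeded else 1)], tags=tags, type="count")

    if not succeeded:
        api.Event.create(title=f"ETL job failed: {job_name}",
                         text="Check the job logs for the failing step.",
                         alert_type="error", tags=tags)


# Example usage after a (hypothetical) pipeline run:
report_etl_run("orders_daily_load", duration_s=842.0, succeeded=False)
```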
Posted 1 month ago
6.0 - 11.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Sr. Developer with special emphasis and 8 to 10 years of experience in Python and PySpark, along with hands-on experience with AWS data components like AWS Glue, Athena, etc. Should also have good knowledge of data warehouse tools to understand the existing system. The candidate should also have experience with Data Lake, Teradata and Snowflake, and should be good at Terraform.
- 8-10 years of experience in designing and developing Python and PySpark applications.
- Creating or maintaining data lake solutions using Snowflake, Teradata and other data warehouse tools.
- Good knowledge of and hands-on experience with AWS Glue, Athena, etc.
- Sound knowledge of all data lake concepts and ability to work on data migration projects.
- Providing ongoing support and maintenance for applications, including troubleshooting and resolving issues.
- Expertise in practices like Agile, peer reviews and CI/CD pipelines.
Posted 1 month ago
5.0 - 10.0 years
8 - 12 Lacs
Hyderabad
Work from Office
Data Engineer. Must have 9+ years of experience in the below-mentioned skills.
Must Have: Big Data concepts, Python (core Python - able to write code), SQL, Shell Scripting, AWS S3
Good to Have: Event-driven/AWS SQS, Microservices, API Development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora
Posted 1 month ago
4.0 - 9.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Skill: Data Engineer
Role: T3, T2
Key responsibility: Data Engineer. Must have 5+ years of experience in the below-mentioned skills.
Must Have: Big Data concepts, Python (core Python - able to write code), SQL, Shell Scripting, AWS S3
Good to Have: Event-driven/AWS SQS, Microservices, API Development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora
Posted 1 month ago
4.0 - 9.0 years
6 - 11 Lacs
Hyderabad
Work from Office
Responsibilities:
- Design, develop, and maintain ETL processes using Talend.
- Manage and optimize data pipelines on Amazon Redshift.
- Implement data transformation workflows using DBT (Data Build Tool).
- Write efficient, reusable, and reliable code in PySpark.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver effective solutions.
- Ensure data quality and integrity through rigorous testing and validation.
- Stay updated with the latest industry trends and technologies in data engineering.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer or similar role.
- High proficiency in Talend.
- Strong experience with Amazon Redshift.
- Expertise in DBT and PySpark.
- Experience with data modeling, ETL processes, and data warehousing.
- Familiarity with cloud platforms and services.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.

Preferred Qualifications:
- Experience with other data engineering tools and frameworks.
- Knowledge of machine learning frameworks and libraries.
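One common pattern behind the PySpark-plus-Redshift responsibilities above is to stage transformed data as Parquet in S3 and then load it with a Redshift COPY. The sketch below illustrates that flow; bucket paths, table names, and the IAM role ARN are hypothetical, and the COPY statement would normally be executed by a SQL client, dbt hook, or orchestration step rather than printed.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stage-orders-for-redshift").getOrCreate()

STAGE_PATH = "s3://example-stage-bucket/orders_clean/"  # hypothetical staging location

# Transform raw orders into the shape the warehouse model expects.
orders = spark.read.parquet("s3://example-raw-bucket/orders/")
clean = (orders
    .filter(F.col("status") != "cancelled")
    .select("order_id", "customer_id", "order_ts", "amount"))

clean.write.mode("overwrite").parquet(STAGE_PATH)

# Redshift then loads the staged files (run this via your SQL client or scheduler).
copy_sql = f"""
COPY analytics.orders_clean
FROM '{STAGE_PATH}'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
FORMAT AS PARQUET;
"""
print(copy_sql)
```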
Posted 1 month ago
4.0 - 9.0 years
4 - 8 Lacs
Gurugram
Work from Office
Data Engineer
Location: PAN India
Work mode: Hybrid
Work timing: 2 PM to 11 PM

Primary Skill - Data Engineer:
- Experience in data engineering, with a proven focus on data ingestion and extraction using Python/PySpark.
- Extensive AWS experience is mandatory, with proficiency in Glue, Lambda, SQS, SNS, AWS IAM, AWS Step Functions, S3, and RDS (Oracle, Aurora Postgres).
- 4+ years of experience working with both relational and non-relational/NoSQL databases is required.
- Strong SQL experience is necessary, demonstrating the ability to write complex queries from scratch. Experience in Redshift is required, along with other SQL DB experience.
- Strong scripting experience with the ability to build intricate data pipelines using AWS serverless architecture, and an understanding of building an end-to-end data pipeline.
- Strong understanding of Kinesis, Kafka, CDK. Experience with Kafka and ECS is also required.
- A strong understanding of data concepts related to data warehousing, business intelligence (BI), data security, data quality, and data profiling is required.
- Experience in Node.js and CDK.

JD / Responsibilities:
- Lead the architectural design and development of a scalable, reliable, and flexible metadata-driven data ingestion and extraction framework on AWS using Python/PySpark.
- Design and implement a customizable data processing framework using Python/PySpark. This framework should be capable of handling diverse scenarios and evolving data processing requirements.
- Implement data pipelines for data ingestion, transformation and extraction, leveraging AWS Cloud services.
- Seamlessly integrate a variety of AWS services, including S3, Glue, Kafka, Lambda, SQL, SNS, Athena, EC2, RDS (Oracle, Postgres, MySQL) and AWS Crawler, to construct a highly scalable and reliable data ingestion and extraction pipeline.
- Facilitate configuration and extensibility of the framework to adapt to evolving data needs and processing scenarios.
- Develop and maintain rigorous data quality checks and validation processes to safeguard the integrity of ingested data.
- Implement robust error handling, logging, monitoring, and alerting mechanisms to ensure the reliability of the entire data pipeline.

Qualifications - Must Have:
- Over 6 years of hands-on experience in data engineering, with a proven focus on data ingestion and extraction using Python/PySpark.
- Extensive AWS experience is mandatory, with proficiency in Glue, Lambda, SQS, SNS, AWS IAM, AWS Step Functions, S3, and RDS (Oracle, Aurora Postgres).
- 4+ years of experience working with both relational and non-relational/NoSQL databases is required.
- Strong SQL experience is necessary, demonstrating the ability to write complex queries from scratch.
- Strong working experience in Redshift is required, along with other SQL DB experience.
- Strong scripting experience with the ability to build intricate data pipelines using AWS serverless architecture.
- Complete understanding of building an end-to-end data pipeline.

Nice to have:
- Strong understanding of Kinesis, Kafka, CDK.
- A strong understanding of data concepts related to data warehousing, business intelligence (BI), data security, data quality, and data profiling.
- Experience in Node.js and CDK.
- Experience with Kafka and ECS.
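To make "metadata-driven ingestion" concrete, here is a minimal Lambda handler sketch: it reacts to S3 object-created notifications, looks up per-dataset configuration in a hypothetical DynamoDB table, and starts the matching Glue job. Table name, job names, and argument keys are all illustrative assumptions, not part of the posting.

```python
import json

import boto3

dynamodb = boto3.resource("dynamodb")
glue = boto3.client("glue")

# Hypothetical table holding per-dataset ingestion configuration.
CONFIG_TABLE = dynamodb.Table("ingestion_dataset_config")


def lambda_handler(event, context):
    """Triggered by S3 object-created notifications; routes each file to its Glue job."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # The first prefix segment identifies the dataset, e.g. "orders/2024/file.csv".
        dataset = key.split("/", 1)[0]
        config = CONFIG_TABLE.get_item(Key={"dataset": dataset}).get("Item")
        if config is None:
            print(f"No ingestion config for dataset '{dataset}', skipping {key}")
            continue

        run = glue.start_job_run(
            JobName=config["glue_job_name"],
            Arguments={
                "--source_path": f"s3://{bucket}/{key}",
                "--target_path": config["target_path"],
            },
        )
        print(json.dumps({"dataset": dataset, "job_run_id": run["JobRunId"]}))

    return {"status": "ok"}
```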
Posted 1 month ago
2.0 - 5.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Seeking a skilled Data Engineer to work on cloud-based data pipelines and analytics platforms. The ideal candidate will have hands-on experience in PySpark and AWS, with proficiency in designing Data Lakes and working with modern data orchestration tools.
Posted 1 month ago
15.0 - 20.0 years
10 - 14 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Talend ETL
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will be pivotal in driving the success of application projects and fostering a collaborative environment among team members and other departments.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate training and development opportunities for team members to enhance their skills.
- Monitor project progress and implement adjustments as necessary to meet deadlines.

Professional & Technical Skills:
- Must have skills: Proficiency in Talend ETL.
- Good to have skills: Experience with data integration tools and methodologies.
- Strong understanding of data warehousing concepts and practices.
- Experience in performance tuning and optimization of ETL processes.
- Familiarity with cloud-based data solutions and architectures.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Talend ETL.
- This position is based at our Pune office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 1 month ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
JD for Data Engineer - Python:
- At least 5 to 8 years of experience in AWS Python programming; able to design, build, test and deploy the code.
- Should have worked on Lambda-based API development.
- Should have experience using the following AWS services: AWS SQS, AWS MSK, AWS RDS Aurora DB, Boto3.
- Very strong SQL knowledge is a must; should be able to understand and build complex queries.
- Will work closely with the enterprise architect and other client teams at onsite as needed.
- Experience building solutions using Kafka would be a good value addition (optional).
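As a minimal sketch of a Lambda-based API of the kind this role describes, the handler below validates an API Gateway proxy request and enqueues it to SQS with boto3; a downstream consumer would persist it to Aurora. The queue URL, payload fields, and status codes are illustrative assumptions.

```python
import json
import os
import uuid

import boto3

sqs = boto3.client("sqs")

# Hypothetical queue; in practice this comes from the function's environment configuration.
QUEUE_URL = os.environ.get(
    "ORDER_QUEUE_URL",
    "https://sqs.ap-south-1.amazonaws.com/123456789012/orders",
)


def lambda_handler(event, context):
    """API Gateway proxy handler: validate the payload and enqueue it for async processing."""
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    if "customer_id" not in body or "items" not in body:
        return {"statusCode": 422,
                "body": json.dumps({"error": "customer_id and items are required"})}

    message = {"order_id": str(uuid.uuid4()), **body}
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(message))

    # A downstream consumer (another Lambda or ECS task) would write this record to Aurora.
    return {"statusCode": 202, "body": json.dumps({"order_id": message["order_id"]})}
```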
Posted 1 month ago
8.0 - 13.0 years
4 - 8 Lacs
Mumbai
Work from Office
Sr. Developer with special emphasis and 8 to 10 years of experience in Python and PySpark, along with hands-on experience with AWS data components like AWS Glue, Athena, etc. Should also have good knowledge of data warehouse tools to understand the existing system. The candidate should also have experience with Data Lake, Teradata and Snowflake, and should be good at Terraform.
- 8-10 years of experience in designing and developing Python and PySpark applications.
- Creating or maintaining data lake solutions using Snowflake, Teradata and other data warehouse tools.
- Good knowledge of and hands-on experience with AWS Glue, Athena, etc.
- Sound knowledge of all data lake concepts and ability to work on data migration projects.
- Providing ongoing support and maintenance for applications, including troubleshooting and resolving issues.
- Expertise in practices like Agile, peer reviews and CI/CD pipelines.
Posted 1 month ago
8.0 - 13.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Experience:
- 8 years of experience in data engineering, specifically in cloud environments like AWS.
- Proficiency in PySpark for distributed data processing and transformation.
- Solid experience with AWS Glue for ETL jobs and managing data workflows.
- Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
- Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.

Technical Skills:
- Proficiency in Python and PySpark for data processing and transformation tasks.
- Deep understanding of ETL concepts and best practices.
- Familiarity with AWS Glue (ETL jobs, Data Catalog, and Crawlers).
- Experience building and maintaining data pipelines with AWS Data Pipeline or similar orchestration tools.
- Familiarity with AWS S3 for data storage and management, including file formats (CSV, Parquet, Avro).
- Strong knowledge of SQL for querying and manipulating relational and semi-structured data.
- Experience with Data Warehousing and Big Data technologies, specifically within AWS.

Additional Skills:
- Experience with AWS Lambda for serverless data processing and orchestration.
- Understanding of AWS Redshift for data warehousing and analytics.
- Familiarity with Data Lakes, Amazon EMR, and Kinesis for streaming data processing.
- Knowledge of data governance practices, including data lineage and auditing.
- Familiarity with CI/CD pipelines and Git for version control.
- Experience with Docker and containerization for building and deploying applications.

- Design and Build Data Pipelines: Design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes.
- ETL Development: Develop and maintain Extract, Transform, and Load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets.
- Data Workflow Automation: Build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs.
- Data Integration: Work with different AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms.
- Optimization and Scaling: Optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services like Lambda, S3, and EC2.
Posted 1 month ago
3.0 - 8.0 years
10 - 14 Lacs
Gurugram
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Data Analytics
Good to have skills: Microsoft SQL Server, AWS Redshift
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also engage in problem-solving discussions, providing insights and solutions to enhance application performance and user experience. Additionally, you will mentor team members, fostering a collaborative environment that encourages innovation and continuous improvement.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Analyze application performance metrics and implement improvements.

Professional & Technical Skills:
- Must have skills: Proficiency in Data Analytics.
- Good to have skills: Experience with Microsoft SQL Server, AWS Redshift.
- Strong analytical skills to interpret complex data sets.
- Experience with data visualization tools to present findings effectively.
- Ability to work with large datasets and perform data cleaning and transformation.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Data Analytics.
- This position is based at our Gurugram office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 1 month ago
7.0 - 12.0 years
4 - 8 Lacs
Pune
Work from Office
1. ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica.
2. Big Data: Experience with big data platforms such as Hadoop, Hive or Snowflake for data storage and processing.
3. Data Warehousing & Database Management: Understanding of data warehousing concepts; relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design.
4. Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures.
5. Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala.
6. DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management.

- Ab Initio: Experience developing CoOp graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, ExpressIT, Data Profiler and ConductIT, ControlCenter, ContinuousFlows.
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs.
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls.
- Containerization: Fair understanding of containerization platforms like Docker, Kubernetes.
- File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, Delta.
- Others: Basics of job schedulers like Autosys; basics of entitlement management.
- Certification on any of the above topics would be an advantage.
Posted 1 month ago
14.0 - 19.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Skill: Data Engineer
Role: T2, T1
Key responsibility: Data Engineer. Must have 9+ years of experience in the below-mentioned skills.
Must Have: Big Data concepts, Python (core Python - able to write code), SQL, Shell Scripting, AWS S3
Good to Have: Event-driven/AWS SQS, Microservices, API Development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora
Posted 1 month ago