5.0 - 8.0 years
18 - 30 Lacs
Hyderabad
Work from Office
AWS Data Engineer with Glue, Terraform, and Business Intelligence (Tableau) development
- Design, develop & maintain AWS data pipelines using Glue, Lambda & Redshift (a Glue job sketch follows)
- Collaborate with the BI team on ETL processes & dashboard creation with Tableau
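The Glue pipeline work described above typically starts from a job skeleton like the one below — a minimal sketch assuming a hypothetical `raw_orders` table in a `sales_db` Glue Catalog database and an illustrative S3 output path; none of these names come from the posting.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard AWS Glue job boilerplate: resolve arguments, build contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# A trivial transform: drop obviously bad rows before loading downstream.
cleaned = orders.toDF().filter("order_id IS NOT NULL")

# Write curated output to S3 as Parquet; loading into Redshift would
# typically follow via a COPY command or a Glue JDBC connection.
cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```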
Posted 6 days ago
6.0 - 10.0 years
7 - 14 Lacs
Bengaluru
Hybrid
Roles and Responsibilities
- Architect and incorporate an effective data framework enabling an end-to-end data solution.
- Understand business needs, use cases and drivers for insights, and translate them into detailed technical specifications.
- Create epics, features and user stories with clear acceptance criteria for execution and delivery by the data engineering team.
- Create scalable and robust data solution designs that incorporate governance, security and compliance aspects.
- Develop and maintain logical and physical data models and work closely with data engineers, data analysts and data testers for their successful implementation.
- Analyze, assess and design data integration strategies across various sources and platforms.
- Create project plans and timelines while monitoring and mitigating risks and controlling progress of the project.
- Conduct daily scrum with the team with a clear focus on meeting sprint goals and timely resolution of impediments.
- Act as a liaison between technical teams and business stakeholders.
- Guide and mentor the team on best practices for data solutions and delivery frameworks.
- Actively work with, facilitate and support the stakeholders/clients to complete User Acceptance Testing, and ensure there is strong adoption of the data products after launch.
- Define and measure KPIs/KRAs for features, ensuring the data roadmap is verified through measurable outcomes.
Prerequisites
- 5 to 8 years of professional, hands-on experience building end-to-end data solutions on cloud-based data platforms, including 2+ years working in a Data Architect role.
- Proven hands-on experience building pipelines for data lakes, data lakehouses, data warehouses and data visualization solutions.
- Sound understanding of modern data technologies like Databricks, Snowflake, Data Mesh and Data Fabric.
- Experience managing the data life cycle in a fast-paced, Agile/Scrum environment.
- Excellent spoken and written communication, receptive listening skills, and the ability to convey complex ideas in a clear, concise fashion to technical and non-technical audiences.
- Ability to collaborate and work effectively with cross-functional teams, project stakeholders and end users for quality deliverables within stipulated timelines.
- Ability to manage, coach and mentor a team of data engineers, data testers and data analysts.
- Strong process driver with expertise in the Agile/Scrum framework on tools like Azure DevOps, Jira or Confluence.
- Exposure to Machine Learning, Gen AI and modern AI-based solutions.
Experience
Technical Lead - Data Analytics with 6+ years of overall experience, of which 2+ years is in data architecture.
Education
Engineering degree from a Tier 1 institute preferred.
Compensation
The compensation structure will be as per industry standards.
Posted 1 week ago
5.0 - 8.0 years
1 - 1 Lacs
Hyderabad
Hybrid
Location: Hyderabad (Hybrid)
Key Responsibilities:
1. Data Engineering (AWS Glue & AWS Services): Design, develop, and optimize ETL pipelines using AWS Glue (PySpark). Manage and transform structured and unstructured data from multiple sources into AWS S3, Redshift, or Snowflake. Work with AWS Lambda, S3, Athena, and Redshift for data orchestration (see the sketch after this posting). Implement data lake and data warehouse solutions in AWS.
2. Infrastructure as Code (Terraform & AWS Services): Design and deploy AWS infrastructure using Terraform. Automate resource provisioning and manage Infrastructure as Code. Monitor and optimize cloud costs, security, and compliance. Maintain and improve CI/CD pipelines for deploying data applications.
3. Business Intelligence (Tableau Development & Administration): Develop interactive dashboards and reports using Tableau. Connect Tableau with AWS data sources such as Redshift, Athena, and Snowflake. Optimize SQL queries and extracts for performance efficiency. Manage Tableau Server administration, including security, access controls, and performance tuning.
Required Skills & Experience:
- 5+ years of experience in AWS data engineering with Glue, Redshift, and S3.
- Strong expertise in ETL development using AWS Glue (PySpark, Scala, or Python).
- Experience with Terraform for AWS infrastructure automation.
- Proficiency in SQL, Python, or Scala for data processing.
- Hands-on experience in Tableau development & administration.
- Strong understanding of cloud security, IAM roles, and permissions.
- Experience with CI/CD pipelines (Git, Jenkins, AWS CodePipeline, etc.).
- Knowledge of data modeling, warehousing, and performance optimization.
Please share your resume to: +91 9361912009
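As a sketch of the "data orchestration" piece referenced above: a Lambda handler can kick off a Glue ETL job when a new file lands in S3. The function name, Glue job name, and argument key below are hypothetical placeholders, not taken from the posting.

```python
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Start a Glue ETL job for each S3 object-created event."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Pass the newly arrived object to a (hypothetical) Glue job as
        # job arguments; the job reads them via getResolvedOptions.
        response = glue.start_job_run(
            JobName="curate-orders-job",
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        print(f"Started JobRunId={response['JobRunId']} for s3://{bucket}/{key}")
```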
Posted 1 week ago
9.0 - 12.0 years
35 - 40 Lacs
Bengaluru
Work from Office
We are seeking an experienced AWS Architect with a strong background in designing and implementing cloud-native data platforms. The ideal candidate should possess deep expertise in AWS services such as S3, Redshift, Aurora, Glue, and Lambda, along with hands-on experience in data engineering and orchestration tools. Strong communication and stakeholder management skills are essential for this role.
Key Responsibilities
- Design and implement end-to-end data platforms leveraging AWS services.
- Lead architecture discussions and ensure scalability, reliability, and cost-effectiveness.
- Develop and optimize solutions using Redshift, including stored procedures, federated queries, and the Redshift Data API (see the sketch after this list).
- Utilize AWS Glue and Lambda functions to build ETL/ELT pipelines.
- Write efficient Python code and data frame transformations, along with unit testing.
- Manage orchestration tools such as AWS Step Functions and Airflow.
- Perform Redshift performance tuning to ensure optimal query execution.
- Collaborate with stakeholders to understand requirements and communicate technical solutions clearly.
Required Skills & Qualifications
- Minimum 9 years of IT experience with proven AWS expertise.
- Hands-on experience with AWS services: S3, Redshift, Aurora, Glue, and Lambda.
- Mandatory experience working with AWS Redshift, including stored procedures and performance tuning.
- Experience building end-to-end data platforms on AWS.
- Proficiency in Python, especially working with data frames and writing testable, production-grade code.
- Familiarity with orchestration tools like Airflow or AWS Step Functions.
- Excellent problem-solving skills and a collaborative mindset.
- Strong verbal and written communication and stakeholder management abilities.
Nice to Have
- Experience with CI/CD for data pipelines.
- Knowledge of AWS Lake Formation and data governance practices.
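For the Redshift Data API work mentioned above, a minimal sketch using boto3's `redshift-data` client is shown below; the cluster identifier, database, user, and stored procedure are hypothetical stand-ins.

```python
import time
import boto3

client = boto3.client("redshift-data")

# Submit a statement asynchronously against a (hypothetical) cluster;
# the Data API avoids managing JDBC/ODBC connections from Lambda, etc.
submitted = client.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql="CALL sales.refresh_daily_summary();",  # a stored procedure call
)

# Poll until the statement finishes; production code would use backoff
# and handle FAILED/ABORTED states more carefully.
while True:
    status = client.describe_statement(Id=submitted["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(2)

print(f"Statement ended with status: {status}")
```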
Posted 1 week ago
5.0 - 8.0 years
15 - 18 Lacs
Hyderabad, Bengaluru
Hybrid
Cloud and AWS Expertise: In-depth knowledge of AWS services related to data engineering: EC2, S3, RDS, DynamoDB, Redshift, Glue, Lambda, Step Functions, Kinesis, Iceberg, EMR, and Athena. Strong understanding of cloud architecture and best practices for high availability and fault tolerance.
Data Engineering Concepts: Expertise in ETL/ELT processes, data modeling, and data warehousing. Knowledge of data lakes, data warehouses, and big data processing frameworks like Apache Hadoop and Spark. Proficiency in handling structured and unstructured data.
Programming and Scripting: Proficiency in Python, PySpark and SQL for data manipulation and pipeline development. Expertise in working with data warehousing solutions like Redshift.
Posted 1 week ago
4.0 - 9.0 years
12 - 22 Lacs
Gurugram
Work from Office
To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9
As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.
Role & responsibilities
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions (see the sketch after this list)
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues
Preferred candidate profile
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration
Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
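A minimal illustration of the serverless-workflow item above: defining and starting a Step Functions state machine with boto3. The state machine name, Lambda ARN, IAM role ARN, and state names are hypothetical.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# A two-state workflow in Amazon States Language: run an ETL Lambda,
# then succeed. All ARNs below are placeholders.
definition = {
    "StartAt": "RunEtl",
    "States": {
        "RunEtl": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:curate-orders",
            "Next": "Done",
        },
        "Done": {"Type": "Succeed"},
    },
}

machine = sfn.create_state_machine(
    name="etl-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn-etl-role",
)

# Kick off one execution with a small JSON payload as input.
execution = sfn.start_execution(
    stateMachineArn=machine["stateMachineArn"],
    input=json.dumps({"run_date": "2024-01-01"}),
)
print(execution["executionArn"])
```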
Posted 1 week ago
4.0 - 9.0 years
12 - 22 Lacs
Gurugram, Bengaluru
Work from Office
To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9
As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.
Role & responsibilities
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift (see the Athena sketch after this list)
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues
Preferred candidate profile
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration
Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
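Since the Step Functions item was sketched under the previous posting, here is a minimal example of the Athena analytics item instead: submitting a query with boto3 and polling for completion. The database, table, and results bucket are hypothetical.

```python
import time
import boto3

athena = boto3.client("athena")

# Submit a query against a (hypothetical) Glue Catalog database;
# Athena writes result files to the given S3 output location.
start = athena.start_query_execution(
    QueryString="SELECT order_date, count(*) AS orders "
                "FROM curated.orders GROUP BY 1",
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# Poll for completion; real pipelines would add timeouts and error handling.
query_id = start["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print(f"Query {query_id} finished with state {state}")
```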
Posted 1 week ago
2.0 - 7.0 years
12 - 18 Lacs
Vadodara
Work from Office
- Experience of implementing and delivering data solutions and pipelines on the AWS Cloud Platform; design, implement, and maintain the data architecture for all AWS data services
- A strong understanding of data modelling, data structures, databases (Redshift), and ETL processes
- Work with stakeholders to identify business needs and requirements for data-related projects
- Strong SQL and/or Python or PySpark knowledge
- Creating data models that can be used to extract information from various sources and store it in a usable format
- Optimize data models for performance and efficiency (see the sketch after this list)
- Write SQL queries to support data analysis and reporting
- Monitor and troubleshoot data pipelines
- Collaborate with software engineers to design and implement data-driven features
- Perform root cause analysis on data issues
- Maintain documentation of the data architecture and ETL processes
- Identify opportunities to improve performance by improving database structure or indexing methods
- Maintain existing applications by updating existing code or adding new features to meet new requirements
- Design and implement security measures to protect data from unauthorized access or misuse
- Recommend infrastructure changes to improve capacity or performance
- Experience in the process industry
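On optimizing Redshift data models, distribution and sort keys are the usual levers. Below is a hedged sketch using the psycopg2 driver (Redshift speaks the Postgres wire protocol); the table design and connection details are illustrative assumptions.

```python
import psycopg2

# Connection details are placeholders for a real Redshift endpoint.
conn = psycopg2.connect(
    host="analytics-cluster.example.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="...",
)

ddl = """
CREATE TABLE IF NOT EXISTS sales.fact_orders (
    order_id     BIGINT,
    customer_id  BIGINT,
    order_date   DATE,
    amount       DECIMAL(12, 2)
)
DISTKEY (customer_id)      -- co-locate rows joined on customer_id
SORTKEY (order_date);      -- speed up date-range scans
"""

# The connection context manager commits the transaction on success.
with conn, conn.cursor() as cur:
    cur.execute(ddl)
```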
Posted 1 week ago
8.0 - 13.0 years
20 - 32 Lacs
Bengaluru
Hybrid
Job Title: Senior Data Engineer
Experience: 9+ years
Location: Whitefield, Bangalore
Notice Period: Serving or immediate joiners
Role & Responsibilities:
- Design and implement scalable data pipelines for ingesting, transforming, and loading data from diverse sources and tools.
- Develop robust data models to support analytical and reporting requirements.
- Automate data engineering processes using appropriate scripting languages and frameworks.
- Collaborate with engineers, process managers, and data scientists to gather requirements and deliver effective data solutions.
- Serve as a liaison between engineering and business teams on all data-related initiatives.
- Automate monitoring and alerting for data pipelines, products, and dashboards (a sketch follows this posting); provide support for issue resolution, including on-call responsibilities.
- Write optimized and modular SQL queries, including view and table creation as required.
- Define and implement best practices for data validation, ensuring alignment with enterprise standards.
- Manage QA data environments, including test data creation and maintenance.
Qualifications:
- 9+ years of experience in data engineering or a related field.
- Proven experience with Agile software development practices.
- Strong SQL skills and experience working with both RDBMS and NoSQL databases.
- Hands-on experience with cloud-based data warehousing platforms such as Snowflake and Amazon Redshift.
- Proficiency with cloud technologies, preferably AWS.
- Deep knowledge of data modeling, data warehousing, and data lake concepts.
- Practical experience with ETL/ELT tools and frameworks.
- 5+ years of experience in application development using Python, SQL, Scala, or Java.
- Experience working with real-time data streaming and associated platforms.
Note: The professional should be based out of Bangalore, as one technical round has to be taken face-to-face at the Bellandur, Bangalore office.
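One plausible shape for the monitoring-and-alerting responsibility above: publish a custom CloudWatch metric from the pipeline and alarm on it. The namespace, metric, pipeline name, and SNS topic are hypothetical.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Emit a custom metric at the end of a pipeline run, e.g. rows loaded.
cloudwatch.put_metric_data(
    Namespace="DataPipelines",
    MetricData=[{
        "MetricName": "RowsLoaded",
        "Dimensions": [{"Name": "Pipeline", "Value": "orders_daily"}],
        "Value": 125_000,
        "Unit": "Count",
    }],
)

# Alarm when the daily load is suspiciously small; the alarm notifies
# a (hypothetical) SNS topic that pages the on-call engineer.
cloudwatch.put_metric_alarm(
    AlarmName="orders-daily-low-volume",
    Namespace="DataPipelines",
    MetricName="RowsLoaded",
    Dimensions=[{"Name": "Pipeline", "Value": "orders_daily"}],
    Statistic="Sum",
    Period=86400,
    EvaluationPeriods=1,
    Threshold=1000,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:data-oncall"],
)
```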
Posted 1 week ago
6.0 - 10.0 years
10 - 15 Lacs
Noida, Kolkata, Bengaluru
Hybrid
We are hiring IT professionals; for more details, please refer to the job descriptions below.
Job Type: Contractual
Location: PAN India (candidates from any city in India are welcome to apply)
Client: MNC

Position 1: Oracle DBA with AWS & Redshift Experience
Experience Required: 6+ years | Employment Type: Contract | Work Mode: Hybrid | Location: PAN India
Job Summary: We are seeking a highly skilled and experienced Oracle Database Administrator (DBA) with strong expertise in AWS cloud services and Amazon Redshift. The ideal candidate will be responsible for managing, monitoring, and optimizing database environments to ensure high availability, performance, and security in a hybrid cloud setup.
Key Responsibilities:
- Administer and maintain Oracle databases (versions 12c/19c and above)
- Design, implement, and manage cloud-based database solutions using AWS (RDS, EC2, S3, IAM, etc.)
- Manage and optimize Amazon Redshift data warehouse solutions
- Perform performance tuning, backup & recovery, patching, and upgrade activities
- Implement and maintain database security, integrity, and high-availability solutions
- Handle database migrations from on-premise to cloud (AWS)
- Automate database processes using scripting (Shell, Python, etc.)
- Collaborate with development, DevOps, and infrastructure teams
- Monitor system health and performance, and proactively address issues
Required Skills:
- 6+ years of hands-on experience as an Oracle DBA
- Strong experience with AWS services related to database hosting and management
- Expertise in Amazon Redshift architecture, performance tuning, and data loading strategies
- Proficiency in SQL, PL/SQL, and scripting languages
- Solid understanding of database backup strategies and disaster recovery
- Experience with tools like Oracle Enterprise Manager, CloudWatch, and other monitoring tools
- Excellent analytical and troubleshooting skills
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Database - Specialty, Solutions Architect)
- Experience with data migration tools and methodologies
- Familiarity with PostgreSQL or other relational databases is a plus

Position 2: Flexera Architect
Experience: 5+ years | Employment Type: Contract | Work Mode: Hybrid | Location: PAN India
Key Responsibilities:
- Architect and implement Flexera solutions including Flexera One, FlexNet Manager Suite, and AdminStudio.
- Collaborate with IT, procurement, and business teams to understand software asset life cycle requirements and translate them into Flexera-based solutions.
- Optimize software usage and licensing costs through in-depth license position analysis, usage tracking, and compliance reporting.
- Define policies and workflows for Software Asset Management (SAM) and drive adoption across the organisation.
- Develop and maintain integration with CMDB, SCCM, ServiceNow, and other ITSM/ITOM tools.
- Design and deliver dashboards, reports, and KPIs using Flexera's analytics tools.
- Ensure compliance with software vendor audits and licensing requirements.
- Provide subject matter expertise during audits, renewals, and true-ups for vendors such as Microsoft, Oracle, IBM, Adobe, etc.
- Train internal stakeholders and support teams on Flexera tools and SAM practices.
- Troubleshoot and resolve issues related to Flexera configurations, agents, and data collection.
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum 5 years of hands-on experience with Flexera products (Flexera One / FlexNet Manager Suite).
- Strong understanding of software licensing models (perpetual, subscription, cloud, user/device-based).
- Experience with ITAM/SAM processes and best practices.
- Proficient in software discovery, normalization, and reconciliation.
- Familiarity with integration of Flexera with tools like SCCM, ServiceNow, Tanium, or JAMF.
- Strong analytical, problem-solving, and communication skills.
- Experience with scripting or automation (PowerShell, SQL) is a plus.

Position 3: Senior Micro Focus Specialist
Experience Required: 15+ years | Employment Type: Contract | Work Mode: Hybrid | Location: PAN India
Key Responsibilities:
- Lead end-to-end implementation and optimization of Micro Focus solutions such as ALM/QC, UFT, LoadRunner, Service Manager, Operations Orchestration, and SMAX.
- Analyze enterprise needs and recommend appropriate Micro Focus tools to address system, service management, performance testing, and automation requirements.
- Architect and integrate Micro Focus platforms into enterprise ecosystems, ensuring seamless interoperability with other ITSM, DevOps, and monitoring tools.
- Provide hands-on support, upgrades, patching, and performance tuning of existing Micro Focus environments.
- Create technical documentation, SOPs, and system architecture diagrams.
- Mentor junior team members and provide leadership in troubleshooting complex system issues.
- Collaborate with stakeholders to define KPIs, implement monitoring solutions, and ensure SLAs are met.
- Ensure security and compliance of all Micro Focus solutions with enterprise policies.
- Act as a Subject Matter Expert (SME) for RFPs, audits, solution proposals, and enterprise digital transformation initiatives.
Key Requirements:
- 15+ years of total IT experience with at least 10 years of hands-on experience in Micro Focus tools.
- Expertise in one or more of the following: Micro Focus ALM/QC, UFT, LoadRunner, SMAX, Service Manager, Operations Bridge, or Data Protector.
- Experience with scripting languages such as VBScript, JavaScript, or PowerShell.
- Strong understanding of ITIL processes, service delivery, and ITSM best practices.
- Prior experience implementing Micro Focus in hybrid cloud or enterprise environments.
- Ability to lead teams and manage cross-functional collaboration.

Contract Details:
- Type: Contract (long-term / project-based)
- Location: Open to candidates across India (PAN India)
- Mode: Hybrid (combination of remote and on-site, based on project needs)
Interested candidates should send their resume to sujoy@prohrstrategies.com
Job Type: Contractual | Contract length: 12 months
Posted 1 week ago
5.0 - 10.0 years
9 - 19 Lacs
Kolkata, Hyderabad, Pune
Work from Office
• Minimum 5+ years of working experience as a Databricks Developer
• Minimum 3+ years of working experience with Redshift, Python, PySpark, and AWS
• Candidate should hold a Databricks certification and be willing to join within 30 days
Posted 1 week ago
7.0 - 12.0 years
18 - 25 Lacs
Bengaluru
Work from Office
JOB DESCRIPTION
Role Expectations:
- Design, develop, and maintain robust, scalable, and efficient data pipelines
- Monitor data workflows and systems to ensure reliability and performance
- Identify and troubleshoot issues related to data flow and database performance
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions
- Continuously optimize existing data processes and architectures
Qualifications:
- Programming Languages: Proficient in Python and SQL
- Databases: Strong experience with Amazon Redshift, Aurora, and MySQL
- Data Engineering: Solid understanding of data warehousing concepts, ETL/ELT processes, and building scalable data pipelines
- Strong problem-solving and analytical skills
- Excellent communication and teamwork abilities
Posted 2 weeks ago
7.0 - 12.0 years
18 - 25 Lacs
Noida, Gurugram, Bengaluru
Work from Office
JOB DESCRIPTION
Role Expectations:
- Design, develop, and maintain robust, scalable, and efficient data pipelines
- Monitor data workflows and systems to ensure reliability and performance
- Identify and troubleshoot issues related to data flow and database performance
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions
- Continuously optimize existing data processes and architectures
Qualifications:
- Programming Languages: Proficient in Python and SQL
- Databases: Strong experience with Amazon Redshift, Aurora, and MySQL
- Data Engineering: Solid understanding of data warehousing concepts, ETL/ELT processes, and building scalable data pipelines
- Strong problem-solving and analytical skills
- Excellent communication and teamwork abilities
Posted 2 weeks ago
4.0 - 9.0 years
15 - 25 Lacs
Hyderabad, Chennai
Work from Office
Interested candidates can also apply via sanjeevan.natarajan@careernet.in
Role & responsibilities
- Technical Leadership: Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms.
- End-to-End Solution Ownership: Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks.
- Data Pipeline Strategy: Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets.
- Data Governance & Quality: Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance.
- Orchestration & Automation: Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations (see the Airflow sketch after this list).
- Cloud Cost & Performance Optimization: Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks.
- Security & Compliance: Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks.
- Collaboration & Stakeholder Engagement: Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions.
- Migration Leadership: Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime.
- Mentorship & Growth: Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team.
Preferred candidate profile
- Python, SQL, PySpark, Databricks, AWS (mandatory)
- Leadership experience in data engineering/architecture
- Added advantage: experience in Life Sciences / Pharma
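For the orchestration item referenced above, a minimal Airflow DAG is sketched below, assuming Airflow 2.4+ (which accepts the `schedule` argument); task names and the schedule are illustrative, not the team's actual workflow.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")

def transform():
    print("clean and model the data")

def load():
    print("publish curated tables")

# A daily three-step ETL pipeline; Airflow tracks state and retries per task.
with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    t1 >> t2 >> t3  # linear dependency chain
```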
Posted 2 weeks ago
7.0 - 10.0 years
15 - 20 Lacs
Chennai
Work from Office
Job Title: Data Architect / Engagement Lead
Location: Chennai
Reports To: CEO
About the Company: Ignitho Inc. is a leading AI and data engineering company with a global presence, including US, UK, India, and Costa Rica offices. Visit our website to learn more about our work and culture: www.ignitho.com. Ignitho is a portfolio company of Nuivio Ventures Inc., a venture builder dedicated to developing Enterprise AI product companies across various domains, including AI, Data Engineering, and IoT. Learn more about Nuivio at: www.nuivio.com.
Job Summary: As the Data Architect and Engagement Lead, you will define the data architecture strategy and lead client engagements, ensuring alignment between data solutions and business goals. This dual role blends technical leadership with client-facing responsibilities.
Key Responsibilities:
- Design scalable data architectures, including storage, processing, and integration layers.
- Lead technical discovery and requirements-gathering sessions with clients.
- Provide architectural oversight for data and AI solutions.
- Act as a liaison between technical teams and business stakeholders.
- Define data governance, security, and compliance standards.
Required Qualifications:
- Bachelor's or Master's in Computer Science, Information Systems, or similar.
- 7+ years of experience in data architecture, with client-facing experience.
- Deep knowledge of data modelling, cloud data platforms (Snowflake / BigQuery / Redshift / Azure), and orchestration tools.
- Excellent communication, stakeholder management, and technical leadership skills.
- Familiarity with AI/ML systems and their data requirements is a strong plus.
Posted 2 weeks ago
3.0 - 6.0 years
9 - 18 Lacs
Bengaluru
Work from Office
3+ years of experience as a Data Engineer
Experience in AWS Cloud Services: EC2, S3, IAM
Experience with AWS Glue, DMS, RDBMS, and MPP databases like Snowflake and Redshift
Knowledge of data modelling and ETL processes
This role will be 5 days WFO; please apply only if you are open to working from the office. Only immediate joiners required.
Posted 2 weeks ago
4.0 - 8.0 years
9 - 11 Lacs
Hyderabad
Remote
Role: Data Engineer (ETL Processes, SSIS, AWS)
Duration: Full-time
Location: Remote
Working hours: 4:30 AM to 10:30 AM IST shift timings
Note: We need an ETL engineer for MS SQL Server Integration Services, working the 4:30 AM to 10:30 AM IST shift.
Roles & Responsibilities:
- Design, develop, and maintain ETL processes using SQL Server Integration Services (SSIS).
- Create and optimize complex SQL queries, stored procedures, and data transformation logic on Oracle and SQL Server databases.
- Build scalable and reliable data pipelines using AWS services (e.g., S3, Glue, Lambda, RDS, Redshift).
- Develop and maintain Linux shell scripts to automate data workflows and perform system-level tasks.
- Schedule, monitor, and troubleshoot batch jobs using tools like Control-M, AutoSys, or cron.
- Collaborate with stakeholders to understand data requirements and deliver high-quality integration solutions.
- Ensure data quality, consistency, and security across systems.
- Maintain detailed documentation of ETL processes, job flows, and technical specifications.
- Experience with job scheduling tools such as Control-M and/or AutoSys.
- Exposure to version control tools (e.g., Git) and CI/CD pipelines.
Posted 2 weeks ago
5.0 - 6.0 years
10 - 15 Lacs
Chennai, Bengaluru
Work from Office
AI/ML and AWS-based solutions: Amazon SageMaker; Python and ML libraries; data engineering on AWS; AI/ML algorithms and model deployment strategies; CI/CD and infrastructure as code (CloudFormation, Terraform). AWS Certified Machine Learning certification. Exposure to generative AI, real-time inference, and edge deployment.
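As a small sketch of the model-deployment side of this role: invoking a deployed SageMaker endpoint for real-time inference via boto3. The endpoint name and payload schema are hypothetical.

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# Call a (hypothetical) deployed endpoint with a JSON payload.
response = runtime.invoke_endpoint(
    EndpointName="churn-model-prod",
    ContentType="application/json",
    Body=json.dumps({"features": [0.3, 1.7, 42.0]}),
)

# The response body is a stream; decode the model's prediction.
prediction = json.loads(response["Body"].read())
print(prediction)
```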
Posted 2 weeks ago
9.0 - 14.0 years
25 - 40 Lacs
Bengaluru
Hybrid
Greetings from tsworks Technologies India Pvt. We are hiring a Sr. Data Engineer - Snowflake with AWS. If you are interested, please share your CV with mohan.kumar@tsworks.io
Position: Senior Data Engineer
Experience: 9+ years
Location: Bengaluru, India (Hybrid)
Mandatory Required Qualification
- Strong proficiency in AWS data services such as S3 buckets, Glue and Glue Catalog, EMR, Athena, Redshift, DynamoDB, QuickSight, etc.
- Strong hands-on experience building data lakehouse solutions on Snowflake, using features such as streams, tasks, dynamic tables, data masking, and data exchange (see the sketch after this posting).
- Hands-on experience using scheduling tools such as Apache Airflow, dbt, and AWS Step Functions, and data governance products such as Collibra.
- Expertise in DevOps and CI/CD implementation.
- Excellent communication skills.
In This Role, You Will
- Design, implement, and manage scalable and efficient data architecture on the AWS cloud platform.
- Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL) processes.
- Perform complex data transformations and processing using PySpark (AWS Glue, EMR or Databricks), Snowflake's data processing capabilities, or other relevant tools.
- Work hands-on with data lake solutions such as Apache Hudi, Delta Lake or Iceberg.
- Develop and maintain data models within Snowflake and related tools to support reporting, analytics, and business intelligence needs.
- Collaborate with cross-functional teams to understand data requirements and design appropriate data integration solutions.
- Integrate data from various sources, both internal and external, ensuring data quality and consistency.
Skills & Knowledge
- Bachelor's degree in computer science, engineering, or a related field.
- 9+ years of experience in information technology, designing, developing and executing solutions.
- 4+ years of hands-on experience designing and executing data solutions on AWS and Snowflake cloud platforms as a Data Engineer.
- Strong proficiency in AWS services such as Glue, EMR, Athena, and Databricks, with file formats such as Parquet and Avro.
- Hands-on experience in data modelling and batch and real-time pipelines, using Python, Java or JavaScript, and experience working with RESTful APIs.
- Hands-on experience handling real-time data streams from Kafka or Kinesis.
- Expertise in DevOps and CI/CD implementation.
- Hands-on experience with SQL and NoSQL databases.
- Hands-on experience in data modelling, implementation, and management of OLTP and OLAP systems.
- Knowledge of data quality, governance, and security best practices.
- Familiarity with machine learning concepts and integration of ML pipelines into data workflows.
- Hands-on experience working in an Agile setting.
- Is self-driven, naturally curious, and able to adapt to a fast-paced work environment.
- Can articulate, create, and maintain technical and non-technical documentation.
- AWS and Snowflake certifications are preferred.
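A hedged sketch of the Snowflake streams-and-tasks pattern named above, using the `snowflake-connector-python` driver; the account, warehouse, and object names are placeholders.

```python
import snowflake.connector

# Connection parameters are placeholders for a real account.
conn = snowflake.connector.connect(
    account="xy12345", user="etl_user", password="...",
    warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()

# A stream captures change rows (CDC) on the source table...
cur.execute("CREATE OR REPLACE STREAM orders_stream ON TABLE raw_orders")

# ...and a task periodically merges those changes into a curated table.
cur.execute("""
    CREATE OR REPLACE TASK merge_orders
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
    AS
      INSERT INTO curated_orders
      SELECT order_id, customer_id, amount
      FROM orders_stream
      WHERE METADATA$ACTION = 'INSERT'
""")

# Tasks are created suspended; resume to start the schedule.
cur.execute("ALTER TASK merge_orders RESUME")
```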
Posted 3 weeks ago
5.0 - 10.0 years
20 - 27 Lacs
Hyderabad
Work from Office
Position: Experienced Data Engineer
We are seeking a skilled and experienced Data Engineer to join our fast-paced and innovative Data Science team. This role involves building and maintaining data pipelines across multiple cloud-based data platforms.
Requirements: A minimum of 5 years of total experience, with at least 3-4 years specifically in data engineering on a cloud platform.
Key Skills & Experience:
- Proficiency with AWS services such as Glue, Redshift, S3, Lambda, RDS, Amazon Aurora, DynamoDB, EMR, Athena, Data Pipeline, and Batch jobs.
- Strong expertise in: SQL and Python; dbt and Snowflake; OpenSearch, Apache NiFi, and Apache Kafka (see the Kafka sketch after this list).
- In-depth knowledge of ETL data patterns and Spark-based ETL pipelines.
- Advanced skills in infrastructure provisioning using Terraform and other Infrastructure-as-Code (IaC) tools.
- Hands-on experience with cloud-native delivery models, including PaaS, IaaS, and SaaS.
- Proficiency in Kubernetes, container orchestration, and CI/CD pipelines.
- Familiarity with GitHub Actions, GitLab, and other leading DevOps and CI/CD solutions.
- Experience with orchestration tools such as Apache Airflow and serverless/FaaS services.
- Exposure to NoSQL databases is a plus.
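For the Kafka piece of the stack above, a minimal consumer sketch with the `kafka-python` client library is shown below; the topic, brokers, and consumer group are illustrative assumptions.

```python
import json
from kafka import KafkaConsumer

# Subscribe to a (hypothetical) topic of order events; the consumer group
# lets Kafka balance partitions across multiple pipeline workers.
consumer = KafkaConsumer(
    "orders.events",
    bootstrap_servers=["broker1:9092", "broker2:9092"],
    group_id="etl-orders",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Downstream, events would be validated and written to S3/Redshift;
    # here we just show the consumption loop.
    print(f"partition={message.partition} offset={message.offset} event={event}")
```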
Posted 3 weeks ago
7.0 - 12.0 years
10 - 20 Lacs
Hyderabad
Remote
Job Title: Senior Data Engineer
Location: Remote
Job Type: Full-time
Experience Level: 7+ years
About the Role: We are seeking a highly skilled Senior Data Engineer to join our team in building a modern data platform on AWS. You will play a key role in transitioning from legacy systems to a scalable, cloud-native architecture using technologies like Apache Iceberg, AWS Glue, Redshift, and Atlan for governance. This role requires hands-on experience across both legacy (e.g., Siebel, Talend, Informatica) and modern data stacks.
Responsibilities:
- Design, develop, and optimize data pipelines and ETL/ELT workflows on AWS.
- Migrate legacy data solutions (Siebel, Talend, Informatica) to modern AWS-native services.
- Implement and manage a data lake architecture using Apache Iceberg and AWS Glue (see the sketch after this posting).
- Work with Redshift for data warehousing solutions, including performance tuning and modelling.
- Apply data quality and observability practices using Soda or similar tools.
- Ensure data governance and metadata management using Atlan (or other tools like Collibra or Alation).
- Collaborate with data architects, analysts, and business stakeholders to deliver robust data solutions.
- Build scalable, secure, and high-performing data platforms supporting both batch and real-time use cases.
- Participate in defining and enforcing data engineering best practices.
Required Qualifications:
- 7+ years of experience in data engineering and data pipeline development.
- Strong expertise with AWS services, especially Redshift, Glue, S3, and Athena.
- Proven experience with Apache Iceberg or similar open table formats (like Delta Lake or Hudi).
- Experience with legacy tools like Siebel, Talend, and Informatica.
- Knowledge of data governance tools like Atlan, Collibra, or Alation.
- Experience implementing data quality checks using Soda or equivalent.
- Strong SQL and Python skills; familiarity with Spark is a plus.
- Solid understanding of data modeling, data warehousing, and big data architectures.
- Strong problem-solving skills and the ability to work in an Agile environment.
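One hedged sketch of the Iceberg-on-Glue lake architecture mentioned above: configuring a Spark session with an Iceberg catalog backed by the AWS Glue Data Catalog. It assumes the Iceberg Spark runtime and AWS bundle jars are on the classpath; the catalog name, warehouse path, and table are placeholders, and exact configuration keys can vary by Iceberg version.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import to_date

# Register an Iceberg catalog ("glue") backed by the AWS Glue Data Catalog,
# with table data stored under an S3 warehouse path (placeholders).
spark = (
    SparkSession.builder.appName("iceberg-demo")
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse", "s3://example-lake/warehouse/")
    .getOrCreate()
)

# Create an Iceberg table; Iceberg handles snapshots, schema evolution,
# and hidden partitioning (here, daily partitions on order_date).
spark.sql("""
    CREATE TABLE IF NOT EXISTS glue.curated.orders (
        order_id BIGINT, order_date DATE, amount DOUBLE
    ) USING iceberg
    PARTITIONED BY (days(order_date))
""")

# Append a sample row, casting the date string to match the DATE column.
df = (
    spark.createDataFrame([(1, "2024-01-01", 99.5)],
                          ["order_id", "order_date", "amount"])
    .withColumn("order_date", to_date("order_date"))
)
df.writeTo("glue.curated.orders").append()
```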
Posted 3 weeks ago
3.0 - 5.0 years
12 - 14 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & Responsibilities:
- Design, develop, and maintain data pipelines and ETL workflows on the AWS platform
- Work with AWS services like S3, Glue, Lambda, Redshift, EMR, and Athena for data ingestion, transformation, and analytics
- Collaborate with data scientists, analysts, and business teams to understand data requirements
- Optimize data workflows for performance, scalability, and reliability
- Troubleshoot data issues, monitor jobs, and ensure data quality and integrity
- Write efficient SQL queries and automate data processing tasks
- Implement data security and compliance best practices
- Maintain technical documentation and data pipeline monitoring dashboards
Required Skills:
- 3 to 5 years of hands-on experience as a Data Engineer on AWS Cloud
- Strong expertise with AWS data services: S3, Glue, Redshift, Athena, EMR, Lambda
- Proficient in SQL, Python, or Scala for data processing and scripting
- Experience with ETL tools and frameworks on AWS
- Understanding of data warehousing concepts and architecture
- Familiarity with CI/CD for data pipelines is a plus
- Strong problem-solving and communication skills
- Ability to work in an Agile environment and handle multiple priorities
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
Nagpur
Work from Office
Role & responsibilities
Job Role: AWS Data Engineer (L3)
Experience: 7+ years
Location: Nagpur
- 5+ years of microservices development experience in two of these: Python, Java, Scala
- 5+ years of experience building data pipelines, CI/CD pipelines, and fit-for-purpose data stores
- 5+ years of experience with big data technologies: Apache Spark, Hadoop, or Kafka
- 3+ years of experience with relational & non-relational databases: Postgres, MySQL, NoSQL (DynamoDB or MongoDB)
- 3+ years of experience working with data consumption patterns
- 3+ years of experience working with automated build and continuous integration systems
- 2+ years of experience with search and analytics platforms: OpenSearch or Elasticsearch
- 2+ years of experience with cloud technologies: AWS (Terraform, S3, EMR, EKS, EC2, Glue, Athena)
- Exposure to data warehousing products: Snowflake or Redshift
- Exposure to relational data modelling, dimensional data modelling, and NoSQL data modelling concepts
Posted 3 weeks ago
12.0 - 22.0 years
25 - 40 Lacs
Bangalore Rural, Bengaluru
Work from Office
Role & responsibilities
Requirements:
- Data Modeling (Conceptual, Logical, Physical) - minimum 5 years
- Database Technologies (SQL Server, Oracle, PostgreSQL, NoSQL) - minimum 5 years
- Cloud Platforms (AWS, Azure, GCP) - minimum 3 years
- ETL Tools (Informatica, Talend, Apache NiFi) - minimum 3 years
- Big Data Technologies (Hadoop, Spark, Kafka) - minimum 5 years
- Data Governance & Compliance (GDPR, HIPAA) - minimum 3 years
- Master Data Management (MDM) - minimum 3 years
- Data Warehousing (Snowflake, Redshift, BigQuery) - minimum 3 years
- API Integration & Data Pipelines - good to have
- Performance Tuning & Optimization - minimum 3 years
- Business Intelligence (Power BI, Tableau) - minimum 3 years
Job Description: We are seeking experienced Data Architects to design and implement enterprise data solutions, ensuring data governance, quality, and advanced analytics capabilities. The ideal candidate will have expertise in defining data policies, managing metadata, and leading data migrations from legacy systems to Microsoft Fabric/Databricks/… . Experience and deep knowledge of at least one of these three platforms is critical. Additionally, they will play a key role in identifying use cases for advanced analytics and developing machine learning models to drive business insights.
Key Responsibilities:
1. Data Governance & Management
- Establish and maintain a data usage hierarchy to ensure structured data access.
- Define data policies, standards, and governance frameworks to ensure consistency and compliance.
- Implement data quality management practices to improve accuracy, completeness, and reliability.
- Oversee metadata and Master Data Management (MDM) to enable seamless data integration across platforms.
2. Data Architecture & Migration
- Lead the migration of data systems from legacy infrastructure to Microsoft Fabric.
- Design scalable, high-performance data architectures that support business intelligence and analytics.
- Collaborate with IT and engineering teams to ensure efficient data pipeline development.
3. Advanced Analytics & Machine Learning
- Identify and define use cases for advanced analytics that align with business objectives.
- Design and develop machine learning models to drive data-driven decision-making.
- Work with data scientists to operationalize ML models and ensure real-world applicability.
Required Qualifications:
- Proven experience as a Data Architect or similar role in data management and analytics.
- Strong knowledge of data governance frameworks, data quality management, and metadata management.
- Hands-on experience with Microsoft Fabric and data migration from legacy systems.
- Expertise in advanced analytics, machine learning models, and AI-driven insights.
- Familiarity with data modelling, ETL processes, and cloud-based data solutions (Azure, AWS, or GCP).
- Strong communication skills with the ability to translate complex data concepts into business insights.
Preferred candidate profile: Immediate joiner
Posted 3 weeks ago
5 - 10 years
15 - 20 Lacs
Hyderabad
Work from Office
DBA Role:
- Expertise in writing and optimizing queries for performance, including but not limited to Redshift/Postgres/SQL/BigQuery, and using query plans.
- Expertise in database permissions, including but not limited to Redshift/BigQuery/Postgres/SQL/Windows AD.
- Knowledge of database design; ability to work with data architects and other IT specialists to set up, maintain, and monitor data networks/storage/metrics.
- Expertise in backup and recovery, including AWS Redshift snapshot restores (see the sketch after this list).
- Redshift (provisioned and serverless) configuration and creation.
- Redshift Workload Management and Redshift table statistics.
- Experience working with third-party vendors; able to coordinate with third parties and internal stakeholders to troubleshoot issues.
- Experience working with internal stakeholders and business partners on both long- and short-term projects related to efficiency, optimization, and cost reduction.
- Expertise in database management best practices and IT security best practices.
Experience with the following is a plus:
- Harness
- Git
- CloudWatch
- Cloudability
- Other monitoring dashboard configurations
- Experience with a variety of computer information systems
- Excellent attention to detail
- Problem-solving and critical thinking
- Ability to explain complex ideas in simple terms
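For the snapshot-restore expertise called out above, a minimal boto3 sketch follows; the cluster and snapshot identifiers and node configuration are hypothetical placeholders.

```python
import boto3

redshift = boto3.client("redshift")

# Restore a new cluster from an existing snapshot (identifiers are
# placeholders); a restore creates a *new* cluster rather than
# overwriting the original.
response = redshift.restore_from_cluster_snapshot(
    ClusterIdentifier="analytics-restored",
    SnapshotIdentifier="analytics-cluster-2024-01-01",
    NodeType="ra3.xlplus",
    NumberOfNodes=2,
)
print(response["Cluster"]["ClusterStatus"])  # e.g. "creating"

# Waiters poll until the restored cluster is available for connections.
waiter = redshift.get_waiter("cluster_available")
waiter.wait(ClusterIdentifier="analytics-restored")
```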
Posted 4 weeks ago