5.0 - 8.0 years
9 - 19 Lacs
Kolkata
Hybrid
Key Skills: Azure, Data Engineer

Roles and Responsibilities:
- Design and implement scalable data pipelines and ETL/ELT processes using Azure Data Factory
- Develop and optimize relational and non-relational databases for performance, scalability, and reliability
- Build and manage data lakes and data warehouses using Azure Synapse Analytics, Data Lake Storage, and SQL-based solutions
- Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality data solutions
- Ensure data governance, security, and compliance with industry standards
- Lead architectural discussions and mentor junior engineers
- Apply strong proficiency in SQL and database design, including Azure SQL, PostgreSQL, Oracle, MongoDB, and Cosmos DB
- Apply a deep understanding of data modeling, indexing, and performance tuning
- Use hands-on experience with Azure services such as Azure Data Factory, Synapse Analytics, Data Lake Storage, Azure SQL, Azure Functions, Azure Databricks, and Power BI
- Use Python and PySpark for data processing (an illustrative sketch follows below)
- Manage large database migrations to Azure using database migration tools and Azure Data Studio
- Use Airflow for workflow management
- Work with big data tools including Spark, Kafka, and Delta Lake
- Work with CI/CD pipelines and infrastructure-as-code using Terraform and ARM templates
- Demonstrate excellent problem-solving and communication skills

Skills Required:
- Hands-on expertise in Azure data engineering services
- Strong SQL and database development background
- Proficiency in Python and PySpark
- Experience with big data and streaming tools (Kafka, Spark)
- Knowledge of data governance and compliance frameworks
- Ability to manage migrations and optimize data architecture
- Strong communication, mentoring, and problem-solving abilities

Education: Bachelor's degree in Computer Science, Information Technology, or a related field
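For illustration only, a minimal PySpark sketch of the kind of batch transformation this posting describes; the storage account, paths, and column names are hypothetical, not taken from the posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Hypothetical ADLS Gen2 paths; substitute your own storage account/containers.
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
curated_path = "abfss://curated@examplelake.dfs.core.windows.net/orders_daily/"

orders = spark.read.parquet(raw_path)

# Basic cleansing and a daily revenue aggregate, partitioned for downstream queries.
daily = (
    orders
    .dropDuplicates(["order_id"])
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"), F.count("order_id").alias("orders"))
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(curated_path)
```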
Posted 3 weeks ago
5.0 - 8.0 years
8 - 18 Lacs
Kolkata
Hybrid
Key Skills: AWS, Data Engineer

Roles and Responsibilities:
- Architect and develop data pipelines using AWS services.
- Design and optimize relational and non-relational databases.
- Implement ETL/ELT processes using AWS Glue, Lambda, and other tools.
- Manage data lakes and warehouses (e.g., S3, Redshift, Athena).
- Collaborate with data scientists, analysts, and business stakeholders to deliver data solutions.
- Ensure data quality, security, and compliance with governance standards.
- Mentor junior engineers and contribute to architectural decisions.

Requirements:
- Hands-on experience with AWS services: AWS Glue, Redshift, S3, Lambda, EMR, Athena, Lake Formation.
- Proficiency in Python and PySpark for data processing.
- Experience with large database migrations to AWS using database migration tools (DMS, SCT, etc.).
- Familiarity with CI/CD pipelines and infrastructure-as-code (Terraform, CloudFormation).
- Excellent problem-solving and communication skills.
- Experience with Airflow and Step Functions is a must (an illustrative Airflow sketch follows below).

Skills Required:
- Expertise in AWS data services (Glue, Redshift, S3, Lambda, EMR, Athena)
- Strong SQL and database design knowledge (PostgreSQL, MySQL, Oracle)
- Proficient in Python and PySpark for ETL/ELT
- Experience with Airflow and AWS Step Functions
- Familiarity with AWS DMS and SCT for data migration
- Experience with CI/CD pipelines and IaC tools (Terraform, CloudFormation)
- Strong communication and stakeholder collaboration skills

Education: Bachelor's degree in a related field
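A minimal Airflow DAG sketch of the orchestration work this posting calls out, assuming Airflow 2.4+; the DAG id and task callables are hypothetical stand-ins for real Glue/DMS triggers:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical callables standing in for real extract/load steps.
def extract_from_source(**context):
    print("pull the daily increment from the source system")

def load_to_redshift(**context):
    print("COPY staged files from S3 into Redshift")

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # 'schedule' requires Airflow 2.4+
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_source)
    load = PythonOperator(task_id="load", python_callable=load_to_redshift)
    extract >> load
```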
Posted 3 weeks ago
5.0 - 10.0 years
9 - 18 Lacs
Kolkata
Hybrid
Key Skills: PostgreSQL, PostgreSQL database, Data Engineer, Data Engineering

Roles and Responsibilities:
- Design, install, configure, and maintain PostgreSQL database systems.
- Perform database tuning, optimization, and capacity planning (an illustrative index-usage check follows below).
- Implement backup and recovery strategies, including disaster recovery planning.
- Monitor database performance and proactively address issues.
- Ensure database security, compliance, and access control.
- Collaborate with development and infrastructure teams to support application requirements.
- Automate routine tasks and improve operational efficiency.
- Lead database migration and upgrade projects.

Skills Required:
- Strong expertise in PostgreSQL database administration and engineering
- Advanced knowledge of performance tuning, indexing, and optimization techniques
- Experience with backup, restore, and disaster recovery strategies
- Proficiency in scripting and automation for DB operations
- Understanding of database security, access control, and compliance
- Hands-on experience with monitoring tools and high-availability setups
- Strong collaboration and troubleshooting skills with cross-functional teams

Education: Bachelor's degree in a related field
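As a small sketch of the tuning work this posting describes, a Python script that lists never-scanned indexes from PostgreSQL's statistics views; the connection details are placeholders:

```python
import psycopg2  # assumes the psycopg2 driver is installed

# Connection details are placeholders.
conn = psycopg2.connect(host="localhost", dbname="appdb", user="dba", password="secret")

# Indexes that have never been scanned are candidates for review before dropping.
query = """
    SELECT schemaname, relname, indexrelname,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
    FROM pg_stat_user_indexes
    WHERE idx_scan = 0
    ORDER BY pg_relation_size(indexrelid) DESC;
"""

with conn, conn.cursor() as cur:
    cur.execute(query)
    for schema, table, index, size in cur.fetchall():
        print(f"{schema}.{table}: unused index {index} ({size})")
conn.close()
```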
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
Punjab
On-site
Americana Restaurants International PLC is a pioneering force in the out-of-home dining industry of the MENA region and Kazakhstan, and ranks among the world's leading operators of quick service restaurants (QSR) and casual dining establishments. With an extensive portfolio of iconic global brands and a dominant regional presence, we have consistently driven growth and innovation for over 60 years. Our network of 2,600+ restaurants spans 12 countries throughout the Middle East, North Africa, and Kazakhstan, from Kazakhstan in the east to Morocco in the west, powered by a team of 40,000+ talented individuals committed to delivering exceptional food, superior service, and memorable experiences.

In line with our vision for innovation and operational excellence, we have established our Center of Excellence in Mohali, India. This facility plays a pivotal role in product development, IT, Digital, AI, and Analytics, as well as in implementing global IT best practices. Serving as a strategic hub, it is integral to strengthening our technology backbone and driving digital transformation across our worldwide operations.

Your Impact:
We are looking for a skilled Data Engineer to join our growing team. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines and infrastructure to support the extraction, transformation, and loading of data into our data warehouse and other data repositories. You will collaborate closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that enable data-driven decision-making.

Responsibilities:

Data Pipeline Development:
- Design, build, and maintain scalable and robust data pipelines for ingesting, transforming, and storing large volumes of data from various sources.
- Implement ETL (Extract, Transform, Load) processes to ensure data quality and reliability.
- Optimize data pipelines for performance, reliability, and cost-effectiveness.

Data Modeling and Architecture:
- Design and implement data models and schemas to support business requirements.
- Work closely with data architects to ensure alignment with overall data architecture and standards.
- Implement and maintain data warehouse solutions and data lakes.

Data Integration and API Development:
- Integrate data from multiple sources and third-party APIs.
- Develop and maintain RESTful APIs for data access and integration.

Data Quality and Governance:
- Implement data quality checks and monitoring processes to ensure data accuracy, completeness, and consistency.
- Define and enforce data governance policies and best practices.

Performance Tuning and Optimization:
- Monitor and optimize data storage and query performance.
- Identify and resolve performance bottlenecks in data pipelines and databases.

Collaboration and Documentation:
- Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and deliver solutions.
- Document data pipelines, data flows, and data models.
- Provide technical guidance and support to junior team members.

What You Bring:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- Proven experience as a Data Engineer or in a similar role, with a strong understanding of data management and integration techniques.
- Hands-on experience with big data technologies and frameworks such as Hadoop, Spark, Confluent, Kafka, data lakes, PostgreSQL, Data Factory, etc.
- Proficiency in programming languages such as Python, Scala, or Java for data manipulation and transformation.
- Experience with cloud platforms and services (e.g., Confluent, Azure, Google Cloud).
- Solid understanding of relational databases, SQL, and NoSQL databases.
- Familiarity with data warehousing concepts and technologies (e.g., Redshift, Snowflake, BigQuery).
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills.
- Ability to work effectively in a fast-paced environment and manage multiple priorities.

Preferred Qualifications:
- Master's degree in Data Science, Computer Science, or a related field.
- 5+ years of experience in software development, with a focus on full stack web development using Java technologies.
- Experience with containerization and orchestration tools such as Docker and Kubernetes.
- Knowledge of machine learning and data analytics techniques.
- Experience with data streaming technologies (e.g., Apache Kafka, Kinesis); a minimal consumer sketch follows below.
- Familiarity with DevOps practices and tools.
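A minimal sketch of consuming a Kafka topic from Python with the confluent-kafka client; the broker address, topic, and group id are placeholders:

```python
from confluent_kafka import Consumer  # assumes the confluent-kafka package

# Broker, group id, and topic are placeholders.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "orders-ingest",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # In a real pipeline this record would be validated and landed in a data lake.
        print(f"offset {msg.offset()}: {msg.value().decode('utf-8')}")
except KeyboardInterrupt:
    pass
finally:
    consumer.close()
```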
Posted 3 weeks ago
7.0 - 11.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Senior Data Engineer at Papigen, you will play a crucial role in leading the data and dashboard setup for a centralized sovereign and country risk DataMart. Your responsibilities will include designing and implementing modern cloud-based data solutions, collaborating with global teams, and ensuring the delivery of high-impact analytical and reporting capabilities.

You will lead the data analysis, data repository design, and data architecture for the centralized data platform. Collaborating closely with business teams, you will gather requirements, document data flows, and identify value-driven data solutions. Your expertise in designing ETL workflows and building scalable, high-performance data solutions will be essential for optimizing data repositories for analytics and reporting using Azure Data Lake and related Azure services.

In this role, you will leverage your hands-on experience with SQL, Oracle, and Azure Data Lake to enhance insights using Large Language Models (LLMs), modern AI capabilities, and data visualization/reporting tools. Maintaining metadata management and data lineage, you will bridge technical and business teams to ensure the delivery of high-quality, maintainable solutions.

To excel in this position, you should have 7+ years of experience in data engineering with expertise in data modeling and ETL design. Your proficiency in Python and ETL tools, and familiarity with big data technologies like Spark and Kafka, will be crucial. A background in Capital Markets, Credit Risk Management, or Financial Risk Management is highly preferred, along with strong collaboration skills to work effectively in multi-location, multi-vendor teams.

By staying updated on data engineering best practices, cloud technologies, and emerging AI trends, you will contribute to continuous improvement and innovation within the organization. If you are passionate about leveraging your data engineering skills to drive impactful solutions and foster business growth, we welcome you to join our dynamic team at Papigen.
Posted 3 weeks ago
5.0 - 10.0 years
12 - 17 Lacs
Hyderabad, Bengaluru
Work from Office
Skills: ADF, Azure Data Lake, Databricks, PySpark, SQL, Python. Need immediate joiners.
Contact: Tanisha - 9899025091, tanisha.batra@wsneconsulting.com; Vrinda - 9625997927, Vrinda.sahni@wsneconsulting.com; Ishita - 92891 17976, ishita.sarkar@wsneconsulting.com
Posted 3 weeks ago
3.0 - 6.0 years
14 Lacs
Hyderabad
Work from Office
Candidate Specification:
- Notice Period: Immediate joiner, 5 days
- 3+ years of experience in Data Engineering
- Coding experience in Python
- Experience in Databricks and PySpark
- Good communication and coding knowledge

Contact Person: Deepikad, Email: deepikad@gojobs.biz
Posted 3 weeks ago
6.0 - 8.0 years
5 - 9 Lacs
Pune, Chennai, Bengaluru
Hybrid
Role & responsibilities:
- SQL Expertise: Strong hands-on experience with DDL, DML, query optimization, and performance tuning.
- Programming Languages: Proficiency in Python or Java for data processing and automation.
- Data Modelling: Good understanding of entity-relationship modelling, star schema, and the Kimball methodology (an illustrative star-schema sketch follows below).
- Cloud Data Engineering: Hands-on experience with Azure Synapse, Azure Data Factory, Azure Data Lake, Databricks, and Snowflake.
- ETL Development: Experience building scalable ETL/ELT pipelines and data ingestion workflows; ability to learn and apply Snowflake concepts as needed.
- Communication Skills: Strong presentation and communication skills to engage both technical and business stakeholders in strategic discussions.
- Financial Services Domain (optional): Knowledge of financial services.
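As a rough illustration of Kimball-style star-schema modelling, a hypothetical fact/dimension pair expressed as DDL strings and executed over a generic DB-API connection; every table and column name here is invented:

```python
# Hypothetical star-schema DDL in the Kimball style: one fact table keyed
# to a conformed dimension. Works over any DB-API connection (e.g., pyodbc).
DIM_CUSTOMER = """
CREATE TABLE dim_customer (
    customer_key  INT PRIMARY KEY,
    customer_id   VARCHAR(32),
    customer_name VARCHAR(200),
    region        VARCHAR(50)
);
"""

FACT_SALES = """
CREATE TABLE fact_sales (
    sales_key    BIGINT PRIMARY KEY,
    date_key     INT NOT NULL,
    customer_key INT NOT NULL REFERENCES dim_customer (customer_key),
    quantity     INT,
    net_amount   DECIMAL(18, 2)
);
"""

def create_star_schema(conn):
    """Create the illustrative dimension and fact tables."""
    cur = conn.cursor()
    cur.execute(DIM_CUSTOMER)
    cur.execute(FACT_SALES)
    conn.commit()
```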
Posted 3 weeks ago
4.0 - 7.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Key Responsibilities:
- Design, build, and manage scalable, reliable, and secure ETL/ELT data pipelines using tools such as Apache Spark, Apache Flink, Airflow, and Databricks.
- Develop and maintain data architecture, ensuring efficient data modeling, warehousing, and data flow across systems.
- Collaborate with data scientists, analysts, and business teams to understand data requirements and implement robust solutions.
- Work with cloud platforms (AWS, Azure, or GCP) to build and optimize data lake and data warehouse environments (e.g., Redshift, Snowflake, BigQuery).
- Implement CI/CD pipelines for data infrastructure using tools such as Jenkins, Git, Terraform, and related DevOps tools.
- Apply data quality and governance best practices to ensure accuracy, completeness, and consistency of data.
- Monitor data pipelines, diagnose issues, and ensure data availability and performance.

Requirements:
- 5-8 years of proven experience in data engineering or related roles.
- Strong programming skills in Python (including PySpark) and SQL.
- Experience with big data technologies such as Apache Spark, Apache Flink, Hadoop, Hive, and HBase.
- Proficient in building data pipelines using orchestration tools like Apache Airflow.
- Hands-on experience with at least one major cloud platform (AWS, Azure, GCP), including services like S3, ADLS, Redshift, Snowflake, or BigQuery.
- Experience with data modeling, data warehousing, and real-time/batch data processing.
- Familiarity with CI/CD practices, Git, and Terraform or similar infrastructure-as-code tools.
- Ability to design for scalability, maintainability, and high availability.

Preferred Qualifications:
- Bachelor's degree in Information Technology, Computer Information Systems, Computer Engineering, or Computer Science.
- Certifications in cloud platforms (e.g., AWS Certified Data Analytics, Google Professional Data Engineer).
- Experience with containerization tools such as Docker and orchestration with Kubernetes.
- Workflow automation experience.
- Familiarity with machine learning pipelines and serving infrastructure.
- Experience in implementing data governance, data lineage, and metadata management practices.
- Exposure to modern data stack tools like dbt, Kafka, or Fivetran.
Posted 3 weeks ago
5.0 - 7.0 years
0 - 0 Lacs
Mumbai, Navi Mumbai, Mumbai (All Areas)
Hybrid
Dear Candidates,

We're Hiring: ETL Developer/Data Engineer
Experience: 5.1 to 7 years
Location: Mumbai (Hybrid)
Notice Period: Immediate to 20 days
Interview Process: 2-3 rounds (2nd round will be face to face, mandatory)

Job description:
- Develop and optimize ETL processes using Python, Oracle SQL, and any other ETL tools
- Should have SSIS and excellent working knowledge of SQL/PL-SQL on Oracle and MS SQL databases
- Good to have conceptual knowledge of Machine Learning / Artificial Intelligence
- Knowledge and understanding of Agile development practices

Behavioral:
- Structured, organized, and a good communicator
- Willing to share knowledge and skills with other developers within the team
- Delivery-focused with a good eye for detail
- Exhibits positive interpersonal and team skills
- Must be able to work closely with business analysts located in Paris
- Must be able to work independently and be a true team player

Follow my LinkedIn page for job openings: www.linkedin.com/in/shaki-hameed-b0181030a
Posted 3 weeks ago
2.0 - 4.0 years
3 - 8 Lacs
Bengaluru
Work from Office
About the Role: We are looking for hands-on Data Engineers to build and maintain scalable data solutions and services.

Requirements:
- 2+ years of experience in big data technologies like PySpark, Hadoop, Trino, Druid
- Strong experience in query optimization in Trino/PySpark (an illustrative Trino sketch follows below)
- Strong hands-on experience with Airflow / schedulers
- Expertise in Python

Role & responsibilities:
- Maintain and develop data engineering pipelines to ensure seamless data flow for BI applications
- Create data models to ensure a seamless query system
- Develop or onboard open-source tools to keep the data platform up to date
- Optimize queries and scripts over large-scale datasets (TBs) with a focus on performance and cost-efficiency
- Implement data governance and security best practices in Kubernetes environments
- Collaborate across teams to translate business requirements into robust technical solutions
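A minimal sketch of querying Trino from Python with the trino client package; the host, catalog, schema, and table are placeholders:

```python
import trino  # assumes the 'trino' client package is installed

# Connection details are placeholders.
conn = trino.dbapi.connect(
    host="trino.example.internal",
    port=8080,
    user="etl",
    catalog="hive",
    schema="analytics",
)

cur = conn.cursor()
# Filtering on the partition column lets Trino prune partitions instead of
# scanning the whole table, a common first step in query optimization.
cur.execute("""
    SELECT region, count(*) AS events
    FROM events
    WHERE event_date = DATE '2024-01-01'
    GROUP BY region
""")
for region, events in cur.fetchall():
    print(region, events)
```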
Posted 3 weeks ago
3.0 - 8.0 years
8 - 18 Lacs
Gurugram, Bengaluru
Hybrid
Role & Responsibilities:
- Excellent problem-solving skills with the ability to analyze business problems systematically and deliver effective, right-sized solutions in a timely manner.
- Proven analysis and design/development skills on AWS platforms.
- Develop, test, and implement changes following Software Development Life Cycle methodologies and quality concepts.

Technical Skills:
- Strong knowledge of AWS Glue/Python/PySpark.
- Understanding of AWS data/automation services: Step Functions, Lambda, SNS, SQS.
- Hands-on experience in ETL projects.
- Good knowledge of data warehousing concepts.
- Good analytical and problem-solving skills.
- Experience with Git code versioning tools.
- Strong knowledge of relational databases (Oracle/PostgreSQL).
- Good understanding of the Software Development Life Cycle.

Nice to have:
- Working knowledge of job scheduling tools like Autosys.
- Experience working in an Agile environment.
- Certification in AWS.
- Experience in PowerShell/batch scripts.
- Good organizational skills with the ability to handle several tasks efficiently.

Education & Experience:
- Bachelor's or Master's degree in Computer Science, IT, or a related technical field.
- Minimum 3 to 8 years of overall IT experience.
- Minimum 3+ years of experience with AWS Glue/Python/PySpark.
- Strong knowledge of AWS Glue/PySpark and relational databases (Oracle/PostgreSQL, SQL Server).
- Good understanding of AWS data/automation services: Step Functions, Lambda, SNS, SQS.
- Demonstrated excellent analytical and logical thinking.
Posted 3 weeks ago
9.0 - 14.0 years
22 - 37 Lacs
Pune, Chennai, Bengaluru
Hybrid
Role & responsibilities:
We are looking for a Senior Data Engineer with strong expertise in SQL, Python, Azure Synapse, Azure Data Factory, Snowflake, and Databricks. The ideal candidate should have a solid understanding of SQL (DDL, DML, query optimization) and ETL pipelines while demonstrating a learning mindset to adapt to evolving technologies.

Key Responsibilities:
- Collaborate with business and IT stakeholders to define business and functional requirements for data solutions.
- Design and implement scalable ETL/ELT pipelines using Azure Data Factory, Databricks, and Snowflake.
- Develop detailed technical designs, data flow diagrams, and future-state data architecture.
- Evangelize modern data modelling practices, including entity-relationship models, star schema, and the Kimball methodology.
- Ensure data governance, quality, and validation by working closely with quality engineering teams.
- Write, optimize, and troubleshoot complex SQL queries, including DDL, DML, and performance tuning.
- Work with Azure Synapse, Azure Data Lake, and Snowflake for large-scale data processing.
- Implement DevOps and CI/CD best practices for automated data pipeline deployments.
- Support real-time streaming data processing with Spark, Kafka, or similar technologies.
- Provide technical mentorship and guide team members on best practices in SQL, ETL, and cloud data solutions.
- Stay up to date with emerging cloud and data engineering technologies and demonstrate a continuous learning mindset.

Required Skills & Qualifications:
- SQL Expertise: Strong hands-on experience with DDL, DML, query optimization, and performance tuning.
- Programming Languages: Proficiency in Python or Java for data processing and automation.
- Data Modelling: Good understanding of entity-relationship modelling, star schema, and the Kimball methodology.
- Cloud Data Engineering: Hands-on experience with Azure Synapse, Azure Data Factory, Azure Data Lake, Databricks, and Snowflake.
- ETL Development: Experience building scalable ETL/ELT pipelines and data ingestion workflows; ability to learn and apply Snowflake concepts as needed.
- Communication Skills: Strong presentation and communication skills to engage both technical and business stakeholders in strategic discussions.
- Financial Services Domain (optional): Knowledge of financial services.

Good to Have Skills:
- DevOps & CI/CD: Experience with Git, Jenkins, Docker, and automated deployments.
- Streaming Data Processing: Experience with Spark, Kafka, or real-time event-driven architectures.
- Data Governance & Security: Understanding of data security, compliance, and governance frameworks.
- Experience in AWS: Knowledge of AWS cloud data solutions (Glue, Redshift, Athena, etc.) is a plus.
Posted 3 weeks ago
5.0 - 10.0 years
5 - 11 Lacs
Noida, Hyderabad
Work from Office
Job Description:
- 5+ years of overall IT experience, including hands-on experience in Big Data technologies.
- Mandatory: hands-on experience in Python and PySpark.
- Build PySpark applications using Spark DataFrames in Python (an illustrative sketch follows below).
- Experience optimizing Spark jobs that process huge volumes of data.
- Hands-on experience with version control tools like Git.
- Experience with Amazon's analytics services: Amazon EMR, Amazon Athena, AWS Glue.
- Experience with Amazon's compute services (AWS Lambda, Amazon EC2), storage services (S3), and related services such as SNS.
- Good to have: knowledge of data warehousing concepts (dimensions, facts, schemas: snowflake, star, etc.).
- Experience with columnar storage formats: Parquet, Avro, ORC.
- Well versed in compression techniques: Snappy, Gzip.
- Good to have: knowledge of at least one AWS database (Aurora, RDS, Redshift, ElastiCache, DynamoDB).

Keywords: Big Data, AWS, Python, PySpark, AWS services (IAM, Lambda, EMR, Glue)
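A small PySpark sketch of the DataFrame, columnar-format, and compression work this posting lists; the S3 locations and columns are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-compaction").getOrCreate()

# Hypothetical S3 locations.
events = spark.read.json("s3://example-raw/events/")

cleaned = (
    events
    .filter(F.col("event_type").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)

# Columnar Parquet output with snappy compression, partitioned by date so
# Athena/EMR queries can prune partitions instead of scanning everything.
(
    cleaned.write
    .mode("overwrite")
    .option("compression", "snappy")
    .partitionBy("event_date")
    .parquet("s3://example-curated/events/")
)
```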
Posted 3 weeks ago
8.0 - 13.0 years
16 - 25 Lacs
Bengaluru
Remote
Must-Have Skills:
- Strong experience with Google BigQuery: data modeling, query optimization, and performance tuning (an illustrative query sketch follows below).
- Proficient in building and managing data pipelines and ETL/ELT workflows.
- Solid SQL skills and experience working with large datasets.
- Experience with Looker: creating/modifying dashboards and understanding LookML.
- Experience with version control (e.g., Git) and CI/CD for data solutions.
- Ability to work in Agile environments and with remote teams.

Good-to-Have Skills:
- Exposure to GCP services beyond BigQuery (e.g., Dataflow, Cloud Functions).
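A minimal sketch of a parameterized BigQuery query from Python with the google-cloud-bigquery client; the project, dataset, and table names are placeholders:

```python
from google.cloud import bigquery  # assumes google-cloud-bigquery is installed

client = bigquery.Client()  # uses application-default credentials

# Project, dataset, and table are placeholders. Filtering on the partition
# column keeps the scan (and the bill) limited to a single day of data.
sql = """
    SELECT region, COUNT(*) AS orders
    FROM `example-project.sales.orders`
    WHERE order_date = @day
    GROUP BY region
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("day", "DATE", "2024-01-01")]
)

for row in client.query(sql, job_config=job_config).result():
    print(row.region, row.orders)
```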
Posted 3 weeks ago
2.0 - 6.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Role & responsibilities:
- Develop and maintain scalable ETL/ELT pipelines using Databricks (PySpark, Delta Lake).
- Design and optimize data models in AWS Redshift for performance and scalability.
- Manage Redshift clusters and EC2-based deployments, ensuring reliability and cost efficiency.
- Integrate data from diverse sources (structured/unstructured) into centralized data platforms.
- Implement data quality checks, monitoring, and logging across pipelines.
- Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality datasets.

Required Skills & Experience:
- 3-6 years of experience in data engineering.
- Strong expertise in Databricks (Spark, Delta Lake, notebooks, job orchestration).
- Hands-on experience with AWS Redshift (cluster management, performance tuning, workload optimization); an illustrative load sketch follows below.
- Proficiency with AWS EC2, S3, and related AWS services.
- Strong SQL and Python skills.
- Experience with CI/CD and version control (Git).

Preferred candidate profile:
We are seeking a skilled Data Engineer with hands-on experience in Databricks and AWS Redshift (including EC2 deployments) to design, build, and optimize data pipelines that support analytics and business intelligence initiatives.
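A minimal sketch of bulk-loading staged S3 Parquet files into Redshift with the redshift_connector driver; the cluster endpoint, credentials, IAM role, and table are placeholders:

```python
import redshift_connector  # assumes the redshift_connector driver is installed

# All connection details and resource identifiers are placeholders.
conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="analytics",
    user="etl_user",
    password="secret",
)

# COPY is the standard bulk-load path into Redshift: it reads the staged
# Parquet files directly from S3 in parallel across the cluster slices.
copy_sql = """
    COPY analytics.fact_orders
    FROM 's3://example-curated/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

cur = conn.cursor()
cur.execute(copy_sql)
conn.commit()
conn.close()
```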
Posted 3 weeks ago
9.0 - 14.0 years
22 - 37 Lacs
Noida, Delhi / NCR
Hybrid
Primary Responsibilities:
- Collaborate with cross-functional teams to understand business requirements and provide database solutions that support company objectives.
- Work with software design and development teams on database architectures and data modelling, building database schemas, tables, procedures, and views.
- Work with system administrators on hardware and software installations and configurations.
- Administer, monitor, and maintain PostgreSQL databases, ensuring their integrity, security, and high performance.
- Design, implement, and maintain the database architecture, including high-availability and disaster recovery solutions.
- Manage database environments within cloud platforms.
- Participate in database testing and quality assurance processes to validate database changes and updates.
- Analyze and tune PostgreSQL databases for optimal efficiency by identifying and addressing performance bottlenecks, implementing query optimization, and fine-tuning database configurations.
- Analyze and sustain capacity and performance requirements, including effective use of indexes, enabling parallel query execution, and other DBMS features, such as query store.
- Create and manage efficient database indexes to enhance query performance and reduce database access times.
- Maintain thorough documentation of database configurations, procedures, and best practices.
- Add and remove users, administer roles and permissions, audit, and check for security problems.

Required Qualifications:
- 8+ years of experience as a PostgreSQL data engineer, with experience in design, implementation, and management.
- 5+ years of strong experience with tools like Terraform, Kubernetes, Docker, and Packer.
- 5+ years of hands-on experience with CI/CD tools (GitHub Actions, Jenkins, etc.).
- 4+ years of proven experience managing databases on cloud platforms.
- 3+ years of experience with PII (Personally Identifiable Information) or PHI (Personal Health Information).
- 5+ years of experience with change management systems, such as ServiceNow and Jira.
- 5+ years of experience with SQL waits, locking, blocking, and resource contention.

Preferred Qualifications:
- Bachelor's degree in Computer Science or a related field.
- Cloud Platforms: familiarity with AWS, Azure, or GCP services.
- Monitoring & Logging: knowledge of monitoring tools (e.g., Prometheus, Grafana, Datadog) and logging tools (e.g., ELK Stack).
- Version Control: experience with Git and branching strategies.
- Infrastructure as Code: proficiency in tools like Terraform and Ansible.
- System Implementation & Integration: proven experience in system implementation and integration projects.
- Consulting Skills: ability to consult with clients and stakeholders to understand their needs and provide expert advice.
Posted 3 weeks ago
5.0 - 10.0 years
9 - 12 Lacs
Pune
Work from Office
Hiring for a leading MNC for the position of Data Engineer, based at Kharadi (Pune).

Designation: Data Engineer
Shift Timing: 12 PM to 9 PM (cab facility provided)
Work Mode: Work from Office

Key Responsibilities:
- Liaise with stakeholders to define data requirements
- Manage Snowflake & SQL databases
- Build and optimize semantic models for reporting
- Lead modern data architecture adoption
- Reverse engineer complex data structures
- Mentor peers on data governance best practices
- Champion Agile/SCRUM methodologies

Preferred Candidates:
- Experience: 5+ years in data engineering/BI roles
- Strong ETL, data modelling, governance, and lineage documentation
- Expertise in Snowflake, Azure (SQL Server, Data Factory, Logic Apps, App Services), and Power BI
- Advanced SQL & Python (OOP, JSON/XML)
- Experience with medallion architecture, Fivetran, and DBT
- Application development using Python, Streamlit, Flask, Node.js, Power Apps
- Agile/Scrum project management
- Bachelor's/Master's in Math, Stats, CS, IT, or Engineering
Posted 3 weeks ago
6.0 - 10.0 years
14 - 22 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
We seek a senior-level AWS Data Engineer who shares our passion for innovation and change. This role is critical to helping our business partners evolve and adapt to consumers' personalized expectations in this new technological era. We are looking for a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have a strong background in designing, developing, and managing data pipelines, working with cloud technologies, and optimizing data workflows. You will play a key role in supporting our data-driven initiatives and ensuring the seamless integration and analysis of large datasets.

What will help you succeed:
- Fluent English.
- Python, PySpark, Spark SQL, and SQL.
- AWS data services, including S3, S3 Tables, Glue, EMR, EC2, Athena, Redshift, Step Functions, and Lambda functions (an illustrative Athena sketch follows below).
- Design scalable data models: develop and maintain conceptual, logical, and physical data models for structured and semi-structured data in AWS environments.
- Optimize data pipelines: work closely with data engineers to align data models with AWS-native data pipeline design and ETL best practices.
- AWS cloud data services: design and implement data solutions leveraging AWS Redshift, Athena, Glue, S3, Lake Formation, and AWS-native ETL workflows.
- Design, develop, and maintain scalable data pipelines and ETL processes using AWS services (Glue, Lambda, Redshift).
- Write efficient, reusable, and maintainable Python and PySpark scripts for data processing and transformation.
- Optimize SQL queries for performance and scalability; expertise in writing complex SQL queries and optimizing them for performance.
- Monitor, troubleshoot, and improve data pipelines for reliability and performance.
- Focus on ETL automation using Python and PySpark: design, build, and maintain efficient data pipelines, ensuring data quality and integrity for various applications.

This job can be filled in the Pune, Bangalore, or Hyderabad locations.
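A minimal boto3 sketch of running an Athena query and reading the result; the region, database, table, and output bucket are placeholders:

```python
import time
import boto3  # assumes AWS credentials are configured in the environment

athena = boto3.client("athena", region_name="us-east-1")

# Database, table, and output bucket are placeholders.
response = athena.start_query_execution(
    QueryString="""
        SELECT region, COUNT(*) AS orders
        FROM orders
        WHERE order_date = DATE '2024-01-01'
        GROUP BY region
    """,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query finishes, then print the result rows (row 0 is the header).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"][1:]:
        print([col.get("VarCharValue") for col in row["Data"]])
```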
Posted 3 weeks ago
5.0 - 10.0 years
6 - 16 Lacs
Bangalore Rural, Bengaluru
Work from Office
Job Summary:
We are seeking a highly skilled and experienced Data Modeler to join our data engineering team. The ideal candidate will bring deep expertise in designing scalable and efficient data models for cloud platforms, particularly with a strong background in Oracle Data Warehousing and Databricks Lakehouse architecture. You will play a critical role in our strategic migration from an on-prem Oracle data warehouse to a modern cloud-based Databricks platform.

Required Skills & Experience:
- 5+ years of hands-on experience in data modeling, including conceptual, logical, and physical design.
- Proven experience migrating large-scale Oracle DWH environments to Databricks Lakehouse or similar platforms.
- Strong expertise in Oracle database schemas, PL/SQL, and performance tuning.
- Proficiency in Databricks, Delta Lake, Spark SQL, and DataFrame APIs.
- Experience designing models optimized for cloud platforms (preferably AWS or Azure).
- Deep knowledge of dimensional modeling techniques (Star/Snowflake).
- Familiarity with tools and practices for metadata management, data lineage, and governance.
- Strong analytical and communication skills with the ability to work collaboratively in Agile teams.
- Ability to document and communicate data model designs to both technical and non-technical stakeholders.
Posted 3 weeks ago
0.0 years
3 - 4 Lacs
Gurugram
Work from Office
We are looking for a motivated and enthusiastic Trainee Data Engineer to join our Engineering team. This is an excellent opportunity for recent graduates to start their career in data engineering, work with modern technologies, and learn from experienced professionals. The candidate should be eager to learn, curious about data, and willing to contribute to building scalable and reliable data systems.

Responsibilities:
- Understand and align with the values and vision of the organization.
- Adhere to all company policies and procedures.
- Support the development and maintenance of data pipelines under supervision.
- Assist in handling data ingestion, processing, and storage tasks.
- Learn and contribute to database management and basic data modeling.
- Collaborate with team members to understand project requirements.
- Document assigned tasks, processes, and workflows.
- Stay proactive in learning new tools, technologies, and best practices in data engineering.

Required Candidate Profile:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Fresh graduates or candidates with up to 1 year of experience are eligible.

Apply Link: https://leewayhertz.zohorecruit.in/jobs/Careers/32567000019403095/Trainee-Data-Engineer?source=CareerSite

LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
Posted 3 weeks ago
18.0 - 25.0 years
15 - 30 Lacs
Hyderabad
Work from Office
Greetings from TechnoGen!

We thank you for taking the time to share your competencies and skills, and for allowing us an opportunity to tell you about TechnoGen; we understand that your experience and expertise are relevant to the current openings with our clients.

About TechnoGen: https://technogenindia.com/
TechnoGen India Pvt. Ltd. is a boutique Talent & IT Solutions company, founded in 2008, that has been serving global customers for over two decades. Talent Solutions: We assist several GCCs, global MNCs, and IT majors with their critical and unique IT talent needs through our services around Recruitment Process Outsourcing (RPO), contract staffing, permanent hiring, Hire-Train-Deploy (HTD), Build-Operate-Transfer (BOT), and offshore staffing.

Job Title: Data Engineer
Required Experience: 8 years
Work Mode: WFO, 4 days from office
Shift Time: UK shift, 12:00 PM IST to 09:00 PM IST
Location: Hyderabad

Job Summary:
We are seeking a Data Engineer and Problem Manager, based out of our Technology & Innovation Center in Hyderabad, India, reporting to the IT Director for Enterprise Data and Analytics. The person in this role will be responsible for managing, monitoring, and maintaining scalable data integration and analytics pipelines to support enterprise reporting and data-driven decision-making. This role requires close collaboration with cross-functional teams to integrate data from various source systems into a centralized, cloud-based data warehouse, primarily leveraging tools such as Google BigQuery, Python, SQL, DBT, and Cloud Composer (Airflow). The Data Engineer will also be responsible for implementing data quality checks, managing orchestration workflows, and delivering business-ready datasets aligned with the enterprise data strategy.

What Your Impact Will Be:
- Experience in incident management, problem management, and RCA activity; ITIL certification preferred.
- Experience in O2C and R2P business processes.
- Monitor and analyse data integration pipelines that ingest structured and semi-structured data from enterprise systems (e.g., ERP, CRM, e-commerce, order management) into a centralized cloud data warehouse using Google BigQuery.
- Build analytics-ready pipelines that transform raw data into trusted, curated datasets for reporting, dashboards, and advanced analytics.
- Implement transformation logic using DBT to create modular, maintainable, and reusable data models that evolve with business needs.
- Apply BigQuery best practices, including partitioning, clustering, and query optimization, to ensure high performance and scalability.
- Automate data workflows using Cloud Composer (Airflow), ensuring reliable execution, task dependency management, and timely data delivery.
- Develop efficient, reusable Python and SQL code for data ingestion, transformation, validation, and performance tuning across the pipeline lifecycle.
- Establish robust data quality checks and testing strategies to validate both technical accuracy and alignment with business logic.
- Collaborate with cross-functional teams, including data analysts, BI developers, and product owners, to understand integration needs and deliver impactful, business-aligned data solutions.
- Leverage modern ETL platforms such as Ascend.io, Databricks, Dataflow, or Fivetran to accelerate development and improve observability and orchestration.
- Contribute to technical documentation, CI/CD workflows, and monitoring processes to drive transparency, reliability, and continuous improvement across the data engineering ecosystem.
What We're Looking For:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related technical field.
- 8+ years of hands-on experience in data engineering with a focus on data integrations, warehousing, and analytics pipelines.
- Hands-on experience in troubleshooting, diagnosing problems, finding root causes, and communicating them to the development team.
- Techno-functional knowledge of ERP application integration in the O2C and R2P areas.
- Hands-on experience with:
  - Google BigQuery as a centralized data warehousing and analytics platform (an illustrative partitioning/clustering sketch follows below).
  - Python for scripting, data processing, and integration logic.
  - SQL for data transformation, complex querying, and performance tuning.
  - DBT for building modular, maintainable, and reusable transformation models.
  - Airflow / Cloud Composer for orchestration, dependency management, and job scheduling.
- Solid understanding of ITIL incident management and problem management.
- Strong knowledge of data testing frameworks, validation methods, and best practices.

Preferred Skills (Optional):
- Experience with Ascend.io or comparable ETL platforms such as Databricks, Dataflow, or Fivetran.
- Familiarity with data cataloging and governance tools like Collibra.
- Knowledge of CI/CD practices, Git-based workflows, and infrastructure automation tools.
- Exposure to event-driven or real-time streaming pipelines using tools like Pub/Sub or Kafka.
- Strong problem-solving and analytical mindset, with the ability to think broadly, identify innovative solutions, and quickly learn new technologies, programming languages, and frameworks.
- Excellent communication skills, both written and verbal.
- Ability to work in a fast-paced and collaborative environment.
- Good experience with Agile methodologies like Scrum and Kanban, and with managing IT backlogs.

What It's Like to Work Here:
We are a purpose-driven company aiming to empower the next generation to explore the wonder of childhood and reach their full potential. We live up to our purpose through the following behaviors:
- We collaborate: Being part of one team with shared values and common goals, we know every person counts and that working closely together always brings better results. Partnership is our process, and our collective capabilities are our superpowers.
- We innovate: We always aim to find new and better ways to create innovative products and experiences. No matter where you work in the organization, you can always make a difference and have a real impact. We welcome new ideas and value new initiatives that challenge conventional thinking.
- We execute: We are a performance-driven company. We strive for excellence and are focused on pursuing best-in-class outcomes. We believe in accountability and ownership and know that our people are at their best when they are empowered to create and deliver results.
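A minimal sketch of the BigQuery partitioning and clustering practice this posting mentions, using the google-cloud-bigquery client; the project, dataset, and schema are placeholders:

```python
from google.cloud import bigquery  # assumes google-cloud-bigquery is installed

client = bigquery.Client()  # uses application-default credentials

# Project, dataset, table, and schema are placeholders.
table = bigquery.Table(
    "example-project.analytics.order_events",
    schema=[
        bigquery.SchemaField("event_date", "DATE"),
        bigquery.SchemaField("customer_id", "STRING"),
        bigquery.SchemaField("amount", "NUMERIC"),
    ],
)

# Partition by date and cluster by customer so queries that filter on either
# column scan less data, the core of BigQuery cost and performance tuning.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="event_date"
)
table.clustering_fields = ["customer_id"]

client.create_table(table)
```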
Posted 3 weeks ago
5.0 - 10.0 years
15 - 25 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Tech Stack: AWS Big Data Stack
Expertise in ETL, SQL, Python, and AWS tools like Redshift, S3, Glue, Data Pipeline, and Lambda is a must. Good to have knowledge of Glue Workflows, Step Functions, QuickSight, Athena, Terraform, and Docker.

Responsibilities:
- Assist in the analysis, design, and development of a roadmap, design patterns, and implementation based upon a current vs. future state from an architecture viewpoint.
- Participate in data-related technical and business discussions relative to a future serverless architecture.
- Work with our enterprise customers and migrate data into the cloud.
- Set up scalable ETL processes to move data into a cloud warehouse.
- Deep understanding of data warehousing, dimensional modelling, ETL architecture, data conversion/transformation, database design, data warehouse optimization, data mart development, etc.

Location: Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 3 weeks ago
3.0 - 5.0 years
6 - 10 Lacs
Bangalore Rural, Bengaluru
Work from Office
Job Description
Experience: 4+ years

Required Skills:
- Use SQL to query databases, extract and manipulate data for reporting purposes, and perform complex joins and aggregations to generate insights related to growth metrics.
- Strong in Python.
- Proficient in ETL (Extract, Transform, Load) processes, data warehousing solutions (Databricks), and big data technologies (e.g., Hadoop, Spark).
- Proficient in Structured Streaming and the Delta file structure (an illustrative streaming sketch follows below).
- Must have experience in data streaming or Kafka.

Note: Interested candidates can send their resume to jyotiprakash@mirafra.com
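A minimal Spark Structured Streaming sketch reading from Kafka and writing a Delta table, assuming a runtime with the Kafka and Delta Lake connectors available (e.g., a Databricks cluster); the broker, topic, and paths are placeholders:

```python
from pyspark.sql import SparkSession

# Broker, topic, and mount paths are placeholders.
spark = SparkSession.builder.appName("events-stream").getOrCreate()

stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the payload before writing.
events = stream.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .outputMode("append")
    .start("/mnt/delta/events")
)
query.awaitTermination()
```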
Posted 3 weeks ago
5.0 - 9.0 years
20 - 32 Lacs
Pune, Chennai, Bengaluru
Hybrid
If interested, please share your CV along with the details below, filled in, to snidafazli@altimetrik.com

Name (as per Aadhaar card):
Number:
Email ID:
Current CTC:
Fixed CTC:
Expected CTC:
Holding any offers:
Current Company:
Payroll Company:
Notice Period:
Mention exact LWD:
Current Location:
Preferred Location:
Total Experience:
Relevant experience (please mention in years below):
Python:
Git:
GenAI:
MLOps:
Posted 3 weeks ago