4.0 - 8.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
As an Azure API Developer in a senior developer delivery role based in Mumbai, Pune, or Bangalore, you will be responsible for implementing and managing Azure API solutions. Your key skill set should include proficiency in Azure API Management, Azure Functions, Azure Storage, security, infrastructure as code, clusters, jobs, containers, workspaces, Python programming, RESTful API design, OAuth, Swagger, CI/CD engineering with Azure DevOps, Docker, Kubernetes, Spark programming (PySpark), SQL, and Git. Strong presentation and communication skills when interacting with stakeholders are also essential. It would be beneficial to have experience with Databricks workspaces, notebooks, log analytics, troubleshooting APIs, GraphQL, working effectively within a multi-team environment, documenting APIs and data workflows, and Agile knowledge. Your primary focus will be on developing, implementing, and maintaining Azure API solutions while collaborating with various teams to ensure successful project delivery. Your expertise in Azure services, programming languages, and CI/CD practices will play a crucial role in the organization's technology infrastructure. If you are passionate about Azure development, have a strong technical background, and enjoy working in a dynamic environment, this role offers an exciting opportunity to contribute to cutting-edge projects and drive innovation within the organization.
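For illustration, here is a minimal sketch of the kind of HTTP-triggered Azure Function a role like this builds, assuming the classic (v1) Python programming model for Azure Functions; the route, parameter name, and response payload are invented placeholders, not part of the posting.

```python
# Minimal sketch of an HTTP-triggered Azure Function exposing a REST endpoint.
# Assumes the v1 Python programming model; the "orders" resource is hypothetical.
import json
import logging

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    """Return a JSON document for GET /api/orders?id=<order_id>."""
    order_id = req.params.get("id")
    if not order_id:
        return func.HttpResponse(
            json.dumps({"error": "missing required query parameter 'id'"}),
            status_code=400,
            mimetype="application/json",
        )
    logging.info("Fetching order %s", order_id)
    # A real implementation would read from Azure Storage or a database here.
    body = {"id": order_id, "status": "PLACED"}
    return func.HttpResponse(json.dumps(body), mimetype="application/json")
```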
Posted 4 days ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
As a Senior Staff Software Engineer in Data Lakehouse Engineering, you will play a crucial role in designing and implementing the Data Lakehouse platform, supporting both Data Engineering and Data Lakehouse applications. Your responsibilities will include overseeing Data Engineering pipeline productionization: end-to-end data pipelines, model development, deployment, monitoring, and refresh. Additionally, you will drive technology development and architecture to ensure the platforms, systems, tools, models, and services meet the technical standards for security, quality, reliability, usability, scalability, performance, efficiency, and operability required by the evolving needs of Wex and its customers. It is essential to balance both near-term and long-term requirements in collaboration with other teams across the organization.

Your technical ownership will extend to Wex's Data Lakehouse architecture and service technology implementations, emphasizing architecture, technical direction, engineering best practices, and quality/compliance. Collaboration with the Platform Engineering and Data Lakehouse Engineering teams will be a key aspect of your role. The vision behind Wex's Data Lakehouse is to create a unified, scalable, and intelligent data infrastructure that enables the organization to leverage its data effectively, with goals such as data democratization, agility and scalability, and advanced insights and innovation through Data & AI technology.

We are seeking a highly motivated and experienced Software Engineer to join our organization and contribute to building out the Data Lakehouse Platform for Wex. Reporting to the Sr. Manager of Data Lakehouse Engineering in Bangalore, the ideal candidate will possess deep technical expertise in building and scaling data lakehouse environments, coupled with strong leadership and communication skills to align efforts across the organization.

Your impact will be significant as you lead and drive the development of the technology and platform for the company's Data Lakehouse requirements, ensuring functional richness, reliability, performance, and flexibility of the platform. You will be instrumental in designing the architecture, leading the implementation of the Data Lakehouse system and services, and challenging the status quo to drive technical solutions that effectively serve the broad risk area of Wex. Collaboration with various engineering teams, information security teams, and external partners will be essential to ensure the security, privacy, and integration of the Data Lake Platform. Moreover, you will be responsible for creating, prioritizing, managing, and executing roadmaps and project plans, as well as reporting on the status of development, quality, operations, and system performance.

Your role will involve driving the technical vision and strategy of the Data Lakehouse to meet business needs, setting high standards for your team, providing technical guidance and mentorship, and fostering an environment of continuous learning and innovation. Upholding strong engineering principles and ensuring a culture of transparency and inclusion will be integral to your leadership. To be successful in this role, you should bring at least 10 years of software design and development experience at large scale, along with strong software development skills in your chosen programming language.
Experience with Data Lakehouse formats, Spark programming, cloud architecture tools and services, CI/CD automation, and agile development practices will be advantageous. Additionally, you should possess excellent analytical skills, mentorship capabilities, and strong written and verbal communication skills. In terms of personal characteristics, you should demonstrate a collaborative, mission-driven style, high standards of integrity and corporate stewardship, and the ability to operate in a fast-paced entrepreneurial environment. Leading with empathy, fostering a culture of trust and transparency, and communicating effectively in various settings will be key to your success. You should also exhibit talent development and scouting abilities, intellectual curiosity, learning agility, and the capacity to drive change through influence and stakeholder management across a complex business environment.
Posted 1 week ago
6.0 - 10.0 years
35 - 37 Lacs
Pune
Work from Office
Expertise in Java/Python, Scala, and Spark architecture. Ability to comprehend business requirements and translate them into technical requirements. Familiarity with the development life cycle, including CI/CD pipelines, and with agile methodology. Required candidate profile: experience with big-data technologies (Spark/Databricks and Hadoop/ADLS) is a must, along with experience in at least one cloud platform: Azure (preferred), AWS, or Google Cloud.
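As a purely illustrative sketch of the read-transform-write batch job shape this posting describes, assuming PySpark and an ADLS-style path (the storage account, container, and column names are invented):

```python
# Minimal PySpark batch-job skeleton: read raw data, aggregate, write curated output.
# Paths and columns are placeholders, not a prescribed layout.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-aggregate").getOrCreate()

orders = spark.read.parquet("abfss://raw@account.dfs.core.windows.net/orders/")

daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "country")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "abfss://curated@account.dfs.core.windows.net/orders_daily/"
)
spark.stop()
```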
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Applications Development Senior Programmer Analyst position entails participating in the establishment and implementation of new or revised application systems and programs in collaboration with the Technology team. Your main objective in this role is to contribute to applications systems analysis and programming activities. Responsibilities include conducting tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establishing and implementing new or revised applications systems and programs to meet specific business needs or user areas. You will be responsible for monitoring and controlling all phases of the development process, including analysis, design, construction, testing, and implementation. Providing user and operational support on applications to business users is also a key aspect of your role. You will utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, evaluate business processes, system processes, and industry standards, and make evaluative judgments. Additionally, you will recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality. Consulting with users/clients and other technology groups on issues, recommending advanced programming solutions, and installing and assisting customer exposure systems are also part of your responsibilities. Ensuring that essential procedures are followed, helping define operating standards and processes, and serving as an advisor or coach to new or lower-level analysts are essential tasks in this role. You will be expected to operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a subject matter expert to senior stakeholders and/or other team members.

Qualifications for this position include:
- 8 to 12 years of application development experience with Java/J2EE technologies
- Experience with Core Java/J2EE applications and complete command of OOP and design patterns
- Proficiency in data structures and algorithms
- Thorough knowledge of and hands-on experience with big data technologies: Hadoop, with experience in Hive or Java-based Spark programming
- Implementation of, or participation in, complex project execution in the Big Data/Spark ecosystem
- Experience working in an agile environment following the best practices of agile Scrum
- Expertise in designing and optimizing software solutions for performance and stability
- Strong troubleshooting and problem-solving skills
- Experience in test-driven development

Education required for this role: Bachelor's degree/University degree or equivalent experience. This is a full-time position in the Technology Job Family Group, specifically within the Applications Development Job Family.
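To make the Hive-plus-Spark requirement concrete, here is a hedged sketch of querying a Hive metastore table from Spark; the posting centers on Java, but the same pattern is shown here in PySpark for consistency with the other examples, and the database and table names are hypothetical.

```python
# Illustrative Spark-on-Hive access: enableHiveSupport() lets Spark resolve
# tables registered in the Hive metastore. "risk_db.trades" is invented.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-spark-example")
    .enableHiveSupport()
    .getOrCreate()
)

# Query an existing Hive table with plain SQL and keep only recent rows.
recent = spark.sql("""
    SELECT trade_id, book, notional, trade_date
    FROM risk_db.trades
    WHERE trade_date >= date_sub(current_date(), 7)
""")
recent.show(10)
```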
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
The Data Engineer role involves building data engineering solutions using cutting-edge data techniques. You will collaborate with product owners, customers, and technologists to deliver data products/solutions in an agile environment. Responsibilities include designing and developing big data solutions and partnering with domain experts, product managers, analysts, and data scientists. You will work on PySpark and Python, build client pipelines from various sources, ensure automation through CI/CD, and define needs for data platform maintainability, testability, performance, security, quality, and usability. Additionally, you will drive the implementation of consistent patterns, reusable components, and coding standards for data engineering processes. Converting Talend pipelines into PySpark and Python, tuning big data applications for optimal performance, evaluating new IT developments, and recommending system enhancements are also part of the role. You should have 4-8 years of IT experience with at least 4 years in PySpark and Python. Experience in designing and developing data pipelines, Spark programming, machine learning libraries, containerization technologies, DevOps, and team management is required. Knowledge of Oracle performance tuning, SQL, Autosys, and Unix scripting is also beneficial. The ideal candidate holds a Bachelor's degree or equivalent experience. Please note that this job description provides a summary of the work performed, and additional job-related duties may be assigned as needed.
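The posting's emphasis on reusable components and CI/CD suggests the common pattern of keeping transforms as pure functions so they can be unit-tested in a pipeline. A minimal sketch under that assumption follows; the function, columns, and paths are invented for illustration.

```python
# Sketch of a reusable, unit-testable PySpark transform: logic lives in a pure
# function, so CI/CD can test it with a tiny in-memory DataFrame. Names are
# illustrative, not taken from the posting.
from pyspark.sql import DataFrame, SparkSession, functions as F
from pyspark.sql.window import Window


def dedupe_latest(df: DataFrame) -> DataFrame:
    """Keep only the most recent record per customer_id."""
    w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
    return df.withColumn("rn", F.row_number().over(w)).filter("rn = 1").drop("rn")


if __name__ == "__main__":
    spark = SparkSession.builder.appName("customer-dedupe").getOrCreate()
    customers = spark.read.parquet("/data/raw/customers/")
    dedupe_latest(customers).write.mode("overwrite").parquet("/data/clean/customers/")
```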
Posted 1 week ago
5.0 - 8.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Long Description: Experience and expertise in at least one of the following languages: Java, Scala, Python. Experience and expertise in Spark architecture. Experience in the range of 6-10+ years. Good problem-solving and analytical skills. Ability to comprehend business requirements and translate them into technical requirements. Good communication and collaborative skills, both within the team and across vendors. Familiarity with the development life cycle, including CI/CD pipelines. Proven experience in, and interest in, supporting existing strategic applications. Familiarity working with agile methodology. Mandatory skills: Scala programming. Experience: 5-8 years.
Posted 2 weeks ago
7.0 - 12.0 years
9 - 12 Lacs
Bengaluru
Work from Office
Responsibilities: * Design, develop, test & maintain Scala applications using Spark. * Collaborate with cross-functional teams on project delivery. * Optimize application performance through data analysis.
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Applications Development Senior Programmer Analyst position involves participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your role will contribute to applications systems analysis and programming activities.

Responsibilities:
- Conduct feasibility studies, time and cost estimates, IT planning, risk technology, and applications development, and establish new or revised applications systems and programs to meet specific business needs
- Monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, providing user and operational support to business users
- Utilize specialty knowledge of applications development to analyze complex problems, and evaluate business processes, system processes, and industry standards
- Recommend and develop security measures post-implementation to ensure successful system design and functionality
- Consult with users/clients and technology groups, recommend advanced programming solutions, and assist with customer exposure systems
- Ensure adherence to essential procedures and help define operating standards and processes
- Serve as an advisor or coach to new or lower-level analysts
- Operate with a limited level of direct supervision, exercising independence of judgment and autonomy
- Act as a subject matter expert to senior stakeholders and team members
- Assess risk in business decisions with consideration for the firm's reputation and compliance with laws and regulations

Qualifications:
- 8 to 12 years of application development experience with Java/J2EE technologies
- Proficiency in Core Java/J2EE applications with expertise in OOP and design patterns
- Strong knowledge of data structures and algorithms
- Experience with big data (Hadoop) or Java-based Spark programming
- Proficiency in designing and optimizing software solutions for performance and stability
- Troubleshooting, problem-solving, and test-driven development expertise
- Bachelor's degree or equivalent experience

If you are a person with a disability and need a reasonable accommodation to use our search tools or apply for a career opportunity, please review Accessibility at Citi.
Posted 2 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Noida, India
Work from Office
Key Responsibilities:
1. Architect and design end-to-end data pipelines, from source systems to the data warehouse.
2. Lead the development of scalable Python/Spark-based data processing workflows.
3. Define and implement data modeling standards for the DWH, including fact/dimension schemas and historical handling (a hedged illustration follows this posting).
4. Oversee performance tuning of Python, Spark, and ETL loads.
5. Ensure robust data integration with Tableau reporting by designing data structures optimized for BI consumption.
6. Mentor junior engineers and drive engineering best practices.
7. Work closely with business stakeholders, developers, and product teams to align data initiatives with business goals.
8. Define SLAs, error handling, logging, monitoring, and alerting mechanisms across pipelines.

Must Have:
1. Strong Oracle SQL expertise and deep Oracle DWH experience.
2. Proficiency in Python and Spark, with experience handling large-scale data transformations.
3. Experience in building batch data pipelines and managing dependencies.
4. Solid understanding of data warehousing principles and dimensional modeling.
5. Experience working with reporting tools like Tableau.
6. Good to have: experience in cloud-based DWHs (like Snowflake) for future-readiness.

Mandatory Competencies: ETL - DataStage; Behavioral - Communication and collaboration; BI and Reporting Tools - Tableau; QA/QE - QA Analytics - Data Analysis; Database - Database Programming - SQL; Big Data - Spark; Programming Language - Python / Python Shell; ETL - Ab Initio
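As a hedged sketch of the fact/dimension handling the responsibilities mention, the following resolves natural keys to surrogate keys during a fact load; the schema, tables, and columns are hypothetical, and a Hive-compatible metastore is assumed for `spark.table`.

```python
# Hypothetical fact-table load against a dimension table: join the staging
# feed to the dimension to swap natural keys for surrogate keys.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fact-sales-load").enableHiveSupport().getOrCreate()

staging = spark.read.parquet("/staging/sales/")
dim_product = spark.table("dwh.dim_product")  # surrogate keys live here

fact_sales = (
    staging.alias("s")
    .join(dim_product.alias("d"), "product_code")  # natural key -> surrogate key
    .selectExpr("d.product_sk", "s.sold_at", "s.quantity", "s.net_amount")
)

# overwrite=True makes reruns of the load idempotent for the target partition set
fact_sales.write.insertInto("dwh.fact_sales", overwrite=True)
```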
Posted 2 weeks ago
5.0 - 9.0 years
9 - 18 Lacs
Hyderabad, Pune, Chennai
Work from Office
Spark Developer. Positions are open for PAN India locations. Share profiles to afreen.banu@in.experis.com along with alternate mail IDs and contact numbers.
Posted 2 weeks ago
12.0 - 14.0 years
5 - 10 Lacs
Bengaluru, Karnataka, India
On-site
Please note: candidates with a high notice period will not be considered (immediate joiners only). We are looking for a skilled and motivated Azure Databricks Data Engineer to join our dynamic team. The ideal candidate will have strong experience with Python and Spark programming, and expertise in building and optimizing data pipelines in Azure Databricks. You will play a pivotal role in leveraging Databricks workflows, Databricks Asset Bundles, and CI/CD pipelines using GitHub to deliver high-performance data solutions. A solid understanding of Data Warehousing and Data Mart architecture in Databricks is critical for success in this role. If you're passionate about data engineering, cloud technologies, and scalable data architecture, this role is for you.
Posted 3 weeks ago
12.0 - 14.0 years
5 - 10 Lacs
Chennai, Tamil Nadu, India
On-site
Please note: candidates with a high notice period will not be considered (immediate joiners only). We are looking for a skilled and motivated Azure Databricks Data Engineer to join our dynamic team. The ideal candidate will have strong experience with Python and Spark programming, and expertise in building and optimizing data pipelines in Azure Databricks. You will play a pivotal role in leveraging Databricks workflows, Databricks Asset Bundles, and CI/CD pipelines using GitHub to deliver high-performance data solutions. A solid understanding of Data Warehousing and Data Mart architecture in Databricks is critical for success in this role. If you're passionate about data engineering, cloud technologies, and scalable data architecture, this role is for you.
Posted 3 weeks ago
12.0 - 14.0 years
5 - 10 Lacs
Hyderabad, Telangana, India
On-site
Please note: candidates with a high notice period will not be considered (immediate joiners only). We are looking for a skilled and motivated Azure Databricks Data Engineer to join our dynamic team. The ideal candidate will have strong experience with Python and Spark programming, and expertise in building and optimizing data pipelines in Azure Databricks. You will play a pivotal role in leveraging Databricks workflows, Databricks Asset Bundles, and CI/CD pipelines using GitHub to deliver high-performance data solutions. A solid understanding of Data Warehousing and Data Mart architecture in Databricks is critical for success in this role. If you're passionate about data engineering, cloud technologies, and scalable data architecture, this role is for you.
Posted 3 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Noida, Hyderabad, Greater Noida
Work from Office
Streaming data. Technical skills requirements:
- Experience: 5+ years.
- Solid hands-on and solution-architecting experience in big-data technologies (AWS preferred).
- Hands-on experience with AWS DynamoDB, EKS, Kafka, Kinesis, Glue, and EMR.
- Hands-on experience with a programming language like Scala with Spark.
- Good command of, and working experience with, Hadoop MapReduce, HDFS, Hive, HBase, and/or NoSQL databases.
- Hands-on working experience with any of the data engineering analytics platforms (Hortonworks, Cloudera, MapR, AWS); AWS preferred.
- Hands-on experience with data ingestion tools: Apache NiFi, Apache Airflow, Sqoop, and Oozie.
- Hands-on working experience with data processing at scale using event-driven systems and message queues (Kafka, Flink, Spark Streaming); a minimal streaming sketch follows this posting.
- Hands-on working experience with AWS services such as EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, and Lake Formation.
- Hands-on working experience with AWS Athena.
- Experience building data pipelines for structured/unstructured, real-time/batch, and synchronous/asynchronous events using MQ, Kafka, and stream processing.
Mandatory skills: Spark, Scala, AWS, Hadoop
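The following is a minimal Spark Structured Streaming sketch of the Kafka-to-object-storage pattern listed above. Broker address, topic, schema, and S3 paths are placeholders, and it assumes the `spark-sql-kafka` connector is on the classpath.

```python
# Hedged sketch: consume JSON events from Kafka and land them as Parquet on S3.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # placeholder broker
    .option("subscribe", "payments")                      # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://data-lake/payments/")
    .option("checkpointLocation", "s3a://data-lake/_chk/payments/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```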
Posted 3 weeks ago
5.0 - 8.0 years
20 - 35 Lacs
Pune, Chennai, Bengaluru
Hybrid
Greetings from LTIMindtree!! About the job: Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a global leading manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision, will surely be a fulfilling experience. Location: Pan India. Key skill: Hadoop, Spark, SparkSQL, Java. Interested candidates kindly apply at the link below and share an updated CV to Hemalatha1@ltimindtree.com https://forms.office.com/r/zQucNTxa2U Skills needed: 1. Hands-on experience with Java and big data technology, including Spark, Hive, and Impala. 2. Experience with a streaming framework such as Kafka. 3. Hands-on experience with object storage; should be able to develop data archival and retrieval patterns (a short sketch follows this posting). 4. Good to have: experience with any public cloud platform like AWS, Azure, GCP, etc. 5. Ready to upskill as and when needed on project technologies, viz. Ab Initio. Why join us? Work in industry-leading implementations for Tier-1 clients. Accelerated career growth and global exposure. Collaborative, inclusive work environment rooted in innovation. Exposure to a best-in-class automation framework. Innovation-first culture: we embrace automation, AI insights, and clean data. Know someone who fits this perfectly? Tag them; let's connect the right talent with the right opportunity. DM or email to know more. Let's build something great together!
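One common shape for the archival/retrieval pattern mentioned in the skills list is partitioned Parquet on object storage; a hedged sketch follows (shown in PySpark for consistency with the other examples, though this posting is Java-centric; buckets and layout are assumptions, not a prescribed design).

```python
# Hedged sketch of archive-and-retrieve on object storage using date partitions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cold-archive").getOrCreate()

# Archive: push rows older than 90 days into a partitioned Parquet area.
events = spark.read.parquet("s3a://hot-store/events/")
cold = events.filter(F.col("event_date") < F.date_sub(F.current_date(), 90))
cold.write.mode("append").partitionBy("event_date").parquet("s3a://archive/events/")

# Retrieve: partition pruning keeps point-in-time reads cheap.
day = spark.read.parquet("s3a://archive/events/").filter("event_date = '2024-01-15'")
day.show(5)
```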
Posted 3 weeks ago
5.0 - 9.0 years
9 - 18 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Spark Developer. Share profiles to afreen.banu@in.experis.com.
Posted 4 weeks ago
5.0 - 8.0 years
20 - 35 Lacs
Pune, Chennai, Bengaluru
Hybrid
Greetings from LTIMindtree!! About the job: Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a global leading manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision, will surely be a fulfilling experience. F2F drive on 28-Jun-25 at Pune & Mumbai!! Key skill: Hadoop, Spark, SparkSQL, Scala. Interested candidates kindly apply at the link below and share an updated CV to Hemalatha1@ltimindtree.com https://forms.office.com/r/zQucNTxa2U Job description: Experience in the Scala programming language and in big data technologies including Spark, Scala, and Kafka. We are looking for engineers who have a good understanding of organizational strategy, architecture patterns (microservices, event-driven), and technology choices, and can coach the team in executing in alignment with these guidelines; who can apply organizational technology patterns effectively in projects and make recommendations on alternate options; who have hands-on experience working with large volumes of data, including different patterns of data ingestion, processing (batch and real-time), movement, storage, and access, both internal and external to the BU, with the ability to make independent decisions within the scope of a project; who have a good understanding of data structures and algorithms; who can test, debug, and fix issues within established SLAs; who can design software that is easily testable and observable; who understand how a team's goals fit a business need; who can identify business problems at the project level and provide solutions; and who understand data access patterns, streaming technology, data validation, data performance, and cost optimization. Strong SQL skills. Why join us? Work in industry-leading implementations for Tier-1 clients. Accelerated career growth and global exposure. Collaborative, inclusive work environment rooted in innovation. Exposure to a best-in-class automation framework. Innovation-first culture: we embrace automation, AI insights, and clean data. Know someone who fits this perfectly? Tag them; let's connect the right talent with the right opportunity. DM or email to know more. Let's build something great together!
Posted 1 month ago
1.0 - 3.0 years
6 - 9 Lacs
Pune, Gurugram, Bengaluru
Hybrid
POSITION: Senior Data Engineer / Data Engineer
LOCATION: Bangalore / Mumbai / Kolkata / Gurugram / Hyderabad / Pune / Chennai
EXPERIENCE: 2+ years

ABOUT HASHEDIN: We are software engineers who solve business problems with a Product Mindset for leading global organizations. By combining engineering talent with business insight, we build software and products that can create new enterprise value. The secret to our success is a fast-paced learning environment, an extreme ownership spirit, and a fun culture.

WHY SHOULD YOU JOIN US? With the agility of a start-up and the opportunities of an enterprise, every day at HashedIn, your work will make an impact that matters. So, if you are a problem solver looking to thrive in a dynamic, fun culture of inclusion, collaboration, and high performance, HashedIn is the place to be! From learning to leadership, this is your chance to take your software engineering career to the next level. So, what impact will you make? Visit us @ https://hashedin.com

JOB TITLE: Senior Data Engineer / Data Engineer

OVERVIEW OF THE ROLE: As a Data Engineer or Senior Data Engineer, you will be hands-on in architecting, building, and optimizing robust, efficient, and secure data pipelines and platforms that power business-critical analytics and applications. You will play a central role in the implementation and automation of scalable batch and streaming data workflows using modern big data and cloud technologies. Working within cross-functional teams, you will deliver well-engineered, high-quality code and data models, and drive best practices for data reliability, lineage, quality, and security.

Mandatory Skills:
- Hands-on software coding or scripting for a minimum of 3 years
- Experience in product management for at least 2 years
- Stakeholder management experience for at least 3 years
- Experience in one of the GCP, AWS, or Azure cloud platforms

Key Responsibilities (a minimal orchestration sketch follows this posting):
- Design, build, and optimize scalable data pipelines and ETL/ELT workflows using Spark (Scala/Python), SQL, and orchestration tools (e.g., Apache Airflow, Prefect, Luigi).
- Implement efficient solutions for high-volume, batch, real-time streaming, and event-driven data processing, leveraging best-in-class patterns and frameworks.
- Build and maintain data warehouse and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift) to support analytics, data science, and BI workloads.
- Develop, automate, and monitor Airflow DAGs/jobs on cloud or Kubernetes, following robust deployment and operational practices (CI/CD, containerization, infra-as-code).
- Write performant, production-grade SQL for complex data aggregation, transformation, and analytics tasks.
- Ensure data quality, consistency, and governance across the stack, implementing processes for validation, cleansing, anomaly detection, and reconciliation.
- Collaborate with Data Scientists, Analysts, and DevOps engineers to ingest, structure, and expose structured, semi-structured, and unstructured data for diverse use-cases.
- Contribute to data modeling, schema design, and data partitioning strategies, and ensure adherence to best practices for performance and cost optimization.
- Implement, document, and extend data lineage, cataloging, and observability through tools such as AWS Glue, Azure Purview, Amundsen, or open-source technologies.
- Apply and enforce data security, privacy, and compliance requirements (e.g., access control, data masking, retention policies, GDPR/CCPA).
- Take ownership of the end-to-end data pipeline lifecycle: design, development, code reviews, testing, deployment, operational monitoring, and maintenance/troubleshooting.
- Contribute to frameworks, reusable modules, and automation to improve development efficiency and maintainability of the codebase.
- Stay abreast of industry trends and emerging technologies, participating in code reviews, technical discussions, and peer mentoring as needed.

Skills & Experience:
- Proficiency with Spark (Python or Scala), SQL, and data pipeline orchestration (Airflow, Prefect, Luigi, or similar).
- Experience with cloud data ecosystems (AWS, GCP, Azure) and cloud-native services for data processing (Glue, Dataflow, Dataproc, EMR, HDInsight, Synapse, etc.).
- Hands-on development skills in at least one programming language (Python, Scala, or Java preferred); solid knowledge of software engineering best practices (version control, testing, modularity).
- Deep understanding of batch and streaming architectures (Kafka, Kinesis, Pub/Sub, Flink, Structured Streaming, Spark Streaming).
- Expertise in data warehouse/lakehouse solutions (Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse) and storage formats (Parquet, ORC, Delta, Iceberg, Avro).
- Strong SQL development skills for ETL, analytics, and performance optimization.
- Familiarity with Kubernetes (K8s), containerization (Docker), and deploying data pipelines in distributed/cloud-native environments.
- Experience with data quality frameworks (Great Expectations, Deequ, or custom validation), monitoring/observability tools, and automated testing.
- Working knowledge of data modeling (star/snowflake, normalized, denormalized) and metadata/catalog management.
- Understanding of data security, privacy, and regulatory compliance (access management, PII masking, auditing, GDPR/CCPA/HIPAA).
- Familiarity with BI or visualization tools (Power BI, Tableau, Looker, etc.) is an advantage but not core.
- Previous experience with data migrations, modernization, or refactoring legacy ETL processes to modern cloud architectures is a strong plus.
- Bonus: exposure to open-source data tools (dbt, Delta Lake, Apache Iceberg, Amundsen, Great Expectations, etc.) and knowledge of DevOps/MLOps processes.

Professional Attributes:
- Strong analytical and problem-solving skills; attention to detail and commitment to code quality and documentation.
- Ability to communicate technical designs and issues effectively with team members and stakeholders.
- Proven self-starter, fast learner, and collaborative team player who thrives in dynamic, fast-paced environments.
- Passion for mentoring, sharing knowledge, and raising the technical bar for data engineering practices.

Desirable Experience:
- Contributions to open-source data engineering/tools communities.
- Implementing data cataloging, stewardship, and data democratization initiatives.
- Hands-on work with DataOps/DevOps pipelines for code and data.
- Knowledge of ML pipeline integration (feature stores, model serving, lineage/monitoring integration) is beneficial.

EDUCATIONAL QUALIFICATIONS:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
- Certifications in cloud platforms (AWS, GCP, Azure) and/or data engineering (AWS Data Analytics, GCP Data Engineer, Databricks).
- Experience working in an Agile environment with exposure to CI/CD, Git, Jira, Confluence, and code review processes.
- Prior work in highly regulated or large-scale enterprise data environments (finance, healthcare, or similar) is a plus.
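The orchestration sketch referenced in the responsibilities above: a minimal Airflow DAG wiring two tasks in sequence. It assumes Airflow 2.4 or later (for the `schedule` keyword), and the DAG id, schedule, and task bodies are placeholders.

```python
# Minimal Airflow 2.x DAG sketch: two Python tasks with a linear dependency.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull from source system")  # placeholder for real extraction logic


def transform():
    print("run Spark job / SQL transforms")  # placeholder for real transform logic


with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)

    t_extract >> t_transform  # extract must finish before transform starts
```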
Posted 1 month ago
6.0 - 10.0 years
30 - 35 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Hybrid
As a Data Engineering Lead, you will play a crucial role in overseeing the design, development, and maintenance of our organization's data architecture and infrastructure. You will be responsible for designing and developing the architecture for the data platform that ensures the efficient and effective processing of large volumes of data, enabling the business to make informed decisions based on reliable and high-quality data. The ideal candidate will have a strong background in data engineering, excellent leadership skills, and a proven track record of successfully managing complex data projects.

Responsibilities:
- Data Architecture and Design: Design and implement scalable and efficient data architectures to support the organization's data processing needs. Work closely with cross-functional teams to understand data requirements and ensure that data solutions align with business objectives.
- ETL Development: Oversee the development of robust ETL processes to extract, transform, and load data from various sources into the data warehouse. Ensure data quality and integrity throughout the ETL process, implementing best practices for data cleansing and validation.
- Big Data Technologies: Stay abreast of emerging trends and technologies in big data and analytics, and assess their applicability to the organization's data strategy. Implement and optimize big data technologies to process and analyze large datasets efficiently.
- Cloud Integration: Collaborate with the IT infrastructure team to integrate data engineering solutions with cloud platforms, ensuring scalability, security, and performance.
- Performance Monitoring and Optimization: Implement monitoring tools and processes to track the performance of data pipelines and proactively address any issues. Optimize data processing workflows for improved efficiency and resource utilization.
- Documentation: Maintain comprehensive documentation for data engineering processes, data models, and system architecture. Ensure that team members follow documentation standards and best practices.
- Collaboration and Communication: Collaborate with data scientists, analysts, and other stakeholders to understand their data needs and deliver solutions that meet those requirements. Communicate effectively with technical and non-technical stakeholders, providing updates on project status, challenges, and opportunities.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- 6-8 years of professional experience in data engineering
- In-depth knowledge of data modeling, ETL processes, and data warehousing, including building a data warehouse using Snowflake
- Experience in data ingestion, data lakes, data mesh, and data governance
- Experience in Python programming is a must
- Strong understanding of big data technologies and frameworks, such as Hadoop, Spark, and Kafka
- Experience with cloud platforms, such as AWS, Azure, or Google Cloud
- Familiarity with database systems like SQL and NoSQL, and with data pipeline orchestration tools
- Excellent problem-solving and analytical skills
- Strong communication and interpersonal skills
- Proven ability to work collaboratively in a fast-paced, dynamic environment
Posted 1 month ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Lead Consultant, AWS Data Lake!

Responsibilities:
- Knowledge of Data Lake on AWS services, with exposure to creating external tables and Spark programming
- Able to work on Python programming: writing effective and scalable Python code for automation, data wrangling, and ETL
- Designing and implementing robust applications, and working on automations using Python code
- Debugging applications to ensure low latency and high availability
- Writing optimized custom SQL queries
- Experienced in team and client handling
- Prowess in documentation related to systems, design, and delivery
- Integrate user-facing elements into applications
- Knowledge of external tables and Data Lake concepts
- Able to handle task allocation, collaborate on status exchanges, and drive things to successful closure
- Implement security and data protection solutions
- Must be capable of writing SQL queries for validating dashboard outputs
- Must be able to translate visual requirements into detailed technical specifications
- Well versed in handling Excel, CSV, text, JSON, and other unstructured file formats using Python
- Expertise in at least one popular Python framework (like Django, Flask, or Pyramid)
- Good understanding of and exposure to Git, Bamboo, Confluence, and Jira
- Good with DataFrames and ANSI SQL using pandas (a small illustration follows this posting)
- Team player with a collaborative approach and excellent communication skills

Qualifications we seek in you! Minimum qualifications: BE/B.Tech/MCA; excellent written and verbal communication skills; good knowledge of Python and PySpark. Preferred qualifications/skills: strong ETL knowledge of any ETL tool (good to have); knowledge of AWS cloud and Snowflake (good to have); knowledge of PySpark is a plus.
Why join Genpact?
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation
- Make an impact: drive change for global enterprises and solve business challenges that matter
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
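The pandas illustration referenced in the responsibilities above: reading CSV and line-delimited JSON and producing a SQL-style aggregate. File names and columns are hypothetical.

```python
# Illustrative pandas wrangling: CSV/JSON in, grouped summary out.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_ts"])
customers = pd.read_json("customers.json", lines=True)  # line-delimited JSON

# Equivalent of: SELECT c.region, SUM(o.amount) AS total_amount
#                FROM orders o JOIN customers c USING (customer_id)
#                GROUP BY c.region ORDER BY total_amount DESC
joined = orders.merge(customers, on="customer_id", how="inner")
summary = (
    joined.groupby("region", as_index=False)["amount"]
    .sum()
    .rename(columns={"amount": "total_amount"})
    .sort_values("total_amount", ascending=False)
)
print(summary.head())
```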
Posted 1 month ago
5.0 - 8.0 years
20 - 35 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Greetings from LTIMindtree!! About the job: Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a global leading manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision, will surely be a fulfilling experience. Location: Pan India. Interested candidates kindly apply at the link below and share an updated CV to Hemalatha1@ltimindtree.com https://forms.office.com/r/JhYtz7Vzbn Job description: Key skill: Cloudera, Spark, Hive, Sqoop jobs. Mandatory skills: Cloudera administration (Hadoop, Hive, Impala, Spark, Sqoop); maintaining/creating jobs and handling migration; CI/CD pipelines; monitoring and performance tuning. Why join us? Work in industry-leading implementations for Tier-1 clients. Accelerated career growth and global exposure. Collaborative, inclusive work environment rooted in innovation. Exposure to a best-in-class automation framework. Innovation-first culture: we embrace automation, AI insights, and clean data. Know someone who fits this perfectly? Tag them; let's connect the right talent with the right opportunity. DM or email to know more. Let's build something great together!
Posted 1 month ago
5.0 - 8.0 years
20 - 35 Lacs
Pune, Chennai, Bengaluru
Hybrid
Greetings from LTIMindtree!! About the job: Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a global leading manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision, will surely be a fulfilling experience. Location: Pan India. Key skill: Spark + Python. Interested candidates kindly apply at the link below and share an updated CV to Hemalatha1@ltimindtree.com https://forms.office.com/r/zQucNTxa2U Job description: Key skill: Hadoop, Spark, SparkSQL, Python. Mandatory skills: relevant experience in ETL and data engineering; strong knowledge of Spark and Python; strong experience in Hive/SQL and PL/SQL; good understanding of ETL and DW concepts and Unix scripting; design, implement, and maintain data pipelines to meet business requirements; convert business needs into technically complex PySpark code; ability to write complex SQL queries for reporting purposes (a short example follows this posting); monitor PySpark code performance and troubleshoot issues. Why join us? Work in industry-leading implementations for Tier-1 clients. Accelerated career growth and global exposure. Collaborative, inclusive work environment rooted in innovation. Exposure to a best-in-class automation framework. Innovation-first culture: we embrace automation, AI insights, and clean data. Know someone who fits this perfectly? Tag them; let's connect the right talent with the right opportunity. DM or email to know more. Let's build something great together!
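The reporting example referenced above: a hedged sketch of a window-function query in Spark SQL, with an `explain()` call for the performance-monitoring duty the posting lists. The view, columns, and path are invented.

```python
# Hypothetical reporting query in PySpark SQL: monthly revenue ranked per region.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("monthly-report").getOrCreate()
spark.read.parquet("/curated/sales/").createOrReplaceTempView("sales")

report = spark.sql("""
    SELECT region,
           month,
           revenue,
           RANK() OVER (PARTITION BY month ORDER BY revenue DESC) AS region_rank
    FROM (
        SELECT region, date_trunc('month', sold_at) AS month, SUM(amount) AS revenue
        FROM sales
        GROUP BY region, date_trunc('month', sold_at)
    ) t
""")
report.explain()  # inspect the physical plan when tuning
report.show(20)
```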
Posted 1 month ago
5.0 - 8.0 years
20 - 35 Lacs
Pune, Chennai, Bengaluru
Hybrid
Greetings from LTIMindtree!! About the job: Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a global leading manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision, will surely be a fulfilling experience. Location: Pan India. Key skill: Hadoop, Spark, SparkSQL, Scala. Interested candidates kindly apply at the link below and share an updated CV to Hemalatha1@ltimindtree.com https://forms.office.com/r/zQucNTxa2U Job description: Experience in the Scala programming language and in big data technologies including Spark, Scala, and Kafka. We are looking for engineers who have a good understanding of organizational strategy, architecture patterns (microservices, event-driven), and technology choices, and can coach the team in executing in alignment with these guidelines; who can apply organizational technology patterns effectively in projects and make recommendations on alternate options; who have hands-on experience working with large volumes of data, including different patterns of data ingestion, processing (batch and real-time), movement, storage, and access, both internal and external to the BU, with the ability to make independent decisions within the scope of a project; who have a good understanding of data structures and algorithms; who can test, debug, and fix issues within established SLAs; who can design software that is easily testable and observable; who understand how a team's goals fit a business need; who can identify business problems at the project level and provide solutions; and who understand data access patterns, streaming technology, data validation, data performance, and cost optimization. Strong SQL skills. Why join us? Work in industry-leading implementations for Tier-1 clients. Accelerated career growth and global exposure. Collaborative, inclusive work environment rooted in innovation. Exposure to a best-in-class automation framework. Innovation-first culture: we embrace automation, AI insights, and clean data. Know someone who fits this perfectly? Tag them; let's connect the right talent with the right opportunity. DM or email to know more. Let's build something great together!
Posted 1 month ago