3 - 8 years
11 - 16 Lacs
Pune
Work from Office
About The Role
Job Title: Lead Engineer
Location: Pune, India

Role Description
The Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT platform/infrastructure, including analysis, development, and administration. It may also involve taking functional oversight of engineering delivery for specific departments. Work includes:
- Planning and developing entire engineering solutions to accomplish business goals
- Building reliability and resiliency into solutions with appropriate testing and reviewing throughout the delivery lifecycle
- Ensuring maintainability and reusability of engineering solutions
- Ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow
- Reviewing engineering plans and quality to drive re-use and improve engineering capability
- Participating in industry forums to drive adoption of innovative technologies, tools and solutions in the Bank

What we'll offer you:
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best in class leave policy
- Gender neutral parental leave
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those 35 years and above

Your Key Responsibilities:
The candidate is expected to:
- Act as a hands-on engineering lead involved in analysis, design, design/code reviews, coding and release activities
- Champion engineering best practices and guide/mentor the team to achieve high performance
- Work closely with business stakeholders, Tribe Lead, Product Owner and Lead Architect to successfully deliver the business outcomes
- Acquire functional knowledge of the business capability being digitized/re-engineered
- Demonstrate ownership, inspire others, show innovative thinking and a growth mindset, and collaborate for success

Your Skills & Experience:
- Minimum 15 years of IT industry experience in full stack development
- Expert in Java, Spring Boot, NodeJS, SQL/PLSQL, ReactJS
- Strong experience in big data processing: Apache Spark, Hadoop, BigQuery, Dataproc, Dataflow, etc.
- Strong experience with Kubernetes and the OpenShift container platform
- Experience with databases (Oracle, PostgreSQL, MongoDB, Redis/Hazelcast); should understand data modeling, normalization, and performance optimization
- Experience with message queues (RabbitMQ/IBM MQ, JMS) and data streaming, i.e. Kafka, Pub/Sub, etc.
- Experience working on public cloud (GCP preferred; AWS or Azure)
- Knowledge of various distributed/multi-tiered architecture styles: microservices, data mesh, integration patterns, etc.
- Experience with modern software product delivery practices, processes and tooling, and BizDevOps skills such as CI/CD pipelines using Jenkins, GitHub Actions, etc.
- Experience designing solutions based on DDD and implementing Clean/Hexagonal Architecture for efficient systems that can handle large-scale operation
- Experience leading teams and mentoring developers
- Focus on quality, with experience in TDD, BDD, stress and contract tests
- Proficient in working with APIs (Application Programming Interfaces) and data formats like JSON, XML, YAML, Parquet, etc.

Key Skills: Java, Spring Boot, NodeJS, SQL/PLSQL, ReactJS

Advantageous:
- Prior experience in the Banking/Finance domain
- Experience with hybrid cloud solutions, preferably using GCP
- Experience with product development

How we'll support you:
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 2 months ago
2 - 5 years
3 - 7 Lacs
Andhra Pradesh
Work from Office
Description

Key Responsibilities
- Analyse raw data from various source files (e.g. Excel, CSV, JSON, XML, Parquet) to identify formatting issues, inconsistencies, missing values and discrepancies.
- Use PySpark to transform, clean and restructure source data to match the expected format for further processing and analysis.
- Apply data validation rules and data quality checks to correct issues related to data integrity, including handling null values, duplicate records and data type mismatches.
- Develop PySpark jobs to transform raw data into standardized formats (e.g. converting date formats, normalizing text fields, correcting encoding issues).
- Ensure that data transformations meet the required schema and business logic by utilizing PySpark DataFrame and SQL functionality.
- Automate data transformation workflows and schedule data correction jobs to handle repetitive tasks efficiently.
- Build interactive notebooks and workflows to perform data transformations and analytics within the Azure Databricks platform.
- Develop and maintain data workflows in Azure Data Factory to orchestrate data movement and transformation across cloud-based storage and compute services.
- Implement scheduled and event-driven data pipeline orchestration using ADF, with a focus on data quality, performance and scalability.

Additional Details
- Global Grade: C
- Level: To Be Defined
- Named Job Posting? (if Yes, needs to be approved by SCSC): No
- Remote work possibility: Yes
- Global Role Family: 60236 (P) Software Engineering
- Local Role Name: 6504 Developer / Software Engineer
- Local Skills: 35611 Azure Databricks
- Languages Required: English
- Role Rarity: To Be Defined
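As an illustration only (not code from this project), a minimal PySpark sketch of the correction steps described above; the paths, columns and date format are assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw-data-cleanup").getOrCreate()

# Hypothetical raw CSV source with formatting issues.
raw = spark.read.option("header", True).csv("/mnt/raw/customers.csv")

cleaned = (
    raw.dropDuplicates(["customer_id"])                      # drop duplicate records
       .na.fill({"country": "UNKNOWN"})                      # handle missing values
       .withColumn("signup_date",
                   F.to_date("signup_date", "dd/MM/yyyy"))   # normalize date format
       .withColumn("email", F.lower(F.trim("email")))        # normalize text fields
)

# Land the standardized output for downstream processing.
cleaned.write.mode("overwrite").parquet("/mnt/curated/customers/")
```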
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Andhra Pradesh
Work from Office
JD
- 7+ years of hands-on experience in Python, especially with Pandas and NumPy
- Good hands-on experience in Spark, PySpark and Spark SQL
- Hands-on experience in Databricks: Unity Catalog, Delta Lake, Lakehouse Platform, Medallion Architecture, Azure Data Factory, ADLS
- Experience dealing with Parquet and JSON file formats
- Knowledge of Snowflake
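For orientation only, a small Pandas/NumPy sketch of the kind of Parquet/JSON handling the listing asks for; the file names and columns are invented:

```python
import numpy as np
import pandas as pd

# Hypothetical inputs: a columnar Parquet extract and newline-delimited JSON events.
orders = pd.read_parquet("orders.parquet")
events = pd.read_json("events.json", lines=True)

# Basic NumPy-backed cleanup and a derived feature.
orders["amount"] = np.clip(orders["amount"], 0, None)   # no negative amounts
orders["log_amount"] = np.log1p(orders["amount"])

# Simple aggregate to sanity-check the load.
summary = orders.groupby("region")["amount"].agg(["count", "sum", "mean"])
print(summary.head())
print(events.shape)
```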
Posted 2 months ago
5 - 7 years
15 - 21 Lacs
Bengaluru
Work from Office
Seeking a Data Engineer with expertise in PySpark, Databricks, and high-throughput data lake architectures. The ideal candidate should have experience in workflow automation, data governance, and CDC implementation, with a deep understanding of data lineage and quality.

Roles and Responsibilities
- Design & Build High-Throughput Data Lakes: Architect, develop, and optimize scalable and high-performance data lakes using PySpark and Databricks.
- Workflow Automation & Data Pipelines: Implement and manage Databricks Workflows, Auto Loader, and Delta Live Tables (DLT) for seamless data processing and ingestion.
- Optimize Data Storage & Processing: Work with Parquet, Iceberg, and Hudi file formats, ensuring efficient partitioning, snapshotting, and compression strategies.
- Data Governance & Quality: Establish data lineage, data governance, and data quality frameworks using Unity Catalog and industry best practices.
- Change Data Capture (CDC) Implementation: Design and implement CDC solutions using Debezium, AWS DMS, and other relevant tools.
- Cloud Infrastructure & Security: Collaborate with cloud teams to deploy, monitor, and secure data infrastructure on AWS/GCP/Azure.
- Performance Optimization & Monitoring: Continuously optimize data pipelines, queries, and jobs for improved efficiency and cost-effectiveness.
- Collaboration & Knowledge Sharing: Work closely with engineering, analytics, and DevOps teams to ensure smooth data operations and knowledge transfer.
- Compliance & Best Practices: Ensure data security, compliance, and privacy are maintained across all data workflows.
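A hedged sketch (not the team's actual pipeline) of the Auto Loader-to-Delta ingestion pattern mentioned above, written for a Databricks runtime; the paths, checkpoint location and target table are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incrementally discover new JSON files with Databricks Auto Loader.
stream = (
    spark.readStream.format("cloudFiles")
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", "/mnt/schemas/orders")
         .load("/mnt/landing/orders/")
)

# Append the discovered records into a bronze Delta table.
(
    stream.writeStream.format("delta")
          .option("checkpointLocation", "/mnt/checkpoints/orders")
          .outputMode("append")
          .trigger(availableNow=True)          # process available files, then stop
          .toTable("lakehouse.bronze_orders")
)
```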
Posted 2 months ago
7 - 12 years
25 - 35 Lacs
Kolkata
Hybrid
About the Role
We are seeking a Senior Python/Data Engineer to design, develop, and optimize large-scale data pipelines, transformation workflows, and analytics-ready datasets. This role requires expertise in Python, Apache Airflow, Apache Spark, SQL, and DuckDB, along with strong experience in data quality, data processing, and automation. As a Senior Data Engineer, you will play a key role in building scalable, high-performance data engineering solutions, ensuring data integrity, and supporting real-time and batch data workflows. You will work closely with Data Scientists, Analysts, DevOps, and Engineering teams to build efficient, cost-effective, and reliable data architectures.

Key Responsibilities
- Design, build, and maintain scalable ETL/ELT data pipelines using Apache Airflow, Spark, and SQL.
- Develop Python-based data engineering solutions to automate data ingestion, transformation, and validation.
- Implement data transformation and quality checks for structured and unstructured datasets.
- Work with DuckDB and other in-memory databases to enable fast exploratory data analysis (EDA).
- Optimize data storage and retrieval using Parquet, Apache Iceberg, and S3-based data lakes.
- Develop SQL-based analytics workflows and optimize performance for querying large datasets.
- Implement data lineage, governance, and metadata management for enterprise-scale data solutions.
- Ensure high availability, fault tolerance, and security of data pipelines.
- Collaborate with Data Science, AI/ML, and Business Intelligence teams to enable real-time and batch analytics.
- Work with cloud platforms (AWS, Azure, GCP) for data pipeline deployment and scaling.
- Write clean, efficient, and maintainable code following best software engineering practices.

Required Skills & Qualifications
- 7+ years of experience in data engineering, big data processing, and backend development.
- Expertise in Python for data processing and automation.
- Strong experience with Apache Airflow for workflow orchestration.
- Hands-on experience with Apache Spark for big data transformations.
- Proficiency in SQL (PostgreSQL, DuckDB, Snowflake, etc.) for analytics and ETL workflows.
- Experience with data transformation, data validation, and quality assurance frameworks.
- Hands-on experience with DuckDB, Apache Arrow, or Vaex for in-memory data processing.
- Knowledge of data lake architectures (S3, Parquet, Iceberg) and cloud data storage.
- Familiarity with distributed computing, parallel processing, and optimized query execution.
- Experience working in CI/CD, DevOps, containerization (Docker, Kubernetes), and cloud environments.
- Strong problem-solving and debugging skills.
- Excellent written and verbal communication skills.

Preferred Skills (Nice to Have)
- Experience programming on the Java/JEE platform is highly desired.
- Experience with data streaming technologies (Kafka, Flink, Kinesis).
- Familiarity with NoSQL databases (MongoDB, DynamoDB).
- Exposure to AI/ML data pipelines and feature engineering.
- Knowledge of data security, compliance (SOC 2 Type 2, GDPR, HIPAA), and governance best practices.
- Experience in building metadata-driven data pipelines for self-service analytics.
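As a purely illustrative example of the fast in-memory EDA the role describes, a DuckDB query over a Parquet dataset; the path and columns are made up:

```python
import duckdb

con = duckdb.connect()  # in-memory database

# Query Parquet files in place, without loading them into a warehouse first.
result = con.execute("""
    SELECT region,
           count(*)    AS orders,
           sum(amount) AS total_amount
    FROM read_parquet('data/orders/*.parquet')
    GROUP BY region
    ORDER BY total_amount DESC
""").fetchdf()

print(result.head())
```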
Posted 2 months ago
8 - 13 years
10 - 15 Lacs
Chennai
Work from Office
Overall Responsibilities
- Translate application storyboards and use cases into functional applications.
- Design, build, and maintain efficient, reusable, and reliable Java code.
- Ensure the best possible performance, quality, and responsiveness of applications.
- Identify bottlenecks and bugs, and devise solutions to these problems.
- Develop high-performance and low-latency components to run on Spark clusters.
- Interpret functional requirements into design approaches that can be served through the Big Data platform.
- Collaborate and partner with global teams based across different locations.
- Propose best practices and standards; hand over to operations.
- Perform testing of software prototypes and transfer to the operational team.
- Process data using Hive, Impala, and HBase.
- Perform analysis of large data sets and derive insights.

Technical Skills (Category-wise)

Java Development:
- Solid understanding of object-oriented programming and design patterns.
- Strong Java experience with Java 1.8 or a higher version.
- Strong core Java and multithreading working experience.
- Understanding of concurrency patterns and multithreading in Java.
- Proficient understanding of code versioning tools, such as Git.
- Familiarity with build tools such as Maven and continuous integration tools like Jenkins/TeamCity.

Big Data Technologies:
- Experience in Big Data technologies like HDFS, Hive, HBase, Apache Spark, and Kafka.
- Experience in building self-service, platform-agnostic data access APIs.
- Service-oriented architecture, and data standards like JSON, Avro, Parquet.
- Experience in building advanced analytical models based on business context.

Data Processing:
- Comfortable working with large data volumes and understanding logical data structures and analysis techniques.
- Processing data using Hive, Impala, and HBase.
- Strong systems analysis, design, and architecture fundamentals, unit testing, and other SDLC activities.
- Application performance tuning and troubleshooting experience, and implementation of these skills in the Big Data domain.

Additional Skills:
- Experience in working on Linux shell scripting.
- Experience in RDBMS and NoSQL databases.
- Basic Unix OS and scripting knowledge.
- Optional: Familiarity with the Arcadia tool for analytics.
- Optional: Familiarity with cloud and container technologies.

Experience: 8+ years of relevant experience in Java and Big Data technologies.

Day-to-Day Activities
- Develop and maintain Java code for Big Data applications.
- Process and analyze large data sets using Big Data technologies.
- Collaborate with global teams to design and implement solutions.
- Perform testing and transfer software prototypes to the operational team.
- Troubleshoot and resolve performance issues and bugs.
- Ensure adherence to best practices and standards in development.

Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, or equivalent experience.

Soft Skills
- Excellent communication and collaboration abilities.
- Strong interpersonal and teamwork skills.
- Ability to work under pressure and meet tight deadlines.
- Positive attitude and strong work ethic.
- Commitment to continuous learning and professional development.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Candidate Application Notice
Posted 3 months ago
8 - 13 years
18 - 30 Lacs
Noida
Hybrid
Role & Responsibilities
Duration: 6 Months
Position Type: Further Extendable Contractual Role*
*This position is initially offered as a contractual role for a duration of 6 months, with the possibility of contract extension and the potential for full-time conversion based on performance, project requirements and the sole discretion of the company.

Preferred Candidate Profile
Technical requirements for a PL/SQL Developer in Snowflake database development. We need the following technical skills and expertise:

1. PL/SQL Development
- Strong experience in writing PL/SQL scripts, stored procedures, functions, packages, and triggers.
- Expertise in error handling, exception management, and performance tuning.
- Writing cursors, BULK COLLECT, and table functions for large data processing.
- Optimizing complex SQL queries for efficiency in data retrieval.

2. Snowflake-Specific Development
- Experience in migrating PL/SQL-based workloads to Snowflake.
- Developing stored procedures using Snowflake Scripting (since traditional PL/SQL is not natively supported).
- Implementing Snowflake UDFs (User-Defined Functions) and UDTFs (Table Functions).
- Working with Snowflake Tasks and Streams for real-time data processing.
- Managing Snowflake virtual warehouses to optimize performance and cost.

3. Database Design & Data Modelling
- Experience in designing and implementing fact and dimension tables.
- Knowledge of Materialized Views and CTEs (Common Table Expressions).
- Working with semi-structured data formats (JSON, Parquet, Avro) in Snowflake.

4. Performance Optimization & Query Tuning
- Tuning SQL queries using EXPLAIN plans, clustering keys, and micro-partitions.
- Understanding warehouse scaling and resource monitoring to optimize costs.
- Working with the result cache, query cache, and metadata cache in Snowflake.

Perks and Benefits
- Flexibility in work environment
- Cost savings
- Increased productivity
- Better work-life balance
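A hedged sketch of one migration pattern referenced above: re-implementing a small PL/SQL-style routine as a Snowflake Scripting stored procedure, created here through the Snowflake Python connector. The connection details, table and procedure names are placeholders, not a real environment:

```python
import snowflake.connector

# Placeholder credentials; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGE",
)

cur = conn.cursor()

# Snowflake Scripting replaces the PL/SQL body; SQLROWCOUNT holds the
# number of rows affected by the last DML statement.
cur.execute("""
CREATE OR REPLACE PROCEDURE purge_stale_rows()
RETURNS STRING
LANGUAGE SQL
AS
$$
DECLARE
    deleted INTEGER DEFAULT 0;
BEGIN
    DELETE FROM stage_orders
    WHERE load_ts < DATEADD('day', -90, CURRENT_TIMESTAMP());
    deleted := SQLROWCOUNT;
    RETURN 'Deleted ' || deleted || ' rows';
END;
$$
""")

cur.execute("CALL purge_stale_rows()")
print(cur.fetchone()[0])
```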
Posted 3 months ago
9 - 14 years
1 - 1 Lacs
Chennai
Remote
- Data Pipeline Design and Implementation
- Data Storage and Management
- Data Integration and Transformation
- Monitoring and Optimization
Need to train candidates.
Posted 3 months ago
4 - 9 years
6 - 11 Lacs
Gurgaon
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.

Your Role
- Proficiency in MS Fabric, Azure Data Factory, Azure Synapse Analytics and Azure Databricks.
- Extensive knowledge of MS Fabric components: Lakehouses, OneLake, Data Pipelines, Real-Time Analytics, Power BI Integration, Semantic Model.
- Integrate Fabric capabilities for seamless data flow, governance, and collaboration across teams.
- Strong understanding of Delta Lake, Parquet, and distributed data systems.
- Strong programming skills in Python, PySpark, Scala or Spark SQL/T-SQL for data transformations.
- Strong experience in the implementation and management of a Lakehouse using Databricks and the Azure tech stack (ADLS Gen2, ADF, Azure SQL).
- Proficiency in data integration techniques, ETL processes and data pipeline architectures.
- Understanding of machine learning algorithms and AI/ML frameworks (e.g. TensorFlow, PyTorch) and Power BI is an added advantage.

Your Profile
- Overall 5-9 years of relevant experience in the Azure data platform and AI/ML technologies.
- 4+ years in designing and solutioning in Fabric, Databricks, PySpark and Azure Synapse; MS Fabric and PySpark are a must.

What you'll love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
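A small, hypothetical sketch of routine Lakehouse upkeep in the spirit of the Databricks/Fabric work above: upserting a staged batch into a Delta table with MERGE. The table name, storage path and key column are assumptions:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical staged batch landed by an upstream pipeline.
updates = spark.read.parquet("abfss://staging@account.dfs.core.windows.net/customers/")

target = DeltaTable.forName(spark, "lakehouse.silver_customers")

# Upsert: update matching rows, insert new ones.
(
    target.alias("t")
          .merge(updates.alias("s"), "t.customer_id = s.customer_id")
          .whenMatchedUpdateAll()
          .whenNotMatchedInsertAll()
          .execute()
)
```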
Posted 3 months ago
2 - 4 years
5 - 10 Lacs
Bengaluru
Work from Office
We need an Azure Data Engineer:
• Hands-on experience building and maintaining data pipelines using Apache Spark in Azure Databricks
• Hands-on experience in Python
• Hands-on experience with MS SQL Server and ETL using SSIS
• Understanding of database design and data modelling
• Knowledge of handling different file types: CSV, Parquet, JSON and REST APIs
• Experience with Azure services like Azure Storage, Azure Data Factory and Azure Databricks
• Experience handling very large datasets in the range of 15-20 TB
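Illustrative only, and under the assumption of an ADLS Gen2 account reached via abfss: a PySpark cell of the kind the listing describes, reading CSV, JSON and Parquet and landing an enriched copy. The container names, paths and join key are invented:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

base = "abfss://raw@mystorageaccount.dfs.core.windows.net"

# Read the common file types mentioned in the listing.
sales    = spark.read.option("header", True).csv(f"{base}/sales/*.csv")
events   = spark.read.json(f"{base}/events/*.json")
products = spark.read.parquet(f"{base}/reference/products/")

# Enrich sales with product attributes and write back as Parquet.
enriched = sales.join(products, "product_id", "left")
enriched.write.mode("overwrite").parquet(
    "abfss://curated@mystorageaccount.dfs.core.windows.net/sales_enriched/"
)
print(events.count())  # quick sanity check on the JSON feed
```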
Posted 3 months ago
5 - 7 years
14 - 16 Lacs
Pune, Bengaluru, Gurgaon
Work from Office
Job Title: Data/ML Platform Engineer
Location: Gurgaon, Pune, Bangalore, Chennai, Bhopal, Jaipur, Hyderabad (Work from office)
Notice Period: Immediate

iSource Services is hiring for one of their clients for the position of Data/ML Platform Engineer. As a Data Engineer you will be relied on to independently develop and deliver high-quality features for our new ML Platform, refactor and translate our data products and finish various tasks to a high standard. You'll be part of the Data Foundation Team, which focuses on creating and maintaining the Data Platform for Marktplaats.

- 5 years of hands-on experience using Python, Spark and SQL.
- Experienced in AWS cloud usage and management.
- Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
- Experience using various ML models and frameworks such as XGBoost, LightGBM, Torch.
- Experience with orchestrators such as Airflow and Kubeflow.
- Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
- Fundamental understanding of Parquet, Delta Lake and other data file formats.
- Proficiency in an IaC tool such as Terraform, CDK or CloudFormation.
- Strong written and verbal English communication skills and proficiency in communicating with non-technical stakeholders.
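As a rough, hypothetical sketch of the platform workflow above (not Marktplaats code): training an XGBoost model and logging it with MLflow. The dataset is synthetic and the parameters are arbitrary:

```python
import mlflow
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real feature table.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="xgb-baseline"):
    model = xgb.XGBClassifier(n_estimators=200, max_depth=5, learning_rate=0.1)
    model.fit(X_train, y_train)

    # Track parameters, metrics and the fitted model artifact.
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.xgboost.log_model(model, artifact_path="model")
```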
Posted 3 months ago
15 - 20 years
17 - 22 Lacs
Pune
Work from Office
Role Description
The Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT platform/infrastructure, including analysis, development, and administration. It may also involve taking functional oversight of engineering delivery for specific departments. Work includes:
- Planning and developing entire engineering solutions to accomplish business goals
- Building reliability and resiliency into solutions with appropriate testing and reviewing throughout the delivery lifecycle
- Ensuring maintainability and reusability of engineering solutions
- Ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow
- Reviewing engineering plans and quality to drive re-use and improve engineering capability
- Participating in industry forums to drive adoption of innovative technologies, tools and solutions in the Bank

Your Key Responsibilities:
The candidate is expected to:
- Act as a hands-on engineering lead involved in analysis, design, design/code reviews, coding and release activities
- Champion engineering best practices and guide/mentor the team to achieve high performance
- Work closely with business stakeholders, Tribe Lead, Product Owner and Lead Architect to successfully deliver the business outcomes
- Acquire functional knowledge of the business capability being digitized/re-engineered
- Demonstrate ownership, inspire others, show innovative thinking and a growth mindset, and collaborate for success

Your Skills & Experience:
- Minimum 15 years of IT industry experience in full stack development
- Expert in Java, Spring Boot, NodeJS, SQL/PLSQL, ReactJS
- Strong experience in big data processing: Apache Spark, Hadoop, BigQuery, Dataproc, Dataflow, etc.
- Strong experience with Kubernetes and the OpenShift container platform
- Experience with databases (Oracle, PostgreSQL, MongoDB, Redis/Hazelcast); should understand data modeling, normalization, and performance optimization
- Experience with message queues (RabbitMQ/IBM MQ, JMS) and data streaming, i.e. Kafka, Pub/Sub, etc.
- Experience working on public cloud (GCP preferred; AWS or Azure)
- Knowledge of various distributed/multi-tiered architecture styles: microservices, data mesh, integration patterns, etc.
- Experience with modern software product delivery practices, processes and tooling, and BizDevOps skills such as CI/CD pipelines using Jenkins, GitHub Actions, etc.
- Experience designing solutions based on DDD and implementing Clean/Hexagonal Architecture for efficient systems that can handle large-scale operation
- Experience leading teams and mentoring developers
- Focus on quality, with experience in TDD, BDD, stress and contract tests
- Proficient in working with APIs (Application Programming Interfaces) and data formats like JSON, XML, YAML, Parquet, etc.

Key Skills: Java, Spring Boot, NodeJS, SQL/PLSQL, ReactJS

Advantageous:
- Prior experience in the Banking/Finance domain
- Experience with hybrid cloud solutions, preferably using GCP
- Experience with product development

How we'll support you:
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

Our values define the working environment we strive to create: diverse, supportive and welcoming of different views. We embrace a culture reflecting a variety of perspectives, insights and backgrounds to drive innovation. We build talented and diverse teams to drive business results and encourage our people to develop to their full potential. Talk to us about flexible work arrangements and other initiatives we offer. We promote good working relationships and encourage high standards of conduct and work performance. We welcome applications from talented people from all cultures, countries, races, genders, sexual orientations, disabilities, beliefs and generations and are committed to providing a working environment free from harassment, discrimination and retaliation. Visit our company website to discover more about the culture of Deutsche Bank, including Diversity, Equity & Inclusion, Leadership, Learning, Future of Work and more besides.
Posted 3 months ago
15 - 20 years
20 - 35 Lacs
Pune
Work from Office
Job Title: Lead Engineer (RYR#2025)

Role Description
The Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT platform/infrastructure, including analysis, development, and administration. It may also involve taking functional oversight of engineering delivery for specific departments. Work includes:
- Planning and developing entire engineering solutions to accomplish business goals
- Building reliability and resiliency into solutions with appropriate testing and reviewing throughout the delivery lifecycle
- Ensuring maintainability and reusability of engineering solutions
- Ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow
- Reviewing engineering plans and quality to drive re-use and improve engineering capability
- Participating in industry forums to drive adoption of innovative technologies, tools and solutions in the Bank

What we'll offer you:
As part of our flexible scheme, here are just some of the benefits that you'll enjoy.

Your Key Responsibilities:
The candidate is expected to:
- Act as a hands-on engineering lead involved in analysis, design, design/code reviews, coding and release activities
- Champion engineering best practices and guide/mentor the team to achieve high performance
- Work closely with business stakeholders, Tribe Lead, Product Owner and Lead Architect to successfully deliver the business outcomes
- Acquire functional knowledge of the business capability being digitized/re-engineered
- Demonstrate ownership, inspire others, show innovative thinking and a growth mindset, and collaborate for success

Your Skills & Experience:
- Minimum 15 years of IT industry experience in full stack development
- Expert in Java, Spring Boot, NodeJS, SQL/PLSQL, ReactJS
- Strong experience in big data processing: Apache Spark, Hadoop, BigQuery, Dataproc, Dataflow, etc.
- Strong experience with Kubernetes and the OpenShift container platform
- Experience with databases (Oracle, PostgreSQL, MongoDB, Redis/Hazelcast); should understand data modeling, normalization, and performance optimization
- Experience with message queues (RabbitMQ/IBM MQ, JMS) and data streaming, i.e. Kafka, Pub/Sub, etc.
- Experience working on public cloud (GCP preferred; AWS or Azure)
- Knowledge of various distributed/multi-tiered architecture styles: microservices, data mesh, integration patterns, etc.
- Experience with modern software product delivery practices, processes and tooling, and BizDevOps skills such as CI/CD pipelines using Jenkins, GitHub Actions, etc.
- Experience designing solutions based on DDD and implementing Clean/Hexagonal Architecture for efficient systems that can handle large-scale operation
- Experience leading teams and mentoring developers
- Focus on quality, with experience in TDD, BDD, stress and contract tests
- Proficient in working with APIs (Application Programming Interfaces) and data formats like JSON, XML, YAML, Parquet, etc.

Key Skills: Java, Spring Boot, NodeJS, SQL/PLSQL, ReactJS

Advantageous:
- Prior experience in the Banking/Finance domain
- Experience with hybrid cloud solutions, preferably using GCP
- Experience with product development

How we'll support you:
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
Posted 3 months ago
4 - 9 years
6 - 15 Lacs
Bengaluru
Work from Office
Job Purpose and Impact
As a Data Engineer at Cargill you work across the full stack to design, develop and operate high-performance and data-centric solutions using our comprehensive and modern data capabilities and platforms. You will play a critical role in enabling analytical insights and process efficiencies for Cargill's diverse and complex business environments. You will work in a small team that shares your passion for building innovative, resilient, and high-quality solutions while sharing, learning and growing together.

Key Accountabilities
- Collaborate with business stakeholders, product owners and across your team on product or solution designs.
- Develop robust, scalable and sustainable data products or solutions utilizing cloud-based technologies.
- Provide moderately complex technical support through all phases of the product or solution life cycle.
- Perform data analysis, handle data modeling, and configure and develop data pipelines to move and optimize data assets.
- Build moderately complex prototypes to test new concepts, provide ideas on reusable frameworks, components and data products or solutions, and help promote adoption of new technologies.
- Independently solve moderately complex issues with minimal supervision, while escalating more complex issues to appropriate staff.
- Other duties as assigned.

Qualifications

Minimum Qualifications
- Bachelor's degree in a related field or equivalent experience
- Minimum of two years of related work experience
- Other minimum qualifications may apply

Preferred Qualifications
- Experience developing modern data architectures, including data warehouses, data lakes, data meshes, hubs and associated capabilities including ingestion, governance, modeling, observability and more.
- Experience with data collection and ingestion capabilities, including AWS Glue, Kafka Connect and others.
- Experience with data storage and management of large, heterogeneous datasets, including formats, structures, and cataloging with tools such as Iceberg, Parquet, Avro, ORC, S3, HDFS, Hive, Kudu or others.
- Experience with transformation and modeling tools, including SQL-based transformation frameworks, orchestration and quality frameworks including dbt, Apache NiFi, Talend, AWS Glue, Airflow, Dagster, Great Expectations, Oozie and others.
- Experience working in Big Data environments including tools such as Hadoop and Spark.
- Experience working in cloud platforms including AWS, GCP or Azure.
- Experience with streaming and stream integration or middleware platforms, tools, and architectures such as Kafka, Flink, JMS, or Kinesis.
- Strong programming knowledge of SQL, Python, R, Java, Scala or equivalent.
- Proficiency in engineering tooling including Docker, Git, and container orchestration services.
- Strong experience working in DevOps models with a demonstrable understanding of associated best practices for code management, continuous integration, and deployment strategies.
- Experience and knowledge of data governance considerations, including quality, privacy, and security, and the associated implications for data product development and consumption.
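A sketch under assumed names (not Cargill's actual stack) of the ingestion-and-cataloging style of work listed above: landing a partitioned Parquet dataset on S3 with PySpark. The bucket names, schema and partition column are illustrative:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-shipments").getOrCreate()

# Hypothetical semi-structured landing zone.
shipments = spark.read.json("s3a://landing-zone/shipments/2024/")

# Partition by event date so downstream scans can prune efficiently.
(
    shipments.withColumn("event_date", F.to_date(F.col("event_ts")))
             .repartition("event_date")
             .write.mode("append")
             .partitionBy("event_date")
             .parquet("s3a://data-lake/bronze/shipments/")
)
```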
Posted 3 months ago
5 - 10 years
25 - 30 Lacs
Chennai, Pune, Delhi
Work from Office
Should possess expertise in data engineering tools and storage such as Azure Synapse, Azure Data Factory and Azure Data Lake.
- Experience implementing automated Synapse pipelines.
- Ability to develop ADF ETL pipelines for transforming and ingesting data into the data warehouse.
- Experience with migration of conventional ETL (SSIS) from on-premise to the Azure (ADF) environment.
- 5+ years' experience with relational database technologies (Azure SQL, Azure Synapse, SQL Server, MySQL or similar).
- Ability to debug data load issues, transformation/translation problems, etc.
- Familiarity with Azure infrastructure and the ability to integrate with various services.
- Experience working with high-volume data and large objects.
- Experience working in DevOps environments integrated with Git for version control and CI/CD pipelines.
- Ability to understand data transformation and translation requirements and which tools to leverage to get the job done.
- Strong understanding of various data formats such as CSV, XML, JSON, Parquet, etc.
- Working knowledge of data quality approaches and error handling techniques.
- Good understanding of data modelling for data warehouses and data marts.
- Strong verbal and written communication skills.
- Ability to learn, contribute and grow in a fast-paced environment.
Good to have: Knowledge of CI/CD pipelines, Python/Spark scripting, and working knowledge of Azure Logic Apps.
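Not the team's pipeline, just a hedged PySpark illustration of the debugging and data-quality angle above: loading CSV with an explicit schema and routing malformed rows to a reject path. The storage paths and columns are assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()

schema = StructType([
    StructField("order_id", StringType(), False),
    StructField("amount", DoubleType(), True),
    StructField("_corrupt_record", StringType(), True),   # captures unparseable rows
])

orders = (
    spark.read.schema(schema)
         .option("header", True)
         .option("mode", "PERMISSIVE")
         .option("columnNameOfCorruptRecord", "_corrupt_record")
         .csv("abfss://raw@account.dfs.core.windows.net/orders/")
         .cache()   # cache before filtering on the corrupt-record column
)

# Route bad rows to a reject area, keep clean rows for the warehouse load.
orders.where("_corrupt_record IS NOT NULL").write.mode("append").json("/rejects/orders/")
(
    orders.where("_corrupt_record IS NULL")
          .drop("_corrupt_record")
          .write.mode("overwrite")
          .parquet("/curated/orders/")
)
```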
Posted 3 months ago
3 - 5 years
5 - 7 Lacs
Bengaluru
Work from Office
Job Title: DNA - Proximus Account

Responsibilities
Build robust, performant, highly scalable and flexible data pipelines with a focus on time to market with quality.
- Act as an active team member to ensure high code quality (unit testing, regression tests) delivered in time and within budget.
- Document the delivered code/solution.
- Participate in the implementation of the releases, following the change and release management processes.
- Provide support to the operations team in case of major incidents for which engineering knowledge is required.
- Participate in effort estimations.
- Provide solutions (bug fixes) for problem management.

Technical and Professional Requirements
- You have experience with most of these technologies: HDFS, Ozone, Hive, Impala, Spark, Atlas, Ranger. Knowledge of GraphQL, Venafi (certificate management) and Collibra (data governance) is an asset.
- Experience in a telecommunication environment and real-time technologies with a focus on high availability and high-volume processing is an advantage: Kafka, Flink, Spark Streaming.
- You master programming languages such as Java and Python/PySpark as well as SQL, and are proficient in UNIX scripting.
- Data formats like JSON, Parquet, XML and REST APIs have no secrets for you.
- You have experience with CI/CD (GitLab/GitHub, Jenkins, Ansible, Nexus) for automated build and test. Knowledge of the Azure DevOps toolset is an asset.
As the project is preparing a move to Azure, the above will change slightly in the course of 2025. However, most of our current technological landscape remains a solid foundation for a role as EDH Data Engineer.

Preferred Skills: Technology -> Big Data - Data Processing -> Spark

Additional Responsibilities
Preferred:
- You are fluent in English, both spoken and written.
- You are familiar with agile (Scrum) development principles.
- Customer, solution and improvement minded.
- Good communicator who values collaboration with others.
Good to have:
- Experience with DevOps practices.
- Experience in using GitHub Copilot.
- Experience in NoSQL databases (Couchbase).
- Experience in Power BI.

Educational Requirements: Bachelor of Engineering
Service Line: Data & Analytics Unit
* Location of posting is subject to business requirements
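A brief sketch, not the Proximus pipeline itself: PySpark Structured Streaming reading JSON events from Kafka and persisting them to HDFS as Parquet. The broker address, topic name and schema are assumptions:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("edh-events").getOrCreate()

# Assumed event schema for the JSON payloads.
schema = StructType([
    StructField("msisdn", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", LongType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "network-events")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Continuously append parsed events to HDFS for downstream Hive/Impala access.
query = (
    events.writeStream.format("parquet")
          .option("path", "hdfs:///data/edh/network_events/")
          .option("checkpointLocation", "hdfs:///checkpoints/network_events/")
          .start()
)
query.awaitTermination()
```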
Posted 3 months ago