12.0 - 20.0 years
35 - 50 Lacs
Bengaluru
Hybrid
Data Architect with cloud expertise: Data Architecture, Data Integration & Data Engineering. ETL/ELT - Talend, Informatica, Apache NiFi. Big Data - Hadoop, Spark. Cloud platforms (AWS, Azure, GCP) - Redshift, BigQuery. Python, SQL, Scala. GDPR, CCPA.
Posted 1 month ago
5.0 - 10.0 years
20 - 25 Lacs
Pune
Work from Office
Job Purpose: Vi is seeking an experienced Apache NiFi Developer to join our data engineering team. In this role, you will be responsible for designing, building, and managing data flows using Apache NiFi. The ideal candidate has a strong background in data integration and transformation, and experience with real-time data pipelines in enterprise environments. This role requires hands-on experience with NiFi and related data ingestion and processing technologies.

Key Result Areas/Accountabilities: Data Flow Design and Development: Create, configure, and manage data flows in Apache NiFi to support data integration from diverse sources, such as databases, cloud storage, and APIs. Data Transformation and Routing: Develop and implement data transformation, routing, and enrichment workflows within NiFi to ensure data consistency and quality. Data Ingestion and Processing: Set up and manage data ingestion pipelines that enable real-time and batch data processing, using NiFi processors for custom integrations and transformations. Monitoring and Optimization: Monitor, troubleshoot, and optimize NiFi workflows for performance, reliability, and scalability.

Core Competencies, Knowledge, Experience: Overall 6+ years of experience in database/NBI development, with a minimum of 2 years managing and integrating the Kafka layer. Strong hands-on experience with Apache Kafka (setup, configuration, and tuning). Experience with Kafka Streams and/or Kafka Connect for real-time data processing and integration. Proficiency in Kafka producer/consumer development (using Java, Scala, Python, or other Kafka-compatible languages); a minimal consumer sketch follows this posting. Familiarity with NoSQL (e.g., Cassandra, MongoDB) and SQL databases. Solid understanding of message queuing, event-driven architecture, and pub/sub systems.

Must-have technical/professional qualifications: Bachelor's degree in Computer Science with 4+ years of experience with Apache NiFi for data integration and flow management. Background in real-time data processing and data pipeline management. Familiarity with cloud platforms (AWS, Azure, Google Cloud) and cloud-based storage solutions.
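For illustration only: a minimal Kafka consumer sketch in Python matching the producer/consumer skills listed above, using the kafka-python client. The topic, broker address, and group id are hypothetical, not details from the posting.

```python
# Minimal Kafka consumer sketch (kafka-python); topic and broker are illustrative.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "cdr-events",                           # hypothetical topic
    bootstrap_servers=["broker1:9092"],     # hypothetical broker
    group_id="nifi-ingest-group",           # hypothetical consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    record = message.value
    # Route/enrich the record here, e.g. before handing it to a NiFi flow or a sink.
    print(message.topic, message.partition, message.offset, record.get("id"))
```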
Posted 1 month ago
3.0 - 8.0 years
2 Lacs
Hyderabad
Work from Office
Key responsibilities: Understand the program's service catalog and document the list of tasks to be performed for each service. Lead the design, development, and maintenance of ETL processes to extract, transform, and load data from various sources into our data warehouse. Implement best practices for data loading, ensuring optimal performance and data quality. Utilize your expertise in IDMC to establish and maintain data governance, data quality, and metadata management processes. Implement data controls to ensure compliance with data standards, security policies, and regulatory requirements. Collaborate with data architects to design and implement scalable and efficient data architectures that support business intelligence and analytics requirements. Work on data modeling and schema design to optimize database structures for ETL processes. Identify and implement performance optimization strategies for ETL processes, ensuring timely and efficient data loading. Troubleshoot and resolve issues related to data integration and performance bottlenecks. Collaborate with cross-functional teams, including data scientists, business analysts, and other engineering teams, to understand data requirements and deliver effective solutions. Provide guidance and mentorship to junior members of the data engineering team. Create and maintain comprehensive documentation for ETL processes, data models, and data flows. Ensure that documentation is kept up to date with any changes to data architecture or ETL workflows. Use Jira for task tracking and project management. Implement data quality checks and validation processes to ensure data integrity and reliability. Maintain detailed documentation of data engineering processes and solutions.

Required Skills: Bachelor's degree in Computer Science, Engineering, or a related field. Proven experience as a Senior ETL Data Engineer, with a focus on IDMC/IICS. Strong proficiency in ETL tools and frameworks (e.g., Informatica Cloud, Talend, Apache NiFi). Expertise in IDMC principles, including data governance, data quality, and metadata management. Solid understanding of data warehousing concepts and practices. Strong SQL skills and experience working with relational databases. Excellent problem-solving and analytical skills.

Qualified candidates should APPLY NOW for immediate consideration! Please hit APPLY to provide the required information, and we will be back in touch as soon as possible. Thank you!

ABOUT INNOVA SOLUTIONS: Founded in 1998 and headquartered in Atlanta, Georgia, Innova Solutions employs approximately 50,000 professionals worldwide and reports an annual revenue approaching $3 billion. Through our global delivery centers across North America, Asia, and Europe, we deliver strategic technology and business transformation solutions to our clients, enabling them to operate as leaders within their fields.

Recent Recognitions: One of the largest IT consulting staffing firms in the USA, recognized as #4 by Staffing Industry Analysts (SIA 2022). ClearlyRated Client Diamond Award Winner (2020). One of the largest certified MBE companies in the NMSDC network (2022). Advanced Tier Services partner with AWS and Gold partner with Microsoft.
Posted 1 month ago
1.0 - 3.0 years
1 - 5 Lacs
Bengaluru
Remote
Seeking a PHP Developer with 1+ year of experience in backend service design, implementation, and automation; the ability to integrate multiple systems and optimize technical workflows; and the capacity to support the engineering team in enhancing DevOps tools and internal development processes.

Designation: PHP Developer (6-month direct contract with Cimpress). Notice period: immediate to 30 days preferred.

Requirements: 1. 1+ year of experience as a PHP Developer. 2. Solid understanding of PHP frameworks (Symfony preferred). 3. Experience with workflow automation tools (e.g., Apache NiFi). 4. Familiarity with modern DevOps practices and toolchains. 5. Experience with RESTful API development and integration. 6. Proficiency with MySQL or other relational databases. 7. Experience working in remote teams with asynchronous communication.

Responsibilities: 1. Write clean, fast, and scalable PHP code. 2. Contribute to designing, developing, and integrating internal tooling for DevOps. 3. Follow industry best practices and contribute to documentation. 4. Design and implement solutions to improve development workflows. 5. Support tool adoption across teams by building reliable components.

Nice to Have: 1. Experience with Apache NiFi or similar workflow automation tools. 2. Familiarity with modern DevOps toolchains.

Remote-First Culture: In 2020, Cimpress adopted a Remote-First operating model and culture. We heard from our team members that having the freedom, autonomy, and trust in each other to work from home, and the ability to operate when they are most productive, empowers everyone to be their best and most brilliant self. Cimpress also provides collaboration spaces for team members to work physically together when it's safe to do so, or when they believe in-office working will deliver the best results. Currently we are able to hire remote team members in over 20 US states as well as several countries in Europe: Spain, Germany, UK, Czech Republic, the Netherlands, and Switzerland.

About Us: Led by founder and CEO Robert Keane, Cimpress invests in and helps build customer-focused, entrepreneurial mass customization businesses. Through the personalized physical (and digital) products these companies create, we empower over 17 million global customers to make an impression. Last year, Cimpress generated $3.5B in revenue through customized print products, signage, apparel, packaging and more. The Cimpress family includes a dynamic, international group of businesses and central teams, all working to solve problems, build businesses, innovate and improve.
Posted 1 month ago
6.0 - 11.0 years
10 - 14 Lacs
Hyderabad, Gurugram
Work from Office
About the Role: Grade Level (for internal use): 10. Position Title: Senior Software Developer.

The Team: Do you love to collaborate and provide solutions? This team comes together across eight different locations every single day to craft enterprise-grade applications that serve a large customer base with growing demand and usage. You will use a wide range of technologies and cultivate a collaborative environment with other internal teams.

The Impact: We focus primarily on developing, enhancing, and delivering required pieces of information and functionality to internal and external clients in all client-facing applications. You will have a highly visible role where even small changes have very wide impact.

What's in it for you: Opportunities for innovation and learning new state-of-the-art technologies, and the chance to work in pure agile and scrum methodology.

Responsibilities: Design and implement software-related projects. Perform analyses and articulate solutions. Design underlying engineering for use in multiple product offerings supporting a large volume of end-users. Develop project plans with task breakdowns and estimates. Manage and improve existing solutions. Solve a variety of complex problems and figure out possible solutions, weighing the costs and benefits.

What we're looking for (Basic Qualifications): Bachelor's degree in Computer Science or equivalent. 6+ years related experience. Passionate, smart, and articulate developer. Strong C#, WPF, and SQL skills. Experience implementing Web Services (with WCF, RESTful JSON, SOAP, TCP), Windows Services, and unit tests. Dependency injection. Able to demonstrate strong OOP skills. Able to work well individually and with a team. Strong problem-solving skills. Good work ethic, self-starter, and results-oriented. Interest and experience in Environmental and Sustainability content is a plus. Agile/Scrum experience is a plus. Exposure to Data Engineering and Big Data technologies like Hadoop, Spark/Scala, NiFi, and ETL is a plus.

Preferred Qualifications: Experience with Docker is a plus. Experience working in cloud computing environments such as AWS, Azure, or GCP. Experience with large-scale messaging systems such as Kafka, RabbitMQ, or commercial systems.

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People, Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: health care coverage designed for the mind and body. Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: it's not just about you; S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories - United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority Ratings - (Strategic Workforce Planning)
Posted 1 month ago
4.0 - 8.0 years
25 - 30 Lacs
Pune
Hybrid
So, what's the role all about? As a Data Engineer, you will be responsible for designing, building, and maintaining large-scale data systems, as well as working with cross-functional teams to ensure efficient data processing and integration. You will leverage your knowledge of Apache Spark to create robust ETL processes, optimize data workflows, and manage high volumes of structured and unstructured data.

How will you make an impact? Design, implement, and maintain data pipelines using Apache Spark for processing large datasets. Work with data engineering teams to optimize data workflows for performance and scalability. Integrate data from various sources, ensuring clean, reliable, and high-quality data for analysis. Develop and maintain data models, databases, and data lakes. Build and manage scalable ETL solutions to support business intelligence and data science initiatives. Monitor and troubleshoot data processing jobs, ensuring they run efficiently and effectively. Collaborate with data scientists, analysts, and other stakeholders to understand business needs and deliver data solutions. Implement data security best practices to protect sensitive information. Maintain a high level of data quality and ensure timely delivery of data to end-users. Continuously evaluate new technologies and frameworks to improve data engineering processes.

Have you got what it takes? 8-11 years of experience as a Data Engineer, with a strong focus on Apache Spark and big data technologies. Expertise in Spark SQL, DataFrames, and RDDs for data processing and analysis (a minimal pipeline sketch follows this posting). Proficiency in programming languages such as Python, Scala, or Java for data engineering tasks. Hands-on experience with cloud platforms like AWS, specifically with data processing and storage services (e.g., S3, BigQuery, Redshift, Databricks). Experience with ETL frameworks and tools such as Apache Kafka, Airflow, or NiFi. Strong knowledge of data warehousing concepts and technologies (e.g., Redshift, Snowflake, BigQuery). Familiarity with containerization technologies like Docker and Kubernetes. Knowledge of SQL and relational databases, with the ability to design and query databases effectively. Solid understanding of distributed computing, data modeling, and data architecture principles. Strong problem-solving skills and the ability to work with large and complex datasets. Excellent communication and collaboration skills to work effectively with cross-functional teams.

What's in it for you? Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7235. Reporting into: Tech Manager. Role Type: Individual Contributor.
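For illustration: a minimal PySpark ETL sketch of the read-transform-aggregate-write pattern the posting describes. The S3 paths and column names are assumptions for the example, not the employer's pipeline.

```python
# Minimal PySpark ETL sketch; paths and column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.json("s3a://raw-bucket/orders/")          # hypothetical source

# Deduplicate, filter bad rows, and derive a date column.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("created_at"))
)

# Aggregate to a daily summary for downstream BI consumers.
daily = clean.groupBy("order_date").agg(
    F.count("*").alias("orders"),
    F.sum("amount").alias("revenue"),
)

daily.write.mode("overwrite").parquet("s3a://curated-bucket/daily_orders/")
spark.stop()
```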
Posted 1 month ago
10.0 - 15.0 years
12 - 18 Lacs
Pune
Work from Office
Responsibilities: * Design and deliver corporate training programs using Python * Ensure proficiency in Python, PySpark, data structures, NumPy, Pandas, AWS, Azure, GCP Cloud, data visualization, and Big Data tools * Experience in core Python skills. Perks: food allowance, travel allowance, house rent allowance.
Posted 1 month ago
8.0 - 13.0 years
25 - 30 Lacs
Pune
Hybrid
1. Experienced with asynchronous programming, multithreading, implementing APIs, and microservices, including Spring Boot. 2. Proficiency with SQL. Required Candidate profile: 5+ years of professional experience in Java 8 or higher. Strong expertise in Spring Boot. Solid understanding of microservices architecture. Kafka, messaging/streaming stack, JUnit, code optimization.
Posted 1 month ago
3.0 - 6.0 years
6 - 16 Lacs
Noida, Mumbai (All Areas)
Hybrid
Project Role: Data Engineer. Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills: Google BigQuery. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years of full-time education.

Summary: As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems using Google BigQuery (a minimal load-job sketch follows this posting).

Roles & Responsibilities: - Design, develop, and maintain data solutions for data generation, collection, and processing using Google BigQuery. - Create data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. - Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs. - Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. - Optimize data storage and retrieval processes to ensure efficient and effective use of resources.

Professional & Technical Skills: - Must-Have Skills: Experience with Google BigQuery. - Good-to-Have Skills: Experience with ETL tools such as Apache NiFi or Talend. - Strong understanding of data modeling and database design principles. - Experience with SQL and NoSQL databases. - Experience with data warehousing and data integration technologies. - Familiarity with cloud computing platforms such as AWS or Google Cloud Platform.
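For illustration: a minimal sketch of loading files from Cloud Storage into BigQuery with the google-cloud-bigquery client, the kind of ETL step this role describes. The project, dataset, table, and bucket names are hypothetical.

```python
# Minimal BigQuery load sketch (google-cloud-bigquery); names are illustrative.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

table_id = "my-project.analytics.events"            # hypothetical table
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,                                 # infer schema for the sketch
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/events/2024-01-01/*.json",       # hypothetical GCS path
    table_id,
    job_config=job_config,
)
load_job.result()  # block until the load job completes

print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")
```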
Posted 1 month ago
2.0 - 6.0 years
4 - 9 Lacs
Hyderabad
Work from Office
Design and Develop Data Flows; Integration with Data Sources; Data Transformation; Error Handling and Monitoring; Performance Optimization; Collaboration; Documentation; Security and Compliance. Required Candidate profile: Apache NiFi and data integration tools; ETL concepts; data formats like JSON, XML, and Avro; programming languages such as Java, Python, or Groovy; data storage solutions such as Hadoop, Kafka.
Posted 1 month ago
12.0 - 15.0 years
55 - 60 Lacs
Ahmedabad, Chennai, Bengaluru
Work from Office
Dear Candidate, We are hiring a Data Platform Engineer to build and maintain scalable, secure, and reliable data infrastructure for analytics and real-time processing.

Key Responsibilities: Design and manage data pipelines, storage layers, and ingestion frameworks. Build platforms for batch and streaming data processing (Spark, Kafka, Flink). Optimize data systems for scalability, fault tolerance, and performance. Collaborate with data engineers, analysts, and DevOps to enable data access. Enforce data governance, access controls, and compliance standards.

Required Skills & Qualifications: Proficiency with distributed data systems (Hadoop, Spark, Kafka, Airflow). Strong SQL and experience with cloud data platforms (Snowflake, BigQuery, Redshift). Knowledge of data warehousing, lakehouse, and ETL/ELT pipelines. Experience with infrastructure as code and automation. Familiarity with data quality, security, and metadata management.

Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Srinivasa Reddy Kandi, Delivery Manager, Integra Technologies.
Posted 1 month ago
5.0 - 10.0 years
9 - 13 Lacs
Pune
Work from Office
Project Role: Data Platform Engineer. Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Must have skills: Databricks Unified Data Analytics Platform. Good to have skills: NA. Minimum 5 year(s) of experience is required. Educational Qualification: An Engineering graduate, preferably Computer Science; 15 years of full-time education.

Summary: Overall 7+ years of experience in industry, including 4 years of experience as a developer using Big Data technologies like Databricks/Spark and Hadoop ecosystems. Hands-on experience with Unified Data Analytics on Databricks: the Databricks Workspace user interface, managing Databricks notebooks, Delta Lake with Python, and Delta Lake with Spark SQL. Good understanding of Spark architecture with Databricks and Structured Streaming; setting up the cloud platform with Databricks and the Databricks Workspace. Working knowledge of distributed processing, data warehouse concepts, NoSQL, processing of huge amounts of data, RDBMS, testing, data management principles, data mining, and data modelling. As a Data Platform Engineer, you will be responsible for assisting with the blueprint and design of the data platform components using Databricks Unified Data Analytics Platform. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. A minimal Delta Lake sketch follows this posting.

Roles & Responsibilities: - Assist with the blueprint and design of the data platform components using Databricks Unified Data Analytics Platform. - Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. - Develop and maintain data pipelines using Databricks Unified Data Analytics Platform. - Troubleshoot and resolve issues related to data pipelines and data platform components. - Ensure data quality and integrity by implementing data validation and testing procedures.

Professional & Technical Skills: - Must-Have Skills: Experience with Databricks Unified Data Analytics Platform; strong understanding of data modeling and database design principles. - Good-to-Have Skills: Experience with Apache Spark and Hadoop; experience with cloud-based data platforms such as AWS or Azure. - Proficiency in programming languages such as Python or Java. - Experience with data integration and ETL tools such as Apache NiFi or Talend.

Additional Information: - The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform. - The ideal candidate will possess a strong educational background in computer science, software engineering, or a related field, along with a proven track record of delivering impactful data-driven solutions. - This position is based at our Chennai, Bengaluru, Hyderabad and Pune offices.

Qualification: An Engineering graduate, preferably Computer Science; 15 years of full-time education.
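For illustration: a short sketch of the "Delta Lake with Python" and "Delta Lake with Spark SQL" skills named above. It assumes a Databricks notebook, where the `spark` session is predefined; the mount path and table names are hypothetical.

```python
# Minimal Delta Lake sketch for a Databricks notebook, where `spark` is
# predefined; the landing path and table names are illustrative.
raw = spark.read.json("/mnt/raw/events/")           # hypothetical landing zone

# "Delta Lake with Python": append into a managed bronze table.
raw.write.format("delta").mode("append").saveAsTable("lake.events_bronze")

# "Delta Lake with Spark SQL": query the same table with SQL.
spark.sql("""
    SELECT event_type, COUNT(*) AS n
    FROM lake.events_bronze
    GROUP BY event_type
    ORDER BY n DESC
""").show()
```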
Posted 1 month ago
4.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Power BI and AAS expert (strong SC or Specialist Senior). Should have hands-on experience of data modelling in Azure SQL Data Warehouse and Azure Analysis Services. Should be able to write and test DAX queries. Should be able to generate paginated reports in Power BI. Should have a minimum of 3 years' working experience in delivering projects in Power BI.

Must Have: 3 to 8 years of experience working on the design, development, and deployment of ETL processes on Databricks to support data integration and transformation. Optimize and tune Databricks jobs for performance and scalability. Experience with Scala and/or Python programming languages. Proficiency in SQL for querying and managing data. Expertise in ETL (Extract, Transform, Load) processes. Knowledge of data modeling and data warehousing concepts. Implement best practices for data pipelines, including monitoring, logging, and error handling. Excellent problem-solving skills and attention to detail. Excellent written and verbal communication skills. Strong analytical and problem-solving abilities. Experience in version control systems (e.g., Git) to manage and track changes to the codebase. Document technical designs, processes, and procedures related to Databricks development. Stay current with Databricks platform updates and recommend improvements to existing processes.

Good to Have: Agile delivery experience. Experience with cloud services, particularly Azure (Azure Databricks), AWS (AWS Glue, EMR), or Google Cloud Platform (GCP). Knowledge of Agile and Scrum software development methodologies. Understanding of data lake architectures. Familiarity with tools like Apache NiFi, Talend, or Informatica. Skills in designing and implementing data models.
Posted 1 month ago
4.0 - 6.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Only for Immediate Joiners. Core Responsibility: The project team will be spread between Paris and Bangalore, so the candidate, with 3-6 years of experience, is expected to work and coordinate on a daily basis with the remote teams. Ability to learn new technology/framework/methodology. Hands-on individual responsible for producing an excellent quality of code, adhering to expected coding standards and industry best practices. Must have strong knowledge and working experience of the Big Data ecosystem. Must have strong experience in Spark/Scala, NiFi, Kafka, Hive, Pig. Strong knowledge and experience working with HQL (Hive Query Language). Must have strong expertise in debugging and fixing production issues on the Big Data ecosystem. Knowledge of code version management using Git, Jenkins, and Nexus. High levels of ownership and commitment on deliverables. Strong and adaptive communication skills; should be comfortable interacting with Paris counterparts to probe a technical problem or clarify requirement specifications.

KEY SKILLS: Sound knowledge of Spark/Scala, NiFi, Kafka - must have. Sound knowledge of HQL. Knowledge of Kibana, Elasticsearch, Logstash - good to know. Basic awareness of CI/CD concepts and technologies - good to know. Big Data ecosystem - good to know.
Posted 1 month ago
6.0 - 8.0 years
6 - 12 Lacs
Hyderabad
Work from Office
Key Responsibilities: Design, develop, and maintain scalable data pipelines using Snowflake. Develop and optimize complex SQL queries, views, and stored procedures. Migrate data from legacy systems to Snowflake using ETL tools like Informatica, Talend, dbt, or Matillion. Implement data modeling techniques (Star, Snowflake schemas) and maintain the data dictionary. Ensure performance tuning, data quality, and security across all Snowflake objects. Integrate Snowflake with BI tools like Tableau, Power BI, or Looker. Collaborate with data analysts, data scientists, and business teams to understand requirements and deliver solutions. Monitor and manage Snowflake environments using tools like Snowsight, SnowSQL, or CloudWatch. Participate in code reviews and enforce best practices for data governance and security. Develop automation scripts using Python, Shell, or Airflow for data workflows (a minimal ingestion sketch follows this posting).

Required Skills: 6+ years of experience in data engineering/data warehousing. 3+ years of hands-on experience with the Snowflake Cloud Data Platform. Strong expertise in SQL, performance tuning, data modeling, and query optimization. Experience with ETL tools like Informatica, Talend, Apache NiFi, or dbt. Proficiency in cloud platforms: AWS/Azure/GCP (preferably AWS). Good understanding of DevOps/CI-CD principles for Snowflake deployments. Hands-on experience with scripting languages: Python, Bash, etc. Knowledge of RBAC, masking policies, and row access policies in Snowflake.
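For illustration: a minimal Python sketch of a Snowflake ingestion step using snowflake-connector-python, loading staged files with COPY INTO. The account, credentials, stage, and table names are placeholders, not details from the posting.

```python
# Minimal Snowflake ingestion sketch (snowflake-connector-python);
# account, stage, and table names are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",   # hypothetical account locator
    user="ETL_USER",
    password="...",                  # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Load staged CSV files into a table; COPY INTO reports per-file results.
    cur.execute("""
        COPY INTO STAGING.ORDERS
        FROM @ORDERS_STAGE
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    print(cur.fetchall())  # one row per loaded file
finally:
    conn.close()
```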
Posted 1 month ago
4.0 - 9.0 years
6 - 16 Lacs
Coimbatore
Work from Office
Position Name: Data Engineer. Location: Coimbatore (hybrid, 3 days per week). Work Shift Timing: 1.30 pm to 10.30 pm (IST). Mandatory Skills: Hadoop, Spark, Python, Databricks. Good to have: Java/Scala.

The Role: • Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. • Constructing infrastructure for efficient ETL processes from various sources and storage systems. • Leading the implementation of algorithms and prototypes to transform raw data into useful information. • Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations. • Creating innovative data validation methods and data analysis tools. • Ensuring compliance with data governance and security policies. • Interpreting data trends and patterns to establish operational alerts. • Developing analytical tools, programs, and reporting mechanisms. • Conducting complex data analysis and presenting results effectively. • Preparing data for prescriptive and predictive modeling. • Continuously exploring opportunities to enhance data quality and reliability. • Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements: • Experience in Big Data technologies (Hadoop, Spark, NiFi, Impala). • Hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines. • High proficiency in Scala/Java and Spark for applied large-scale data processing. • Expertise with big data technologies, including Spark, Data Lake, and Hive. • Solid understanding of batch and streaming data processing techniques. • Proficient knowledge of the Data Lifecycle Management process, including data collection, access, use, storage, transfer, and deletion. • Expert-level ability to write complex, optimized SQL queries across extensive data volumes. • Experience with HDFS, NiFi, Kafka. • Experience with Apache Ozone, Delta Tables, Databricks, Axon (Kafka), Spring Batch, Oracle DB. • Familiarity with Agile methodologies. • Obsession for service observability, instrumentation, monitoring, and alerting. • Knowledge of or experience in architectural best practices for building data lakes.

Interested candidates can share their resume at Neesha1@damcogroup.com
Posted 1 month ago
4.0 - 9.0 years
3 - 7 Lacs
Pune
Work from Office
Req ID: 324609. We are currently seeking a Data Engineer to join our team in Pune, Maharashtra (IN-MH), India (IN).

Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack. Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure. Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations. Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS. Develop and deliver detailed presentations to effectively communicate complex technical concepts. Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc. Adhere to Agile practices throughout the solution development process. Design, build, and deploy databases and data stores to support organizational requirements.

Basic Qualifications: 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects. 2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions. Experience with Informatica, Python, Databricks, and Azure data engineering. Ability to travel at least 25%.

Preferred Skills: Demonstrate production experience in core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, Hadoop, and more. Possess hands-on knowledge of Cloud and Distributed Data Storage, including expertise in HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems. Exhibit a strong understanding of data integration technologies, encompassing Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, and Google Dataproc. Showcase professional written and verbal communication skills to effectively convey complex technical concepts. Undergraduate or Graduate degree preferred.
Posted 1 month ago
4.0 - 9.0 years
3 - 7 Lacs
Pune
Work from Office
Req ID: 324653. We are currently seeking a Data Engineer to join our team in Pune, Maharashtra (IN-MH), India (IN).

Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack. Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure. Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations. Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS. Develop and deliver detailed presentations to effectively communicate complex technical concepts. Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc. Adhere to Agile practices throughout the solution development process. Design, build, and deploy databases and data stores to support organizational requirements.

Basic Qualifications: 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects. 2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions. Experience with Informatica, Python, Databricks, and Azure data engineering. Ability to travel at least 25%.

Preferred Skills: Demonstrate production experience in core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, Hadoop, and more. Possess hands-on knowledge of Cloud and Distributed Data Storage, including expertise in HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems. Exhibit a strong understanding of data integration technologies, encompassing Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, and Google Dataproc. Showcase professional written and verbal communication skills to effectively convey complex technical concepts. Undergraduate or Graduate degree preferred.
Posted 1 month ago
4.0 - 9.0 years
3 - 7 Lacs
Chennai
Work from Office
Req ID: 324631. We are currently seeking a Data Engineer to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack. Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure. Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations. Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS. Develop and deliver detailed presentations to effectively communicate complex technical concepts. Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc. Adhere to Agile practices throughout the solution development process. Design, build, and deploy databases and data stores to support organizational requirements.

Basic Qualifications: 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects. 2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions. Experience with Informatica, Python, Databricks, and Azure data engineering. Ability to travel at least 25%.

Preferred Skills: Demonstrate production experience in core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, Hadoop, and more. Possess hands-on knowledge of Cloud and Distributed Data Storage, including expertise in HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems. Exhibit a strong understanding of data integration technologies, encompassing Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, and Google Dataproc. Showcase professional written and verbal communication skills to effectively convey complex technical concepts. Undergraduate or Graduate degree preferred.
Posted 1 month ago
4.0 - 9.0 years
3 - 7 Lacs
Chennai
Work from Office
Req ID: 324632. We are currently seeking a Data Engineer to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack. Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure. Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations. Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS. Develop and deliver detailed presentations to effectively communicate complex technical concepts. Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc. Adhere to Agile practices throughout the solution development process. Design, build, and deploy databases and data stores to support organizational requirements.

Basic Qualifications: 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects. 2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions. Experience with Informatica, Python, Databricks, and Azure data engineering. Ability to travel at least 25%.

Preferred Skills: Demonstrate production experience in core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, Hadoop, and more. Possess hands-on knowledge of Cloud and Distributed Data Storage, including expertise in HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems. Exhibit a strong understanding of data integration technologies, encompassing Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, and Google Dataproc. Showcase professional written and verbal communication skills to effectively convey complex technical concepts. Undergraduate or Graduate degree preferred.
Posted 1 month ago
15.0 - 20.0 years
6 - 10 Lacs
Mumbai
Work from Office
Location: Mumbai. Experience: 15+ years in data engineering/architecture.

Role Overview: Lead the architectural design and implementation of a secure, scalable Cloudera-based Data Lakehouse for one of India's top public sector banks.

Key Responsibilities: * Design end-to-end Lakehouse architecture on Cloudera * Define data ingestion, processing, storage, and consumption layers * Guide data modeling, governance, lineage, and security best practices * Define migration roadmap from existing DWH to CDP * Lead reviews with client stakeholders and engineering teams

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: * Proven experience with Cloudera CDP, Spark, Hive, HDFS, Iceberg * Deep understanding of Lakehouse patterns and data mesh principles * Familiarity with data governance tools (e.g., Apache Atlas, Collibra) * Banking/FSI domain knowledge highly desirable
Posted 1 month ago
3.0 - 8.0 years
9 - 13 Lacs
Mumbai
Work from Office
Role Overview: As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems.

Key Responsibilities: Build scalable batch and real-time ETL pipelines using Spark and Hive (a partitioned-write sketch follows this posting). Integrate structured and unstructured data sources. Perform performance tuning and code optimization. Support orchestration and job scheduling (NiFi, Airflow).

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Experience: 3-15 years. Proficiency in PySpark/Scala with Hive/Impala. Experience with data partitioning, bucketing, and optimization. Familiarity with Kafka, Iceberg, NiFi is a must. Knowledge of banking or financial datasets is a plus.
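For illustration: a minimal PySpark sketch of the partitioning and bucketing techniques the posting lists, writing a curated Hive table. The paths, column names, and table name are hypothetical.

```python
# Minimal sketch of a partitioned, bucketed Hive write in PySpark;
# paths and table names are illustrative, not from the posting.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("txn-ingest")
    .enableHiveSupport()                               # write to the Hive metastore
    .getOrCreate()
)

txns = spark.read.parquet("/data/raw/transactions/")   # hypothetical source path

(
    txns.write.mode("overwrite")
        .partitionBy("txn_date")                       # enables partition pruning by date
        .bucketBy(16, "account_id")                    # co-locates rows per account for joins
        .sortBy("account_id")
        .saveAsTable("bank.transactions_curated")      # hypothetical Hive table
)
```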
Posted 1 month ago
15.0 - 20.0 years
5 - 9 Lacs
Mumbai
Work from Office
Location: Mumbai.

Role Overview: As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems.

Key Responsibilities: Build scalable batch and real-time ETL pipelines using Spark and Hive. Integrate structured and unstructured data sources. Perform performance tuning and code optimization. Support orchestration and job scheduling (NiFi, Airflow).

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Experience: 3-15 years. Proficiency in PySpark/Scala with Hive/Impala. Experience with data partitioning, bucketing, and optimization. Familiarity with Kafka, Iceberg, NiFi is a must. Knowledge of banking or financial datasets is a plus.
Posted 1 month ago
3.0 - 5.0 years
3 - 7 Lacs
Gurugram
Work from Office
About the Opportunity. Job Type: Application. 23 June 2025. Title: Expert Engineer. Department: GPS Technology. Location: Gurugram, India. Reports To: Project Manager. Level: Grade 4.

We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together, and supporting each other, all over the world. So, join our [insert name of team/business area] team and feel like you're part of something bigger.

About your team: The Technology function provides IT services to the Fidelity International business, globally. These include the development and support of business applications that underpin our revenue, operational, compliance, finance, legal, customer service and marketing functions. The broader technology organisation incorporates infrastructure services that the firm relies on to operate on a day-to-day basis, including data centre, networks, proximity services, security, voice, incident management and remediation.

About your role: An Expert Engineer is a seasoned technology expert who is highly skilled in programming, engineering and problem-solving. They can deliver value to the business faster and with superlative quality: their code and designs meet business, technical, non-functional and operational requirements most of the time without defects and incidents. So, if a relentless focus and drive towards technical and engineering excellence, along with adding value to the business, excites you, this is absolutely a role for you. If technical discussions and whiteboarding with peers excite you, and pair programming and code reviews add fuel to your tank, come, we are looking for you. Understand system requirements; analyse, design, develop and test the application systems following the defined standards. The candidate is expected to display professional ethics in his/her approach to work and exhibit a high level of ownership within a demanding working environment.

About you. Essential Skills: You have excellent software designing, programming, engineering, and problem-solving skills. Strong experience working on data ingestion, transformation and distribution using AWS or Snowflake. Exposure to SnowSQL, Snowpipe, role-based access controls, and ETL/ELT tools like NiFi, Matillion, and dbt. Hands-on working knowledge of EC2, Lambda, ECS/EKS, DynamoDB, and VPCs. Familiar with building data pipelines that leverage the full power and best practices of Snowflake, as well as how to integrate common technologies that work with Snowflake (code CI/CD, monitoring, orchestration, data quality). Experience with designing, implementing, and overseeing the integration of data systems and ETL processes through SnapLogic. Designing data ingestion and orchestration pipelines using AWS and Control-M. Establish strategies for data extraction, ingestion, transformation, automation, and consumption. Experience in data lake concepts with structured, semi-structured and unstructured data. Experience in creating CI/CD processes for Snowflake. Experience in strategies for data testing, data quality, code quality, and code coverage. Ability, willingness and openness to experiment with, evaluate and adopt new technologies. Passion for technology, problem solving and team working. Go-getter; ability to navigate across roles, functions and business units to collaborate, and to drive agreements and changes from drawing board to live systems. Lifelong learner who can bring contemporary practices, technologies and ways of working to the organization. Effective collaborator adept at using all effective modes of communication and collaboration tools.

Experience delivering on data-related non-functional requirements, such as: hands-on experience dealing with large volumes of historical data across markets/geographies; manipulating, processing, and extracting value from large, disconnected datasets; building water-tight data quality gates on investment management data; generic handling of standard business scenarios in case of missing data, holidays, out-of-tolerance errors, etc.

Experience and Qualification: B.E./B.Tech. or M.C.A. in Computer Science from a reputed university. Total 7 to 10 years of relevant experience.

Personal Characteristics: Good interpersonal and communication skills. Strong team player. Ability to work at a strategic and tactical level. Ability to convey strong messages in a polite but firm manner. Self-motivation is essential; should demonstrate commitment to high-quality design and development. Ability to develop and maintain working relationships with several stakeholders. Flexibility and an open attitude to change. Problem-solving skills with the ability to think laterally, and to think with a medium-term and long-term perspective. Ability to learn and quickly get familiar with a complex business and technology environment.

Feel rewarded: For starters, we'll offer you a comprehensive benefits package. We'll value your wellbeing and support your development. And we'll be as flexible as we can about where and when you work, finding a balance that works for all of us. It's all part of our commitment to making you feel motivated by the work you do and happy to be part of our team.
Posted 1 month ago
10.0 - 15.0 years
25 - 40 Lacs
Mumbai
Work from Office
Overview of the Company: Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries.

Team Overview: The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution!

About the role: Title: Lead Data Engineer. Location: Mumbai.

Responsibilities: End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow. Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the team's data pipeline framework evolution. Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise. Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices. Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights. Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth.

Qualification Details: Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field. Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts. Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.). Database Expertise: Excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL); experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus. End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks, including streaming real-time data. Cloud Expertise: Knowledge of cloud technologies like Azure HDInsight, Synapse, and Event Hubs, and GCP Dataproc, Dataflow, and BigQuery. CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation.

Desired Skills & Attributes: Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively. Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders). Continuous Learning & Adaptability: A demonstrated passion for staying up to date with emerging data technologies and a willingness to adapt to new tools.
Posted 1 month ago