7 - 12 years
22 - 32 Lacs
Delhi NCR, Gurgaon, Noida
Work from Office
Must have: Apache NiFi, Alteryx, Power BI. Good to have: Apache Airflow, AWS Glue, Step Functions, Lambda.
Posted 2 months ago
1 - 5 years
8 - 18 Lacs
Navi Mumbai, Mumbai, Delhi
Work from Office
Below is the JD for the ClickHouse Database role. Help build production-grade systems based on ClickHouse: advise on schema design, cluster planning, etc. Environments range from single-node setups to clusters with hundreds of nodes, cloud deployments, and managed ClickHouse services. Work on infrastructure projects related to ClickHouse. Improve ClickHouse itself: fix bugs, improve docs, create test cases, etc. Study new usage patterns, ClickHouse functions, and integration with other products. Work with the community on GitHub, Stack Overflow, and Telegram. Install multi-node clusters; configure, back up, recover, and maintain the ClickHouse database. Monitor and optimize database performance, ensuring high availability and responsiveness. Troubleshoot database issues; identify and resolve performance bottlenecks. Design and implement database backup and recovery strategies. Develop and implement database security policies and procedures. Collaborate with development teams to optimize database schema design and queries. Provide technical guidance and support to development and operations teams. Experience with big data stack components like Hadoop, Spark, Kafka, and NiFi; experience with data science/data analysis; knowledge of SRE/DevOps stacks, including monitoring and system management tools (Prometheus, Ansible, ELK); version control using Git. Handle support calls from customers using ClickHouse, including diagnosing problems connecting to ClickHouse, designing applications, deploying/upgrading ClickHouse, and operations.
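As a hedged illustration of the schema-design work this role describes, here is a minimal sketch using the clickhouse-connect Python client (an assumed choice; clickhouse-driver or the SQL console would serve equally well). The host, table, and column names are hypothetical.

```python
# Minimal sketch: create a MergeTree table and run an aggregate query.
# Assumes a ClickHouse server on localhost; table/column names are hypothetical.
import datetime
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost", port=8123)

# ORDER BY defines the sparse primary index; PARTITION BY controls data layout.
client.command("""
    CREATE TABLE IF NOT EXISTS events (
        event_date Date,
        user_id    UInt64,
        action     String
    ) ENGINE = MergeTree
    PARTITION BY toYYYYMM(event_date)
    ORDER BY (event_date, user_id)
""")

client.insert(
    "events",
    [[datetime.date(2024, 1, 1), 42, "click"]],
    column_names=["event_date", "user_id", "action"],
)

# Columnar storage keeps aggregates like this cheap even on large tables.
result = client.query("SELECT action, count() AS hits FROM events GROUP BY action")
print(result.result_rows)
```

On a multi-node cluster the same table would typically use ReplicatedMergeTree with Keeper coordination, which is where the cluster-planning and backup/recovery duties above come in.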
Posted 2 months ago
6 - 11 years
2 - 2 Lacs
Chennai, Bengaluru, Hyderabad
Work from Office
We are looking for a skilled Data Lead to design, implement, and manage data pipelines and real-time data processing solutions. The ideal candidate will have hands-on experience with cloud platforms, data technologies, and tools like Snowflake, Apache Airflow, Kafka, and real-time streaming technologies. Key Responsibilities: Build and manage scalable data pipelines and real-time streaming data solutions. Work with cross-functional teams to ensure data is accessible for analytics and business intelligence. Optimize data workflows for high performance and reliability. Implement cloud-based solutions using AWS, Azure, or GCP. Lead and mentor a team of data engineers, ensuring best practices are followed. Troubleshoot data pipeline issues and improve data system performance. Must-Have Skills: Programming: Python (preferred), Java, or Scala. Cloud: AWS (preferred), Azure, or GCP. Data Warehousing: Snowflake. Data Orchestration: Apache Airflow (preferred), Prefect, or Dagster. Messaging/Streaming: Kafka (preferred), AWS SQS, or Google Cloud Pub/Sub. Real-Time Processing: Apache Flink (preferred), Apache Spark Streaming, or Kafka Streams.
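For the orchestration side of this role, a minimal Apache Airflow DAG sketch is shown below. The task bodies are placeholders, and the DAG id, schedule, and flow (Kafka batch into Snowflake) are illustrative assumptions rather than a prescribed design.

```python
# Minimal Airflow 2.x DAG sketch: hourly batch from a stream into a warehouse.
# Task bodies are placeholders; ids and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def consume_from_kafka():
    print("consume a micro-batch from Kafka")  # placeholder step


def load_to_snowflake():
    print("load the batch into Snowflake")  # placeholder step


with DAG(
    dag_id="kafka_to_snowflake",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",  # `schedule=` on Airflow 2.4+
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=consume_from_kafka)
    load = PythonOperator(task_id="load", python_callable=load_to_snowflake)
    extract >> load  # simple linear dependency
```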
Posted 3 months ago
4 - 9 years
6 - 11 Lacs
Bengaluru
Work from Office
About The Role: The Intel Foundry Manufacturing and Supply Chain (FMSC) Automation team is looking for a highly motivated Big Data Engineer with strong data engineering skills for data integration of various manufacturing data. You will be responsible for engaging with customers and driving development from ideation to deployment and beyond. This is a technical role that requires the direct design and development of robust, scalable, performant systems for world-class manufacturing data engineering. Responsibilities include: Create and maintain optimal data architecture. Assemble large, complex data sets that meet functional and non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources. Work with stakeholders, including users and cross-functional teams, to assist with data-related technical issues and support their data infrastructure needs. Follow standard processes to keep data secure with the right access and authorization. Focus on automated testing and robust monitoring. The ideal candidate must exhibit the following behavioral traits: excellent problem-solving and interpersonal communication skills; a strong desire to learn and share knowledge with others; being inquisitive, innovative, and a team player with a strong focus on quality workmanship; troubleshooting skills and root cause analysis for performance issues; the ability to learn, adopt, and implement new skills to drive innovation and excellence; and the ability to work with cross-functional teams in a dynamic environment. Qualifications Minimum Qualifications: A bachelor's degree with 4+ years of experience in a related field. Experience building and optimizing big data pipelines. Experience handling unstructured data. Experience with data transformations, structures, metadata, and workload management. Experience with big data tools: Spark, Kafka, NiFi, etc. Experience with programming languages such as Python, C#, and .NET. Experience with relational SQL and NoSQL databases. Experience in leveraging open-source packages. Experience with cloud-native skills such as Docker, Kubernetes, Rancher, etc. Good to have skills: experience with semiconductor manufacturing; experience with data engineering on cloud; experience in developing AI/ML solutions. Inside this Business Group: As the world's largest chip manufacturer, Intel strives to make every facet of semiconductor manufacturing state-of-the-art, from semiconductor process development and manufacturing, through yield improvement to packaging, final test and optimization, and world-class supply chain and facilities support. Employees in the Technology Development and Manufacturing Group are part of a worldwide network of design, development, manufacturing, and assembly/test facilities, all focused on utilizing the power of Moore's Law to bring smart, connected devices to every person on Earth.
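To make the pipeline-building responsibilities above concrete, here is a hedged PySpark sketch of one small batch ETL step (read, clean, aggregate, write). The paths and column names are hypothetical; a real manufacturing pipeline would add the schema enforcement, automated testing, and monitoring the posting calls for.

```python
# Minimal PySpark batch ETL sketch: read CSV, clean, aggregate, write Parquet.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("mfg-etl-sketch").getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("/data/raw/sensor_readings.csv")  # hypothetical input path
)

# Clean: drop null readings and cast to a numeric type.
clean = (
    raw.filter(F.col("reading").isNotNull())
    .withColumn("reading", F.col("reading").cast("double"))
)

# Aggregate per tool per day for downstream analytics.
daily = clean.groupBy("tool_id", "reading_date").agg(
    F.avg("reading").alias("avg_reading"),
    F.count("*").alias("samples"),
)

# Parquet keeps the output columnar and splittable for downstream jobs.
daily.write.mode("overwrite").parquet("/data/curated/daily_readings")
```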
Posted 3 months ago
2 - 5 years
4 - 7 Lacs
Hyderabad
Work from Office
Hands-on experience in Apache NiFi for data integration and workflow automation. Senior-level Java programming knowledge, including experience in developing custom NiFi processors and extensions. Strong knowledge of cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., S3, EC2, Lambda, Azure Data Lake, etc.). Proficiency in Linux environments, including shell scripting and system administration. Experience with Apache Kafka for real-time data streaming and event-driven architectures. Hands-on experience with MongoDB for NoSQL data management. Familiarity with GoldenGate for real-time data replication and integration. Experience in performance tuning and optimization of NiFi workflows. Solid understanding of data engineering concepts, including ETL/ELT, data lakes, and data warehouses. Ability to work independently and deliver results in a fast-paced, high-pressure environment. Excellent problem-solving, debugging, and analytical skills. Good-to-Have Skills: Experience with containerization tools like Docker and Kubernetes. Knowledge of DevOps practices and CI/CD pipelines. Familiarity with big data technologies like Hadoop, Spark, or Kafka. Understanding of security best practices for data pipelines and cloud environments.
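Since this role pairs Kafka streaming with MongoDB, here is a minimal hedged sketch using the kafka-python and pymongo client libraries (assumed choices); the broker address, topic, and database/collection names are hypothetical.

```python
# Minimal sketch: consume JSON events from Kafka and upsert them into MongoDB.
# Broker, topic, and collection names are hypothetical.
import json

from kafka import KafkaConsumer
from pymongo import MongoClient

consumer = KafkaConsumer(
    "events",                                # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

mongo = MongoClient("mongodb://localhost:27017")
collection = mongo["pipeline"]["events"]     # hypothetical db/collection

for message in consumer:
    doc = message.value
    # Idempotent write: replay-safe if the consumer restarts and re-reads offsets.
    collection.replace_one({"_id": doc["id"]}, doc, upsert=True)
```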
Posted 3 months ago
7 - 10 years
10 - 18 Lacs
Chennai
Work from Office
Experience: 7+ years of overall experience in data engineering roles, with at least 2+ years in a leadership capacity. Proven expertise in designing and deploying large-scale data systems and pipelines. Technical Skills: Proficiency in Python, Java, or Scala for data engineering tasks. Strong SQL skills for querying and optimizing large datasets. Experience with data processing frameworks like Apache Spark, Beam, or Flink. Hands-on experience with ETL tools like Apache NiFi, dbt, or Talend. Experience in pub/sub and stream processing using Kafka, Kinesis, or the like. Cloud Platforms: Expertise in one or more cloud platforms (AWS, Azure, GCP) with a focus on data-related services. Data Modeling: Strong understanding of data modeling techniques (dimensional modeling, star/snowflake schemas). Collaboration: Proven ability to work with cross-functional teams and translate business requirements into technical solutions. Preferred Skills: Familiarity with data visualization tools like Tableau or Power BI to support reporting teams. Knowledge of MLOps pipelines and collaboration with data scientists.
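For the pub/sub and stream-processing requirement, a hedged Spark Structured Streaming sketch reading from Kafka is shown below; the broker, topic, and checkpoint location are assumptions, and the console sink stands in for a real downstream store.

```python
# Minimal Spark Structured Streaming sketch: Kafka source to console sink.
# Needs the spark-sql-kafka connector on the Spark classpath.
# Broker, topic, and checkpoint path are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "clicks")           # hypothetical topic
    .load()
)

# Kafka delivers key/value as binary; cast before parsing or aggregating.
counts = (
    stream.select(F.col("value").cast("string").alias("event"))
    .groupBy("event")
    .count()
)

query = (
    counts.writeStream.outputMode("complete")
    .format("console")
    .option("checkpointLocation", "/tmp/stream-sketch-chk")
    .start()
)
query.awaitTermination()
```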
Posted 3 months ago
0 - 2 years
4 - 8 Lacs
Ahmedabad
Work from Office
Project Role: Data Engineer. Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills: Data Engineering. Good to have skills: NA. Minimum 0-2 year(s) of experience is required. Educational Qualification: 15 years full time education. Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will play a crucial role in managing and optimizing data infrastructure to support the organization's data needs. Roles & Responsibilities: Expected to build knowledge and support the team. Participate in problem-solving discussions. Design and develop data pipelines to extract, transform, and load data from various sources. Ensure data quality and integrity by implementing data validation and cleansing processes. Collaborate with cross-functional teams to understand data requirements and design efficient data solutions. Optimize and tune data pipelines for performance and scalability. Troubleshoot and resolve data-related issues and incidents. Stay updated with the latest trends and technologies in data engineering and recommend improvements to existing processes. Additional responsibility: Mentor and guide junior professionals in data engineering best practices. Professional & Technical Skills: Must to Have Skills: Proficiency in Data Engineering. Strong understanding of data modeling and database design principles. Experience with ETL tools such as Apache NiFi or Talend. Familiarity with cloud platforms such as AWS or Azure. Good to Have Skills: Experience with big data technologies such as Hadoop or Spark. Knowledge of data warehousing concepts and techniques. Experience with SQL and NoSQL databases. Solid understanding of data governance and security principles. Additional Information: The candidate should have a minimum of 0-2 years of experience in Data Engineering. This position is based at our Ahmedabad office. A 15 years full-time education is required. Qualification: 15 years full time education.
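To ground the validation-and-cleansing responsibility for an entry-level engineer, here is a minimal pandas sketch of a validate-then-load step; the file names, columns, and rules are hypothetical.

```python
# Minimal ETL sketch with a data-quality gate: extract, validate, transform, load.
# File names, columns, and validation rules are hypothetical.
import pandas as pd

df = pd.read_csv("customers_raw.csv")          # extract (hypothetical source)

# Validate: reject rows that would corrupt downstream joins or reports.
valid = df.dropna(subset=["customer_id"]).drop_duplicates(subset=["customer_id"])
rejected = len(df) - len(valid)

# Transform: normalize a text column before loading.
valid = valid.assign(email=valid["email"].str.strip().str.lower())

valid.to_parquet("customers_clean.parquet")    # load (hypothetical target)
print(f"loaded {len(valid)} rows, rejected {rejected}")
```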
Posted 3 months ago
10 - 15 years
13 - 18 Lacs
Coimbatore
Work from Office
Project Role: Application Architect. Project Role Description: Provide functional and/or technical expertise to plan, analyze, define and support the delivery of future functional and technical capabilities for an application or group of applications. Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests. Must have skills: Cognite Data Fusion. Good to have skills: No Technology Specialization. Minimum 10+ year(s) of experience is required. Educational Qualification: Minimum 15 years of full time education. Key Responsibilities: 1. Architect and design Operations Technology data for manufacturing and product development industries. 2. Interact with business and technology stakeholders for industry data model design and overall data landscape architecture. 3. Design and implement modern manufacturing data technology platforms. 4. Work closely with deal teams to shape and solution data programs, and continuously engage with key client executives and share solution details. Technical Experience: 1. Deep expertise in applying analytics or implementing Digital Twin for two or more industries using Cognite CDF or similar products. 2. Ensuring the right data transformation and source connector technology is used in the right way, e.g., Databricks, Azure, GCP, AWS, IoT, Kafka, NiFi, MQTT, etc. 3. Handling IoT, streaming analytics, and IT/OT data. 4. Well versed with OT data quality, data modelling, data governance, and data contextualization. 5. Advise in the planning, design, management, and execution. Professional Attributes: 1. Good communication skills. 2. Good analytical skills. Qualification: Minimum 15 years of full time education.
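Because the role centers on OT/IoT data (MQTT, OPC UA) feeding a platform like Cognite CDF, here is a hedged paho-mqtt sketch that subscribes to a sensor topic; the broker address and topic are hypothetical, and the handler just prints instead of contextualizing readings into CDF.

```python
# Minimal MQTT ingestion sketch with paho-mqtt (assumed client library, 1.x API).
# On paho-mqtt 2.x, pass mqtt.CallbackAPIVersion.VERSION1 to mqtt.Client().
# Broker address and topic are hypothetical.
import paho.mqtt.client as mqtt


def on_connect(client, userdata, flags, rc):
    client.subscribe("plant/line1/temperature")   # hypothetical OT topic


def on_message(client, userdata, msg):
    # A real flow would contextualize and forward this into a platform like CDF.
    print(f"{msg.topic}: {msg.payload.decode()}")


client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)        # hypothetical broker
client.loop_forever()
```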
Posted 3 months ago
3 - 6 years
20 - 25 Lacs
Pune, Bengaluru, Hyderabad
Hybrid
Preferred candidate profile: Mandatory: 3+ years' hands-on working experience with Apache NiFi and Apache Kafka. Good to have: Google Cloud BigQuery scripting skills or Snowflake; AWS or Cloud SQL knowledge. Working experience in Scala/Java is an added advantage. Mandatory: SQL and PL/SQL scripting experience. Mandatory: any one of Python, Linux, or Unix skills. Should be a quick learner and have good troubleshooting skills. Finance domain knowledge, for delivering requirements in a short time, will be an added advantage.
Posted 3 months ago
4 - 5 years
10 - 15 Lacs
Navi Mumbai, Bhopal, Gurgaon
Work from Office
Role & responsibilities: Design and maintain ETL workflows for high-volume SMS traffic. Build real-time and batch processing pipelines using Kafka/Spark/Flink/Hadoop. Optimize SQL/NoSQL databases for high-speed data retrieval. Implement data encryption, masking, and access control for security and compliance. Automate data ingestion and processing using Python, Airflow, or shell scripting. ETL tools: design and maintain ETL using Python, Apache NiFi, Airflow, Talend, AWS Glue, or any orchestration tool. SQL query tuning and database performance optimization. OLAP & OLTP systems, ensuring efficient data lakes and data warehousing. Big data & streaming: Kafka/Spark/Flink/Hadoop or similar. Databases: MySQL, PostgreSQL, MongoDB, Cassandra, Redis, or similar. Programming: Python, SQL, Scala, Java. Education: B.Tech/M.Tech/MCA. Experience: 4-5 relevant years. Preferred Qualifications: Experience in cloud-based data services (AWS, GCP, Azure). Knowledge of AI-driven data pipelines and real-time fraud detection. Certifications in Data Engineering (Google, AWS, etc.). This will be 100% work from office.
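The masking requirement can be illustrated with a small standard-library sketch: deterministic, keyed hashing of subscriber numbers so SMS records stay joinable and countable without exposing the raw MSISDN. The salt handling and field names are hypothetical.

```python
# Minimal masking sketch: deterministic, keyed hashing of a phone number
# so records stay joinable without exposing the raw value. Standard library
# only; the salt source and field names are hypothetical.
import hashlib
import hmac

SECRET_SALT = b"rotate-me-via-a-secrets-manager"  # hypothetical; never hardcode in production


def mask_msisdn(msisdn: str) -> str:
    # HMAC (not a bare hash) so the mapping can't be brute-forced without the key.
    return hmac.new(SECRET_SALT, msisdn.encode(), hashlib.sha256).hexdigest()[:16]


record = {"msisdn": "919876543210", "status": "DELIVERED"}
record["msisdn"] = mask_msisdn(record["msisdn"])
print(record)  # the same input always maps to the same token
```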
Posted 3 months ago
8 - 13 years
18 - 27 Lacs
Bengaluru
Work from Office
About Persistent We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4900 new employees in the past year, bringing our total employee count to over 23,500+ people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details please login to www.persistent.com About The Position We are looking for a Data Architect with creativity and results-oriented critical thinking to meet complex challenges and develop new strategies for acquiring, analyzing, modeling and storing data. In this role you will guide the company into the future and utilize the latest technology and information management methodologies to meet our requirements for effective logical data modeling, metadata management and database warehouse domains. You will be working with experts in a variety of industries, including computer science and software development, as well as department heads and senior executives to integrate new technologies and refine system performance. We reward dedicated performance with exceptional pay and benefits, as well as tuition reimbursement and career growth opportunities. 
What You'll Do Define data retention policies Monitor performance and advise any necessary infrastructure changes Mentor junior engineers and work with other architects to deliver best-in-class solutions Implement ETL/ELT processes and orchestration of data flows Recommend and drive adoption of newer tools and techniques from the big data ecosystem Expertise You'll Bring 10+ years in industry, building and managing big data systems Building, monitoring, and optimizing reliable and cost-efficient pipelines for SaaS is a must Building stream-processing systems, using solutions such as Storm or Spark Streaming Dealing and integrating with data storage systems like SQL and NoSQL databases, file systems, and object storage like S3 Reporting solutions like Pentaho, Power BI, Looker, including customizations Developing high-concurrency, high-performance applications that are database-intensive and have interactive, browser-based clients Working with SaaS-based data management products will be an added advantage Proficiency and expertise in Cloudera / Hortonworks Spark, HDF, and NiFi RDBMS, NoSQL like Vertica, Redshift, data modelling with physical design and SQL performance optimization Messaging systems: JMS, ActiveMQ, RabbitMQ, Kafka Big data technology like Hadoop, Spark, NoSQL-based data-warehousing solutions Data warehousing, reporting including customization, Hadoop, Spark, Kafka, Core Java, Spring/IoC, design patterns Big data querying tools, such as Pig, Hive, and Impala Open-source technologies and databases (SQL & NoSQL) Proficient understanding of distributed computing principles Ability to solve any ongoing issues with operating the cluster Scale data pipelines using open-source components and AWS services Cloud (AWS), provisioning, capacity planning and performance analysis at various levels Web-based SOA architecture implementation with design pattern experience will be an added advantage Benefits Competitive salary and benefits package Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications Opportunity to work with cutting-edge technologies Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards Annual health check-ups Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents Inclusive Environment • We offer hybrid work options and flexible working hours to accommodate various needs and preferences. • Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. Let's unleash your full potential. See Beyond, Rise Above.
Posted 3 months ago
8 - 13 years
18 - 30 Lacs
Pune
Work from Office
About Persistent We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4900 new employees in the past year, bringing our total employee count to over 23,500+ people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details please login to www.persistent.com About The Position We are looking for a Data Architect with creativity and results-oriented critical thinking to meet complex challenges and develop new strategies for acquiring, analyzing, modeling and storing data. In this role you will guide the company into the future and utilize the latest technology and information management methodologies to meet our requirements for effective logical data modeling, metadata management and database warehouse domains. You will be working with experts in a variety of industries, including computer science and software development, as well as department heads and senior executives to integrate new technologies and refine system performance. We reward dedicated performance with exceptional pay and benefits, as well as tuition reimbursement and career growth opportunities. 
What You'll Do Define data retention policies Monitor performance and advise any necessary infrastructure changes Mentor junior engineers and work with other architects to deliver best-in-class solutions Implement ETL/ELT processes and orchestration of data flows Recommend and drive adoption of newer tools and techniques from the big data ecosystem Expertise You'll Bring 10+ years in industry, building and managing big data systems Building, monitoring, and optimizing reliable and cost-efficient pipelines for SaaS is a must Building stream-processing systems, using solutions such as Storm or Spark Streaming Dealing and integrating with data storage systems like SQL and NoSQL databases, file systems, and object storage like S3 Reporting solutions like Pentaho, Power BI, Looker, including customizations Developing high-concurrency, high-performance applications that are database-intensive and have interactive, browser-based clients Working with SaaS-based data management products will be an added advantage Proficiency and expertise in Cloudera / Hortonworks Spark, HDF, and NiFi RDBMS, NoSQL like Vertica, Redshift, data modelling with physical design and SQL performance optimization Messaging systems: JMS, ActiveMQ, RabbitMQ, Kafka Big data technology like Hadoop, Spark, NoSQL-based data-warehousing solutions Data warehousing, reporting including customization, Hadoop, Spark, Kafka, Core Java, Spring/IoC, design patterns Big data querying tools, such as Pig, Hive, and Impala Open-source technologies and databases (SQL & NoSQL) Proficient understanding of distributed computing principles Ability to solve any ongoing issues with operating the cluster Scale data pipelines using open-source components and AWS services Cloud (AWS), provisioning, capacity planning and performance analysis at various levels Web-based SOA architecture implementation with design pattern experience will be an added advantage Benefits Competitive salary and benefits package Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications Opportunity to work with cutting-edge technologies Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards Annual health check-ups Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents Inclusive Environment • We offer hybrid work options and flexible working hours to accommodate various needs and preferences. • Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. Let's unleash your full potential. See Beyond, Rise Above.
Posted 3 months ago
8 - 13 years
18 - 25 Lacs
Hyderabad
Work from Office
About Persistent We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4900 new employees in the past year, bringing our total employee count to over 23,500+ people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details please login to www.persistent.com About The Position We are looking for a Data Architect with creativity and results-oriented critical thinking to meet complex challenges and develop new strategies for acquiring, analyzing, modeling and storing data. In this role you will guide the company into the future and utilize the latest technology and information management methodologies to meet our requirements for effective logical data modeling, metadata management and database warehouse domains. You will be working with experts in a variety of industries, including computer science and software development, as well as department heads and senior executives to integrate new technologies and refine system performance. We reward dedicated performance with exceptional pay and benefits, as well as tuition reimbursement and career growth opportunities. 
What You'll Do Define data retention policies Monitor performance and advise any necessary infrastructure changes Mentor junior engineers and work with other architects to deliver best-in-class solutions Implement ETL/ELT processes and orchestration of data flows Recommend and drive adoption of newer tools and techniques from the big data ecosystem Expertise You'll Bring 10+ years in industry, building and managing big data systems Building, monitoring, and optimizing reliable and cost-efficient pipelines for SaaS is a must Building stream-processing systems, using solutions such as Storm or Spark Streaming Dealing and integrating with data storage systems like SQL and NoSQL databases, file systems, and object storage like S3 Reporting solutions like Pentaho, Power BI, Looker, including customizations Developing high-concurrency, high-performance applications that are database-intensive and have interactive, browser-based clients Working with SaaS-based data management products will be an added advantage Proficiency and expertise in Cloudera / Hortonworks Spark, HDF, and NiFi RDBMS, NoSQL like Vertica, Redshift, data modelling with physical design and SQL performance optimization Messaging systems: JMS, ActiveMQ, RabbitMQ, Kafka Big data technology like Hadoop, Spark, NoSQL-based data-warehousing solutions Data warehousing, reporting including customization, Hadoop, Spark, Kafka, Core Java, Spring/IoC, design patterns Big data querying tools, such as Pig, Hive, and Impala Open-source technologies and databases (SQL & NoSQL) Proficient understanding of distributed computing principles Ability to solve any ongoing issues with operating the cluster Scale data pipelines using open-source components and AWS services Cloud (AWS), provisioning, capacity planning and performance analysis at various levels Web-based SOA architecture implementation with design pattern experience will be an added advantage Benefits Competitive salary and benefits package Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications Opportunity to work with cutting-edge technologies Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards Annual health check-ups Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents Inclusive Environment • We offer hybrid work options and flexible working hours to accommodate various needs and preferences. • Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. Let's unleash your full potential. See Beyond, Rise Above.
Posted 3 months ago
4 - 8 years
10 - 20 Lacs
Hyderabad
Hybrid
Preferred candidate profile: Mandatory: 3+ years' hands-on working experience with Apache NiFi and Apache Kafka. Good to have: Google Cloud BigQuery scripting skills or Snowflake; AWS or Cloud SQL knowledge. Working experience in Scala/Java is an added advantage. Mandatory: SQL and PL/SQL scripting experience. Mandatory: any one of Python, Linux, or Unix skills.
Posted 3 months ago
2 - 7 years
4 - 9 Lacs
Chennai
Work from Office
Project Role: Data Engineer. Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills: Google BigQuery. Good to have skills: NA. Minimum 2 year(s) of experience is required. Educational Qualification: Must be graduate. Summary: As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems using Google BigQuery. Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing using Google BigQuery. Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems. Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs. Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. Implement data security and privacy measures to protect sensitive data. Professional & Technical Skills: Must to Have Skills: Proficiency in Google BigQuery. Good to Have Skills: Experience with ETL tools such as Apache NiFi or Talend. Strong understanding of data modeling, data warehousing, and data integration concepts. Experience with SQL and NoSQL databases. Familiarity with cloud computing platforms such as Google Cloud Platform or AWS. Experience with data security and privacy measures. Additional Information: The candidate should have a minimum of 2 years of experience in Google BigQuery. The ideal candidate will possess a strong educational background in computer science, information technology, or a related field, along with a proven track record of delivering impactful data-driven solutions. This position is based at our Bengaluru office. Qualifications: Must be graduate.
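A minimal google-cloud-bigquery sketch for the pipeline work this role describes is shown below; the dataset and table names are hypothetical, and authentication is assumed to come from application default credentials.

```python
# Minimal BigQuery sketch: run a parameterized query and print the results.
# Dataset/table names are hypothetical; auth uses application default credentials.
from google.cloud import bigquery

client = bigquery.Client()  # picks up the project from the environment

# Parameterized queries avoid SQL injection and ease reuse across pipelines.
job = client.query(
    "SELECT status, COUNT(*) AS n FROM `my_dataset.orders` "
    "WHERE order_date >= @since GROUP BY status",
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("since", "DATE", "2024-01-01")
        ]
    ),
)
for row in job.result():
    print(row.status, row.n)
```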
Posted 3 months ago
10 - 15 years
25 - 40 Lacs
Mumbai
Work from Office
Overview of the Company: Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries. Team Overview: The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution! About the role: Title: Lead Data Engineer. Location: Mumbai. Responsibilities: End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow. Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the evolution of the team's data pipeline framework. Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise. Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices. Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights. Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth. Qualification Details: Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field. Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts. Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.). Database Expertise: Excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus. End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks including streaming real-time data. Cloud Expertise: Knowledge of cloud technologies like Azure HDInsight, Synapse, Event Hubs and GCP Dataproc, Dataflow, BigQuery.
CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation. Desired Skills & Attributes: Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems, troubleshoot data pipeline issues effectively. Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders). Continuous Learning & Adaptability: A demonstrated passion for staying up-to-date with emerging data technologies and a willingness to adapt to new tools.
Posted 3 months ago
4 - 9 years
15 - 19 Lacs
Chennai, Pune, Delhi NCR
Work from Office
We are looking for "Senior NiFi ETL Developer" with Minimum 4 years experience Contact- Yashra (95001 81847) Required Candidate profile 4+ years of hands-on experience in Apache NiFi for data integration and workflow automation.
Posted 3 months ago
10 - 15 years
35 - 40 Lacs
Hyderabad
Work from Office
Responsibilities: 1. Integration Strategy & Architecture: Define the enterprise integration strategy, aligning with business goals and IT roadmaps. Design scalable, resilient, and secure integration architectures using industry best practices. Develop API-first and event-driven integration strategies. Establish governance frameworks, integration patterns, and best practices. 2. Technology Selection & Implementation: Evaluate and recommend the right integration technologies, such as: Middleware & ESB: TIBCO, MuleSoft, WSO2, IBM Integration Bus. Event Streaming & Messaging: Apache Kafka, RabbitMQ, IBM MQ. API Management: Apigee, Kong, AWS API Gateway, MuleSoft. ETL & Data Integration: Informatica, Talend, Apache NiFi. iPaaS (Cloud Integration): Dell Boomi, Azure Logic Apps, Workato. Lead the implementation and configuration of these platforms. 3. API & Microservices Architecture: Design and oversee API-led integration strategies. Implement RESTful APIs, GraphQL, and gRPC for real-time and batch integrations. Define API security standards (OAuth, JWT, OpenID Connect, API Gateway). Establish API versioning, governance, and lifecycle management. 4. Enterprise Messaging & Event-Driven Architecture (EDA): Design real-time, event-driven architectures using: Apache Kafka for streaming and pub/sub messaging; RabbitMQ, IBM MQ, TIBCO EMS for message queuing; event-driven microservices using Kafka Streams, Flink, or Spark Streaming. Ensure event sourcing, CQRS, and eventual consistency in distributed systems. 5. Cloud & Hybrid Integration: Develop hybrid integration strategies across on-premises, cloud, and SaaS applications. Utilize cloud-native integration tools like AWS Step Functions, Azure Event Grid, Google Cloud Pub/Sub. Integrate enterprise applications (ERP, CRM, HRMS) across SAP, Oracle, Salesforce, Workday. 6. Security & Compliance: Ensure secure integration practices, including encryption, authentication, and authorization. Implement zero-trust security models for APIs and data flows. Maintain compliance with industry regulations (GDPR, HIPAA, SOC 2). 7. Governance, Monitoring & Optimization: Establish enterprise integration governance frameworks. Use observability tools for real-time monitoring (Datadog, Splunk, New Relic). Optimize integration performance and troubleshoot bottlenecks. 8. Leadership & Collaboration: Collaborate with business and IT stakeholders to understand integration requirements. Work with DevOps and cloud teams to ensure CI/CD pipelines for integration. Provide technical guidance to developers, architects, and integration engineers. Qualifications: Technical Skills: Candidates should have 10+ years of experience. Expertise in integration platforms: Informatica, TIBCO, MuleSoft, WSO2, Dell Boomi. Strong understanding of API management and microservices. Experience with enterprise messaging and streaming (Kafka, RabbitMQ, IBM MQ, Azure Event Hubs). Knowledge of ETL and data pipelines (Informatica, Talend, Apache NiFi, AWS Glue). Experience in cloud and hybrid integration (AWS, Azure, GCP, OCI). Hands-on with security and compliance (OAuth2, JWT, SAML, API security, zero trust). Soft Skills: Strategic thinking and architecture design. Problem-solving and troubleshooting. Collaboration and stakeholder management. Agility in digital transformation and cloud migration.
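On the API security side (OAuth/JWT), here is a hedged PyJWT sketch of issuing and verifying a short-lived token; the secret, claims, and algorithm choice are illustrative assumptions rather than a recommended policy.

```python
# Minimal JWT sketch with PyJWT (assumed library): issue and verify a token.
# Secret, claims, and lifetime are illustrative only.
import datetime

import jwt

SECRET = "use-a-real-key-from-a-vault"  # hypothetical HS256 shared secret

token = jwt.encode(
    {
        "sub": "service-a",
        "scope": "orders:read",
        "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=15),
    },
    SECRET,
    algorithm="HS256",
)

# Verification checks both the signature and the exp claim.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"], claims["scope"])
```

In a production gateway the same pattern typically uses asymmetric keys (RS256) so services can verify tokens without holding the signing key.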
Posted 3 months ago
4 - 9 years
6 - 15 Lacs
Bengaluru
Work from Office
Job Purpose and Impact: As a Data Engineer at Cargill you work across the full stack to design, develop and operate high-performance and data-centric solutions using our comprehensive and modern data capabilities and platforms. You will play a critical role in enabling analytical insights and process efficiencies for Cargill's diverse and complex business environments. You will work in a small team that shares your passion for building innovative, resilient, and high-quality solutions while sharing, learning and growing together. Key Accountabilities: Collaborate with business stakeholders, product owners and across your team on product or solution designs. Develop robust, scalable and sustainable data products or solutions utilizing cloud-based technologies. Provide moderately complex technical support through all phases of the product or solution life cycle. Perform data analysis, handle data modeling, and configure and develop data pipelines to move and optimize data assets. Build moderately complex prototypes to test new concepts and provide ideas on reusable frameworks, components and data products or solutions and help promote adoption of new technologies. Independently solve moderately complex issues with minimal supervision, while escalating more complex issues to appropriate staff. Other duties as assigned. Qualifications: MINIMUM QUALIFICATIONS: Bachelor's degree in a related field or equivalent experience. Minimum of two years of related work experience. Other minimum qualifications may apply. PREFERRED QUALIFICATIONS: Experience developing modern data architectures, including data warehouses, data lakes, data meshes, hubs and associated capabilities including ingestion, governance, modeling, observability and more. Experience with data collection and ingestion capabilities, including AWS Glue, Kafka Connect and others. Experience with data storage and management of large, heterogeneous datasets, including formats, structures, and cataloging with such tools as Iceberg, Parquet, Avro, ORC, S3, HDFS, Hive, Kudu or others. Experience with transformation and modeling tools, including SQL-based transformation frameworks, orchestration and quality frameworks including dbt, Apache NiFi, Talend, AWS Glue, Airflow, Dagster, Great Expectations, Oozie and others. Experience working in big data environments including tools such as Hadoop and Spark. Experience working in cloud platforms including AWS, GCP or Azure. Experience with streaming and stream integration or middleware platforms, tools, and architectures such as Kafka, Flink, JMS, or Kinesis. Strong programming knowledge of SQL, Python, R, Java, Scala or equivalent. Proficiency in engineering tooling including Docker, Git, and container orchestration services. Strong experience working in DevOps models with a demonstrable understanding of associated best practices for code management, continuous integration, and deployment strategies. Experience and knowledge of data governance considerations, including quality, privacy, and security, and the associated implications for data product development and consumption.
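Given the posting's emphasis on columnar formats (Iceberg, Parquet, Avro, ORC), a small pyarrow sketch of writing and reading Parquet is included below; the schema and file name are hypothetical.

```python
# Minimal pyarrow sketch: write a table to Parquet and read one column back.
# Schema and file name are hypothetical.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table(
    {
        "order_id": [1, 2, 3],
        "amount": [19.99, 5.00, 42.50],
        "region": ["IN", "US", "IN"],
    }
)

pq.write_table(table, "orders.parquet")  # columnar, compressed on disk

# Column pruning: read only what downstream analysis needs.
amounts = pq.read_table("orders.parquet", columns=["amount"])
print(amounts.to_pydict())
```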
Posted 3 months ago
7 - 9 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer. Project Role Description: Design, build and configure applications to meet business process and application requirements. Must have skills: Cloudera Data Platform. Good to have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: Graduation. Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using Cloudera Data Platform. Your typical day will involve collaborating with cross-functional teams, developing and deploying applications, and ensuring their smooth functioning. Roles & Responsibilities: Design, build, and configure applications using Cloudera Data Platform to meet business process and application requirements. Collaborate with cross-functional teams to identify and prioritize application requirements. Develop and deploy applications, ensuring their smooth functioning and adherence to quality standards. Troubleshoot and debug applications, identifying and resolving technical issues in a timely manner. Stay updated with the latest advancements in Cloudera Data Platform and related technologies, integrating innovative approaches for sustained competitive advantage. Professional & Technical Skills: Must to Have Skills: Expertise in Cloudera Data Platform. Good to Have Skills: Experience with Hadoop, Spark, and other big data technologies. Strong understanding of data engineering concepts and principles. Experience with application development using Java, Python, or other programming languages. Solid grasp of database technologies, including SQL and NoSQL databases. Experience with data integration and ETL tools such as Apache NiFi or Talend. Additional Information: The candidate should have a minimum of 7.5 years of experience in Cloudera Data Platform. The ideal candidate will possess a strong educational background in computer science, software engineering, or a related field, along with a proven track record of delivering impactful data-driven solutions. This position is based at our Bengaluru office.
Posted 3 months ago
5 - 10 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Application Developer. Project Role Description: Design, build and configure applications to meet business process and application requirements. Must have skills: Databricks Unified Data Analytics Platform. Good to have skills: Data Warehouse ETL Testing. Minimum 5 year(s) of experience is required. Educational Qualification: Full Time Education. Responsibilities: 1. Develop and maintain Databricks notebooks for data processing and analysis. 2. Design and implement scalable data pipelines using Databricks. 3. Optimize and tune Databricks jobs for performance and efficiency. 4. Develop and maintain ETL processes for extracting, transforming, and loading data from various sources to data warehouses. 5. Collaborate with business analysts and data scientists to understand data requirements and design effective ETL solutions. 6. Optimize and tune ETL processes for performance and scalability. 7. Ensure security, scalability, and reliability of infrastructure through Terraform best practices. 8. Design, implement, and maintain infrastructure using Terraform. 9. Troubleshoot and resolve issues related to infrastructure deployments. 10. Design and implement GraphQL APIs to support data queries and mutations. 11. Collaborate with front-end and back-end developers to integrate GraphQL into application architectures. 12. Optimize GraphQL queries for performance and efficiency. 13. Ensure the security and scalability of GraphQL APIs. 14. Collaborate with cross-functional teams to understand data requirements and provide technical solutions. 15. Troubleshoot and resolve issues related to data processing, integration, and transformation. Technical Skills: 1. Proficiency in Databricks for big data processing and analytics. 2. Strong programming skills in languages such as Python or Scala; experience as an ETL Developer with expertise in tools such as Apache NiFi, Apache Airflow, or similar. 3. Strong SQL skills for data manipulation and transformation. 4. Experience working with data warehouses and data modeling. 5. Familiarity with data quality and validation processes. 6. Experience with data modeling, ETL processes, and data warehousing. 7. Familiarity with cloud platforms such as AWS, Azure, or GCP. 8. Proven experience with Terraform and infrastructure as code. 9. Strong scripting skills (e.g., Bash, Python) for automation tasks. 10. Understanding of networking, security, and best practices in infrastructure design. 11. Proven experience as a developer with a focus on GraphQL. 12. Strong understanding of GraphQL concepts and best practices. 13. Proficiency in server-side languages such as Node.js, Python, or Java. 14. Experience with GraphQL client libraries and tools. Qualification: Full Time Education.
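Since the role mixes data pipelines with GraphQL APIs, here is a hedged sketch of calling a GraphQL endpoint from Python with requests (GraphQL over HTTP is just a POST of a query document); the endpoint URL and schema fields are hypothetical.

```python
# Minimal GraphQL-over-HTTP sketch with requests (assumed library).
# Endpoint and schema fields are hypothetical.
import requests

ENDPOINT = "https://api.example.com/graphql"  # hypothetical endpoint

query = """
query Orders($status: String!) {
  orders(status: $status) {
    id
    amount
  }
}
"""

resp = requests.post(
    ENDPOINT,
    json={"query": query, "variables": {"status": "OPEN"}},
    timeout=10,
)
resp.raise_for_status()

payload = resp.json()
# GraphQL returns errors in-band alongside (partial) data; check both.
if payload.get("errors"):
    raise RuntimeError(payload["errors"])
print(payload["data"]["orders"])
```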
Posted 3 months ago
2 - 5 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer. Project Role Description: Design, build and configure applications to meet business process and application requirements. Must have skills: Node.js. Good to have skills: NA. Educational Qualification: Technical graduate with relevant experience. Key Responsibilities: Design, develop, and maintain applications using Node.js, NoSQL, Spring Boot, and Apache NiFi. Collaborate with cross-functional teams to identify and prioritize application requirements. Develop and maintain technical documentation for applications. Troubleshoot and debug applications to ensure optimal performance and functionality. Technical Experience: Proficient in Node.js. Experience with NoSQL, Spring Boot and Apache NiFi. Proficient in React.js for building modern and interactive UIs. Understanding of JavaScript, ES6+ features and best practices. Experience in RESTful APIs, API development and design patterns. Familiar with microservices architecture principles and patterns. Experience with database systems such as MySQL, PostgreSQL, MongoDB, etc. Knowledge of HTML and CSS for web development, with experience in responsive design and cross-browser compatibility. Professional Attributes: Minimum of 7 years of professional experience in full stack development. Excellent problem-solving skills and attention to detail. Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment. Qualifications: Technical graduate with relevant experience.
Posted 3 months ago
3 - 5 years
8 - 18 Lacs
Pune, Bengaluru, Hyderabad
Hybrid
MarkLogic Developer. Mandatory Skills: MarkLogic, minimum 3 years' experience. Location: Pan India (hybrid mode of working). Only immediate joiners apply! Role & responsibilities: 3-5 years' experience. Proficient in MarkLogic, XQuery, Optic queries and semantic technologies. Hands-on experience with the Data Hub Framework. Hands-on experience with Apache NiFi. Should be comfortable using cloud platforms, specifically Azure. Strong experience in designing and developing REST APIs. Knowledge of tools such as Git Bash, Gradle and any one IDE. Experience in working as part of a DevOps team. Good communication. Perks and benefits: Competitive salary and performance-based bonuses. PF, medical insurance, statutory benefits. Professional development opportunities.
Posted 3 months ago
12 - 17 years
14 - 19 Lacs
Pune, Bengaluru
Work from Office
Project Role: Application Architect. Project Role Description: Provide functional and/or technical expertise to plan, analyze, define and support the delivery of future functional and technical capabilities for an application or group of applications. Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests. Must have skills: Manufacturing Operations. Good to have skills: NA. Minimum 12 year(s) of experience is required. Educational Qualification: Minimum 15 years of full time education. Job Title: Industrial Data Architect. Summary: We are seeking a highly skilled and experienced Industrial Data Architect with a proven track record of providing functional and/or technical expertise to plan, analyse, define and support the delivery of future functional and technical capabilities for an application or group of applications. Well versed with OT data quality, data modelling, data governance, data contextualization, database design, and data warehousing. Must have skills: domain knowledge in Manufacturing IT/OT in one or more of the following verticals: Automotive, Discrete Manufacturing, Consumer Packaged Goods, Life Science. Key Responsibilities: The Industrial Data Architect will be responsible for developing and overseeing industrial data architecture strategies to support advanced data analytics, business intelligence, and machine learning initiatives. This role involves collaborating with various teams to design and implement efficient, scalable, and secure data solutions for industrial operations, focused on designing, building, and managing the data architecture of industrial systems. Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests. Own the offerings and assets on key components of the data supply chain: data governance, curation, data quality and master data management, data integration, data replication, and data virtualization. Create scalable and secure data structures, integrating with existing systems and ensuring efficient data flow. Qualifications: Data Modeling and Architecture: proficiency in data modeling techniques (conceptual, logical, and physical models); knowledge of database design principles and normalization; experience with data architecture frameworks and methodologies (e.g., TOGAF). Database Technologies: Relational databases: expertise in SQL databases such as MySQL, PostgreSQL, Oracle, and Microsoft SQL Server. NoSQL databases: experience with at least one NoSQL database like MongoDB, Cassandra, or Couchbase for handling unstructured data. Graph databases: proficiency with at least one graph database such as Neo4j, Amazon Neptune, or ArangoDB; understanding of graph data models, including property graphs and RDF (Resource Description Framework). Query languages: experience with at least one query language like Cypher (Neo4j), SPARQL (RDF), or Gremlin (Apache TinkerPop); familiarity with ontologies, RDF Schema, and OWL (Web Ontology Language); exposure to semantic web technologies and standards. Data Integration and ETL (Extract, Transform, Load): proficiency in ETL tools and processes (e.g., Talend, Informatica, Apache NiFi); experience with data integration tools and techniques to consolidate data from various sources.
IoT and Industrial Data Systems: familiarity with Industrial Internet of Things (IIoT) platforms and protocols (e.g., MQTT, OPC UA); experience with IoT data platforms like AWS IoT, Azure IoT Hub, or Google Cloud IoT Core; experience with one or more streaming data platforms like Apache Kafka, Amazon Kinesis, or Apache Flink; ability to design and implement real-time data pipelines; familiarity with processing frameworks such as Apache Storm, Spark Streaming, or Google Cloud Dataflow; understanding of event-driven design patterns and practices; experience with message brokers like RabbitMQ or ActiveMQ; exposure to edge computing platforms like AWS IoT Greengrass or Azure IoT Edge. AI/ML and GenAI: experience working on data readiness for feeding into AI/ML/GenAI applications; exposure to machine learning frameworks such as TensorFlow, PyTorch, or Keras. Cloud Platforms: experience with cloud data services from at least one provider, such as AWS (Amazon Redshift, AWS Glue), Microsoft Azure (Azure SQL Database, Azure Data Factory), or Google Cloud Platform (BigQuery, Dataflow). Data Warehousing and BI Tools: expertise in data warehousing solutions (e.g., Snowflake, Amazon Redshift, Google BigQuery); proficiency with Business Intelligence (BI) tools such as Tableau, Power BI, and QlikView. Data Governance and Security: understanding of data governance principles, data quality management, and metadata management; knowledge of data security best practices, compliance standards (e.g., GDPR, HIPAA), and data masking techniques. Big Data Technologies: experience with big data platforms and tools such as Hadoop, Spark, and Apache Kafka; understanding of distributed computing and data processing frameworks. Excellent communication: superior written and verbal communication skills, with the ability to effectively articulate complex technical concepts to diverse audiences. Problem-solving acumen: a passion for tackling intricate challenges and devising elegant solutions. Collaborative spirit: a track record of successful collaboration with cross-functional teams and stakeholders. Certifications: AWS Certified Data Engineer Associate, Microsoft Certified: Azure Data Engineer Associate, or Google Cloud Certified Professional Data Engineer certification is mandatory. Minimum of 14-18 years of progressive information technology experience. Qualifications: BTech/BE.
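For the graph-database requirement above (Neo4j/Cypher), a minimal hedged sketch with the official neo4j Python driver is shown below; the URI, credentials, and asset model are hypothetical.

```python
# Minimal Neo4j sketch: model a small asset hierarchy and query it with Cypher.
# URI, credentials, and labels are hypothetical.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # MERGE is idempotent: safe to re-run when contextualizing OT assets.
    session.run(
        "MERGE (p:Plant {name: $plant}) "
        "MERGE (l:Line {name: $line}) "
        "MERGE (p)-[:HAS_LINE]->(l)",
        plant="Pune-1", line="Line-A",
    )

    result = session.run(
        "MATCH (p:Plant)-[:HAS_LINE]->(l:Line) RETURN p.name AS plant, l.name AS line"
    )
    for record in result:
        print(record["plant"], "->", record["line"])

driver.close()
```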
Posted 3 months ago
10 - 12 years
18 - 20 Lacs
Bengaluru
Hybrid
Preferred candidate profile: 10+ years of experience, with 5+ years of relevant experience in reputed IT or digital companies. Proven experience in designing and developing scalable solutions with data processing tools (e.g., Apache Spark, NiFi, Airflow, Python, Scala). Hands-on experience in Python, with exposure to the different Python packages used for data engineering. Experience with object storage like MinIO/Ceph, Elasticsearch, Postgres, Grafana, Apache Superset. Maintain data pipelines and perform data normalization and transformations. Knowledge of Scala and Spark is desirable. Experience in containerization (Docker, Kubernetes). Good understanding of different data storage, database and orchestration tools. Experience in designing highly scalable distributed applications and addressing scaling and performance problems. Good exposure working in Azure Cloud. Work closely with data scientists and ML engineers focused on delivering high-quality data and analytic applications on the cloud. Experience in designing relational and non-relational databases (PostgreSQL/Cassandra). Develop and enhance new and existing pipelines to meet ongoing business requirements while keeping performance, reliability, efficiency, and security in mind. Understanding of ELT and ML data pipelines. Exposure to global projects and product-based environments with stakeholder interaction experience. Looking for women candidates for this role; candidates with short notice periods preferred.
Posted 3 months ago