4.0 - 7.0 years
5 - 10 Lacs
Bengaluru
Work from Office
1. Have a good understanding of AWS services, specifically RDS, S3, EC2, VPC, KMS, ECS, Lambda, AWS Organizations, and IAM policy setup, with Python as a main skill. 2. Architect, design, and code database infrastructure deployment using Terraform; should be able to write Terraform modules that deploy database services in AWS. 3. Provide automation solutions using Python Lambdas for repetitive tasks such as running quarterly audits and daily health checks on RDS across multiple accounts. 4. Have a fair understanding of Ansible to automate Postgres infrastructure deployment and repetitive tasks on on-prem servers. 5. Knowledge of Postgres and PL/pgSQL functions. 6. Hands-on experience with Ansible and Terraform and the ability to contribute to ongoing projects with minimal coaching.
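A minimal sketch of the Lambda-style automation described in point 3, a daily RDS health check. The health categories, function names, and region are illustrative assumptions; `describe_db_instances` and its paginator are the actual boto3 RDS calls. The classification logic is kept as a plain function so it can be exercised without AWS credentials:

```python
# Sketch of a daily RDS health check that could run as a Lambda across accounts.
# classify_status() is pure logic; the boto3 call is deferred inside
# check_account() so the helper stays importable without AWS access.
# Status groupings and thresholds here are illustrative assumptions.

HEALTHY = {"available", "backing-up", "storage-optimization"}

def classify_status(status: str) -> str:
    """Map an RDS DBInstanceStatus string to a simple health label."""
    if status in HEALTHY:
        return "healthy"
    if status in {"failed", "storage-full", "incompatible-parameters"}:
        return "unhealthy"
    return "degraded"  # modifying, rebooting, etc. -- worth watching

def check_account(region: str = "us-east-1") -> dict:
    """Summarize instance health in one account (requires AWS credentials)."""
    import boto3  # deferred import: only needed when actually calling AWS
    rds = boto3.client("rds", region_name=region)
    summary = {"healthy": 0, "degraded": 0, "unhealthy": 0}
    for page in rds.get_paginator("describe_db_instances").paginate():
        for db in page["DBInstances"]:
            summary[classify_status(db["DBInstanceStatus"])] += 1
    return summary
```

Running this across multiple accounts would typically mean assuming a cross-account IAM role per account before creating the client.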
Posted 2 weeks ago
7.0 - 12.0 years
11 - 16 Lacs
Bengaluru
Work from Office
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful. Overview about TII: At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations. Target as a tech company? Absolutely. We're the behind-the-scenes powerhouse that fuels Target's passion and commitment to cutting-edge innovation. We anchor every facet of one of the world's best-loved retailers with a strong technology framework that relies on the latest tools and technologies, and the brightest people, to deliver incredible value to guests online and in stores. Target Technology Services is on a mission to offer the systems, tools, and support that guests and team members need and deserve. Our high-performing teams balance independence with collaboration, and we pride ourselves on being versatile, agile, and creative. We drive industry-leading technologies in support of every angle of the business and help ensure that Target operates smoothly, securely, and reliably from the inside out. Role overview: As a Lead Engineer, you serve as the technical anchor for the engineering team that supports a product.
You create, own, and are responsible for the application architecture that best serves the product in its functional and non-functional needs. You identify and drive architectural changes to accelerate feature development or improve the quality of service (or both). You have deep and broad engineering skills and are capable of standing up an architecture in its whole on your own, but you choose to influence a wider team by acting as a force multiplier. Core responsibilities of this job are described within this job description. Job duties may change at any time due to business needs. Use your skills, experience, and talents to be a part of groundbreaking thinking and visionary goals. As a Lead Engineer, you'll take the lead as you: Use your technology acumen to apply and maintain knowledge of current and emerging technologies within specialized area(s) of the technology domain. Evaluate new technologies and participate in decision-making, accounting for several factors such as viability within Target's technical environment, maintainability, and cost of ownership. Initiate and execute research and proof-of-concept activities for new technologies. Lead or set strategy for testing and debugging at the platform or enterprise level. In complex and unstructured situations, serve as an expert resource to create and improve standards and best practices to ensure high-performance, scalable, repeatable, and secure deliverables. Lead the design, lifecycle management, and total cost of ownership of services. Provide the team with thought leadership to promote re-use and develop consistent, scalable patterns. Participate in planning services that have enterprise impact. Provide suggestions for handling routine and moderately complex technical problems, escalating issues when appropriate.
Gather information, data, and input from a wide variety of sources; identify additional resources when appropriate, engage with appropriate stakeholders, and conduct in-depth analysis of information. Develop plans and schedules, estimate resource requirements, and define milestones and deliverables. Monitor workflow and risks; play a leadership role in mitigating risks and removing obstacles. Lead and participate in complex construction, automation, and implementation activities, ensuring successful implementation with architectural and operational requirements met. Establish new standards and best practices to monitor, test, automate, and maintain IT components or systems. Serve as an expert resource in disaster recovery and disaster recovery planning. Stay current with Target's technical capabilities, infrastructure, and technical environment. Develop fully attributed data models, including logical, physical, and canonical. Influence data standards, policies, and procedures. Install, configure, and/or tune data management solutions with minimal guidance. Monitor data management solution(s) and identify optimization opportunities. About you: Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field. 7+ years of hands-on software development experience, including at least one full-cycle project implementation. Expertise in Target's technology landscape, with a solid understanding of industry trends, competitors' products, and differentiating features. Proficient in Kotlin with advanced knowledge of microservices architecture and event-driven architectures. Strong experience with high-priority, large-scale applications capable of processing millions of records. Proven ability to design and implement highly scalable and observable systems.
Working on mission-critical applications with large transaction volumes and high throughput. Building systems that are scalable, with a focus on performance and resilience. Leveraging cutting-edge tools for data correlation and pattern analysis. Experience with Scala, Hadoop, and other Big Data technologies is preferred. Strong retail domain knowledge with experience working on multi-channel platforms. Hands-on experience with high-performance messaging platforms that are highly scalable. Useful Links: Life at Target: https://india.target.com/ Benefits: https://india.target.com/life-at-target/workplace/benefits Culture: https://india.target.com/life-at-target/belonging
Posted 2 weeks ago
8.0 - 13.0 years
5 - 9 Lacs
Mumbai
Work from Office
Project Role: Application Developer Project Role Description: Design
Posted 2 weeks ago
5.0 - 10.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Role & responsibilities:
- 5+ years of hands-on experience with Apache NiFi, including developing, managing, and optimizing complex data flows in production environments.
- Proven experience with Cloudera NiFi (CDP Data Flow) in enterprise environments, including integration with Cloudera Manager.
- Experience migrating NiFi flows across major version upgrades, with a strong understanding of backward compatibility.
- Strong proficiency in Groovy scripting for the ExecuteScript and InvokeScriptedProcessor processors.
- Solid understanding of SSH and SFTP protocols, including authentication schemes (key-based, password), session negotiation, and file-permission handling in NiFi processors (e.g., ListSFTP, FetchSFTP, PutSFTP).
- Good grasp of data encryption mechanisms, key management, and secure flowfile handling using processors like EncryptContent.
- Experience integrating NiFi with MongoDB, including reading/writing documents via processors like GetMongo, PutMongo, and QueryMongo.
- Experience with Apache Kafka, including producing and consuming from Kafka topics using NiFi (PublishKafka, ConsumeKafka) and handling schema evolution with Confluent Schema Registry.
- Strong knowledge of Red Hat Enterprise Linux (RHEL) environments, including systemd services, filesystem permissions, log rotation, and resource tuning for JVM-based applications like NiFi.
NiFi-Specific Technical Requirements:
- In-depth knowledge of NiFi flow design principles, including proper use of queues, back pressure, prioritizers, and connection tuning.
- Mastery of controller services, including SSLContextService, DBCPConnectionPool, and RecordReader/RecordWriter services.
- Experience with record-based processing using Avro, JSON, and CSV schemas and Record processors like ConvertRecord, QueryRecord, and LookupRecord.
- Ability to debug and optimize NiFi flows using Data Provenance, bulletins, and log analysis.
- Familiarity with custom processor development in Java/Groovy (optional but preferred).
- Experience setting up secure NiFi clusters, configuring user authentication (LDAP, OIDC), TLS certificates, and access policies.
- Proficiency in parameter contexts, the variable registry, and flow versioning using NiFi Registry.
- Understanding of the Zero-Master clustering model, node coordination, and the site-to-site protocol.
- Experience deploying and monitoring NiFi in high-availability, production-grade environments, including using Prometheus/Grafana or Cloudera Manager for metrics and alerting.
Preferred Qualifications:
- Experience working in regulated or secure environments with strict data handling and audit requirements.
- Familiarity with DevOps workflows, including version-controlled flow templates (JSON/XML), CI/CD integration for NiFi Registry, and automated deployment strategies.
- Strong written and verbal communication skills, with the ability to document flows and onboard other engineers.
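For the monitoring side of the role, a rough sketch of polling NiFi's REST status endpoint from Python as a complement to Data Provenance and bulletins. The `/nifi-api/flow/status` path and `controllerStatus` fields follow the standard NiFi REST API but should be verified against your NiFi version; the base URL and backlog threshold are made-up examples, and the decision logic is a plain function testable without a cluster:

```python
# Sketch: polling NiFi's REST API to watch the cluster-wide FlowFile backlog.
# Endpoint path and field names should be checked against your NiFi version;
# the threshold and URL are illustrative assumptions.
import json
from urllib.request import urlopen

def needs_attention(controller_status: dict, max_queued: int = 10_000) -> bool:
    """Pure check: is the FlowFile backlog above the alerting threshold?"""
    return controller_status.get("flowFilesQueued", 0) > max_queued

def poll(base_url: str = "https://nifi.example.com:8443") -> bool:
    """Fetch /nifi-api/flow/status and evaluate it (needs network + auth)."""
    with urlopen(f"{base_url}/nifi-api/flow/status") as resp:
        status = json.load(resp)["controllerStatus"]
    return needs_attention(status)
```

In a secured cluster the request would also need a bearer token or client TLS certificate, omitted here for brevity.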
Posted 2 weeks ago
7.0 - 12.0 years
11 - 15 Lacs
Gurugram
Work from Office
Project description: We are looking for an experienced Data Engineer to contribute to the design, development, and maintenance of our database systems. This role will work closely with our software development and IT teams to ensure the effective implementation and management of database solutions that align with the client's business objectives. Responsibilities: The successful candidate will be responsible for managing technology in projects and providing technical guidance/solutions for work completion: (1) provide technical guidance/solutions; (2) ensure process compliance in the assigned module and participate in technical discussions/reviews; (3) prepare and submit status reports to minimize exposure and risks on the project or close escalations; (4) be self-organized and focused on delivering quality software on time. Skills Must have: At least 7 years of experience in development on data-specific projects. Must have working knowledge of the streaming-data Kafka framework (kSQL/MirrorMaker etc.). Strong programming skills in at least one of these languages: Groovy/Java. Good knowledge of data structures, ETL design, and storage. Must have worked in streaming-data environments and pipelines. Experience in near-real-time/streaming data pipeline development using Apache Spark, StreamSets, Apache NiFi, or similar frameworks. Nice to have: N/A
Posted 3 weeks ago
15.0 - 20.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Business Area: Support Seniority Level: Mid-Senior level Job Description: At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world's largest enterprises. As a Technical Support Account Manager (TSAM) at Cloudera, you will play a pivotal role in ensuring customer success post-sale. This is a technical, customer-facing role responsible for guiding customers through the installation, implementation, and maintenance of Cloudera solutions. You will work closely with customers to ensure product adoption, optimize performance, and drive value realization. As a Technical Support Account Manager you will... Provide post-sales technical expertise to support customers in deploying and maintaining Cloudera solutions. Serve as a trusted advisor, understanding customer environments and proactively addressing technical challenges. Ensure Cloudera's platform is fully functioning according to specifications and aligned with customer use cases. Lead meetings and provide status updates for identified issues (case trends, performance challenges, cost optimization, user/account health concerns, etc.). Collaborate with internal teams to resolve escalations, monitor performance trends, and optimize customer experience. Deliver technical guidance and best practices, facilitating smooth product adoption and ongoing success. Act as a liaison between customers and Cloudera's sales, support, and engineering teams to ensure continuous improvement and alignment. We are excited if you have... Strong technical acumen with experience in data warehousing, cloud computing, or enterprise software solutions. Proven ability to engage with customers, understand their business needs, and drive technical outcomes.
Experience in post-sales support, implementation, or technical account management. Excellent problem-solving, communication, and stakeholder management skills. Passion for delivering an exceptional customer experience and driving product success. This is an individual contributor role with no sales incentives, focused purely on customer enablement and technical success. If you thrive in a dynamic, customer-centric environment and want to be part of a team that shapes the future of Cloudera, we'd love to hear from you! What you can expect from us: Generous PTO Policy Support work-life balance with Unplugged Days Flexible WFH Policy Mental & Physical Wellness programs Phone and Internet Reimbursement program Access to Continued Career Development Comprehensive Benefits and Competitive Packages Paid Volunteer Time Employee Resource Groups EEO/VEVRAA #LI-AB1 #LI-HYBRID #LI-REMOTE
Posted 3 weeks ago
7.0 - 9.0 years
11 - 15 Lacs
Hyderabad
Work from Office
Responsibilities: Design and implement Cloudera-based data platforms, including cluster sizing, configuration, and optimization. Install, configure, and administer Cloudera Manager and CDP clusters, managing all aspects of the cluster lifecycle. Monitor and troubleshoot platform performance, identifying and resolving issues in a timely manner. Review and maintain the data ingestion and processing pipelines on the Cloudera platform. Collaborate with data engineers and data scientists to design and optimize data models, ensuring efficient data storage and retrieval. Implement and enforce security measures for the Cloudera platform, including authentication, authorization, and encryption. Manage platform user access and permissions, ensuring compliance with data privacy regulations and internal policies. Experience in creating technology road maps for the Cloudera platform. Stay up-to-date with the latest Cloudera and big data technologies, and recommend and implement relevant updates and enhancements to the platform. Experience in planning, testing, and executing upgrades involving Cloudera components and ensuring platform stability and security. Document platform configurations, processes, and procedures, and provide training and support to other team members as needed. Requirements: Proven experience as a Cloudera platform engineer or similar role, with a strong understanding of Cloudera Manager and CDH clusters. Expertise in designing, implementing, and maintaining scalable and high-performance data platforms using Cloudera technologies such as Hadoop, Spark, Hive, and Kafka. Strong knowledge of big data concepts and technologies, data modeling, and data warehousing principles. Familiarity with data security and compliance requirements, and experience implementing security measures for Cloudera platforms. Proficiency in Linux system administration and scripting languages (e.g., Shell, Python).
Strong troubleshooting and problem-solving skills, with the ability to diagnose and resolve platform issues quickly. Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams. Experience with Azure Data Factory/Azure Databricks/Azure Synapse is a plus.
Posted 3 weeks ago
10.0 - 14.0 years
35 - 40 Lacs
Hyderabad
Work from Office
Skills: Cloudera, Big Data, Hadoop, Spark, Kafka, Hive, CDH Clusters. Design and implement Cloudera-based data platforms, including cluster sizing, configuration, and optimization. Install, configure, and administer Cloudera Manager and CDP clusters, managing all aspects of the cluster lifecycle. Monitor and troubleshoot platform performance, identifying and resolving issues promptly. Review and maintain the data ingestion and processing pipelines on the Cloudera platform. Collaborate with data engineers and data scientists to design and optimize data models, ensuring efficient data storage and retrieval. Implement and enforce security measures for the Cloudera platform, including authentication, authorization, and encryption. Manage platform user access and permissions, ensuring compliance with data privacy regulations and internal policies. Experience in creating technology road maps for the Cloudera platform. Stay up-to-date with the latest Cloudera and big data technologies, and recommend and implement relevant updates and enhancements to the platform. Experience in planning, testing, and executing upgrades involving Cloudera components and ensuring platform stability and security. Document platform configurations, processes, and procedures, and provide training and support to other team members as needed. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. Proven experience as a Cloudera platform engineer or similar role, with a strong understanding of Cloudera Manager and CDH clusters. Expertise in designing, implementing, and maintaining scalable and high-performance data platforms using Cloudera technologies such as Hadoop, Spark, Hive, and Kafka. Strong knowledge of big data concepts and technologies, data modeling, and data warehousing principles. Familiarity with data security and compliance requirements, and experience implementing security measures for Cloudera platforms. Proficiency in Linux system administration and scripting languages (e.g., Shell, Python). Strong troubleshooting and problem-solving skills, with the ability to diagnose and resolve platform issues quickly. Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams. Experience with Azure Data Factory/Azure Databricks/Azure Synapse is a plus.
Posted 3 weeks ago
3.0 - 8.0 years
5 - 8 Lacs
Mumbai
Work from Office
Role Overview: Seeking an experienced Apache Airflow specialist to design and manage data orchestration pipelines for batch/streaming workflows in a Cloudera environment. Key Responsibilities: * Design, schedule, and monitor DAGs for ETL/ELT pipelines * Integrate Airflow with Cloudera services and external APIs * Implement retries, alerts, logging, and failure recovery * Collaborate with data engineers and DevOps teams Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Skills Required: * Experience: 3-8 years * Expertise in Airflow 2.x, Python, Bash * Knowledge of CI/CD for Airflow DAGs * Proven experience with Cloudera CDP, Spark/Hive-based data pipelines * Integration with Kafka, REST APIs, databases
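The retry/alert/failure-recovery responsibility above is what Airflow's `retries`, `retry_delay`, and `on_failure_callback` task arguments provide natively. The same pattern in plain Python, as an illustrative sketch (the alert callback is a stand-in for a Slack/pager hook):

```python
# Plain-Python sketch of the retry-then-alert pattern; in Airflow itself this
# is configured declaratively via retries, retry_delay, and
# on_failure_callback on a task. The alert hook here is an assumption.
import time

def run_with_retries(task, retries: int = 3, retry_delay: float = 0.0,
                     on_failure=None):
    """Call task(); retry up to `retries` times, then alert and re-raise."""
    for attempt in range(retries + 1):
        try:
            return task()
        except Exception as exc:
            if attempt == retries:
                if on_failure:
                    on_failure(exc)  # e.g. page/Slack, like on_failure_callback
                raise
            time.sleep(retry_delay)  # like Airflow's retry_delay
```

In an actual DAG the equivalent would be `PythonOperator(..., retries=3, retry_delay=timedelta(minutes=5), on_failure_callback=alert)`.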
Posted 3 weeks ago
3.0 - 5.0 years
12 - 16 Lacs
Kochi
Work from Office
As Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases. Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies (Apache Spark, Kafka, any cloud computing, etc.). Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total 3-5+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills. Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years of experience on Cloud Data Platforms on AWS; experience in AWS EMR/AWS Glue/Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Preferred technical and professional experience Certification in AWS and Databricks or Cloudera Spark certified developers.
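One possible shape of such a Spark ingestion step (PySpark; the paths, column names, and app name are illustrative assumptions). The cleaning logic is kept as a plain function so the transform can be unit-tested without a SparkSession:

```python
# Sketch of a Spark batch ingestion step: read JSON, normalize records, write
# Parquet. Paths and columns are illustrative assumptions; clean_row() is pure
# Python so it is testable outside Spark.

def clean_row(row: dict) -> dict:
    """Normalize one record: trim the id, coerce amount to float."""
    return {
        "id": str(row.get("id", "")).strip(),
        "amount": float(row.get("amount") or 0.0),
    }

def run_pipeline(in_path: str, out_path: str) -> None:
    from pyspark.sql import SparkSession  # deferred: needs pyspark installed
    spark = SparkSession.builder.appName("ingest").getOrCreate()
    df = spark.read.json(in_path)
    cleaned = df.rdd.map(lambda r: clean_row(r.asDict())).toDF()
    cleaned.write.mode("overwrite").parquet(out_path)
```

A streaming variant would swap `spark.read` for `spark.readStream` with a Kafka source and a checkpointed sink.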
Posted 3 weeks ago
4.0 - 9.0 years
12 - 16 Lacs
Kochi
Work from Office
As Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases. Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies (Apache Spark, Kafka, any cloud computing, etc.). Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years of experience on Cloud Data Platforms on AWS; experience in AWS EMR/AWS Glue/Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka. Preferred technical and professional experience Certification in AWS and Databricks or Cloudera Spark certified developers.
Posted 3 weeks ago
8.0 - 13.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Educational: Bachelor of Engineering. Service Line: Strategic Technology Group. Responsibilities: Power Programmer is an important initiative within Global Delivery to develop a team of Full Stack Developers who will work on complex engineering projects, platforms, and marketplaces for our clients using emerging technologies. They will be ahead of the technology curve and will be constantly enabled and trained to be Polyglots. They are go-getters with a drive to solve end-customer challenges and will spend most of their time designing and coding. End-to-end contribution to technology-oriented development projects. Providing solutions with minimum system requirements and in Agile mode. Collaborate with Power Programmers, the open source community, and tech user groups. Custom development of new platforms and solutions. Opportunities: Work on large-scale digital platforms and marketplaces. Work on complex engineering projects using cloud-native architecture. Work with innovative Fortune 500 companies on cutting-edge technologies. Co-create and develop new products and platforms for our clients. Contribute to open source and continuously upskill in the latest technology areas. Incubating tech user groups. Technical and Professional: Big Data: Spark, Scala, Hive, Kafka. Preferred Skills: Technology-Big Data-Hbase, Technology-Big Data-Sqoop, Technology-Java-Apache-Scala, Technology-Functional Programming-Scala, Technology-Big Data - Data Processing-Map Reduce, Technology-Big Data - Data Processing-Spark
Posted 3 weeks ago
4.0 - 9.0 years
10 - 12 Lacs
Bengaluru, Doddakannell, Karnataka
Work from Office
We are seeking a highly skilled Data Engineer with expertise in ETL techniques, programming, and big data technologies. The candidate will play a critical role in designing, developing, and maintaining robust data pipelines, ensuring data accuracy, consistency, and accessibility. This role involves collaboration with cross-functional teams to enrich and maintain a central data repository for advanced analytics and machine learning. The ideal candidate should have experience with cloud-based data platforms, data modeling, and data governance processes. Location - Bengaluru, Doddakannell, Karnataka, Sarjapur Road
Posted 3 weeks ago
3.0 - 6.0 years
25 - 30 Lacs
Chennai
Work from Office
Zalaris is looking for Senior Data Engineer to join our dynamic team and embark on a rewarding career journey Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 3 weeks ago
3.0 - 6.0 years
25 - 30 Lacs
Pune
Work from Office
Diverse Lynx is looking for Data Engineer to join our dynamic team and embark on a rewarding career journey Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 3 weeks ago
3.0 - 6.0 years
8 - 17 Lacs
Chennai, Bengaluru, Mumbai (All Areas)
Work from Office
Role & responsibilities Duration: 6 months + extendable. Job Locations: Any Protiviti Preferred/mandatory Skills: Role Focus: Support UAT and data validation during migration from legacy Cloudera to a modern on-prem big data platform. Core Tasks: Execute UAT, validate data pipelines (Hive, Impala, Spark, CDSW), perform quality checks, write SQL queries, and document test cases. Must-Have Skills: 3+ years in big data UAT/QA, strong SQL, experience with Cloudera tools, data validation, and platform migration exposure. Nice to Have: PySpark, Jupyter, data governance knowledge, telecom or large-enterprise experience.
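A sketch of the kind of data-validation check this migration UAT involves: comparing row counts and an order-insensitive checksum between the legacy and migrated extracts. In practice the rows would come from Hive/Impala/Spark queries; plain iterables are used here so the logic stands alone:

```python
# Migration UAT validation sketch: fingerprint two extracts and compare.
# The XOR of per-row hashes makes the checksum independent of row order,
# which matters because the two platforms may return rows differently sorted.
import hashlib

def table_fingerprint(rows) -> tuple[int, str]:
    """Return (row_count, order-insensitive checksum) for an iterable of dict rows."""
    count, digest = 0, 0
    for row in rows:
        count += 1
        h = hashlib.sha256(repr(sorted(row.items())).encode()).hexdigest()
        digest ^= int(h[:16], 16)  # XOR: commutative, so order does not matter
    return count, format(digest, "016x")

def tables_match(legacy_rows, migrated_rows) -> bool:
    """True when both extracts have the same count and checksum."""
    return table_fingerprint(legacy_rows) == table_fingerprint(migrated_rows)
```

The equivalent in SQL is typically a `COUNT(*)` plus an aggregate hash per table, documented alongside each test case.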
Posted 3 weeks ago
0.0 - 3.0 years
4 - 5 Lacs
Pune
Work from Office
Job Description: Essential Job Functions: Contribute to data engineering tasks and projects, including data processing and data integration. Support data pipeline development and maintenance. Collaborate with colleagues to meet data requirements and ensure data quality. Assist in data analysis for insights and reporting. Follow data engineering standards and best practices. Pursue opportunities for continuous learning and growth in the data engineering domain. Learn from experienced data engineers and analysts within the team. Use data engineering tools and techniques to accomplish tasks. Basic Qualifications: Bachelor's degree in a relevant field or equivalent combination of education and experience. Typically, 2+ years of relevant work experience. Proven experience in data engineering. Proficiency in data engineering tools and technologies. A continuous learner who stays abreast of industry knowledge and technology. Other Qualifications: Advanced degree in a relevant field a plus. Relevant certifications, such as Google Cloud Professional Data Engineer or Cloudera Certified Data Analyst, a plus. At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We're committed to fostering an inclusive environment where everyone can thrive. Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process.
DXC does not make offers of employment via social media networks and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor ask a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here .
Posted 3 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Apache NiFi:
- 5+ years of hands-on experience with Apache NiFi, including developing, managing, and optimizing complex data flows in production environments.
- Proven experience with Cloudera NiFi (CDP Data Flow) in enterprise environments, including integration with Cloudera Manager.
- Experience migrating NiFi flows across major version upgrades, with a strong understanding of backward compatibility.
- Strong proficiency in Groovy scripting for the ExecuteScript and InvokeScriptedProcessor processors.
- Solid understanding of SSH and SFTP protocols, including authentication schemes (key-based, password), session negotiation, and file-permission handling in NiFi processors (e.g., ListSFTP, FetchSFTP, PutSFTP).
- Good grasp of data encryption mechanisms, key management, and secure flowfile handling using processors like EncryptContent.
- Experience integrating NiFi with MongoDB, including reading/writing documents via processors like GetMongo, PutMongo, and QueryMongo.
- Experience with Apache Kafka, including producing and consuming from Kafka topics using NiFi (PublishKafka, ConsumeKafka) and handling schema evolution with Confluent Schema Registry.
- Strong knowledge of Red Hat Enterprise Linux (RHEL) environments, including systemd services, filesystem permissions, log rotation, and resource tuning for JVM-based applications like NiFi.
NiFi-Specific Technical Requirements:
- In-depth knowledge of NiFi flow design principles, including proper use of queues, back pressure, prioritizers, and connection tuning.
- Mastery of controller services, including SSLContextService, DBCPConnectionPool, and RecordReader/RecordWriter services.
- Experience with record-based processing using Avro, JSON, and CSV schemas and Record processors like ConvertRecord, QueryRecord, and LookupRecord.
- Ability to debug and optimize NiFi flows using Data Provenance, bulletins, and log analysis.
- Familiarity with custom processor development in Java/Groovy (optional but preferred).
* Experience setting up secure NiFi clusters, configuring user authentication (LDAP, OIDC), TLS certificates, and access policies.
* Proficiency in parameter contexts, the variable registry, and flow versioning using NiFi Registry.
* Understanding of the zero-master clustering model, node coordination, and the site-to-site protocol.
* Experience deploying and monitoring NiFi in high-availability, production-grade environments, including using Prometheus/Grafana or Cloudera Manager for metrics and alerting.
Preferred Qualifications:
* Experience working in regulated or secure environments with strict data handling and audit requirements.
* Familiarity with DevOps workflows, including version-controlled flow templates (JSON/XML), CI/CD integration for NiFi Registry, and automated deployment strategies.
* Strong written and verbal communication skills, with the ability to document flows and onboard other engineers.
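The record-based processing the listing calls out (ConvertRecord with a CSVReader and a JSON writer) boils down to parsing delimited rows against a declared schema and re-serializing them. A minimal plain-Python sketch of that transformation, purely illustrative (it does not use any NiFi API; the function name and schema handling are assumptions):

```python
import csv
import io
import json

def convert_records(csv_text, schema_fields):
    """Convert CSV rows to a JSON array of objects -- roughly the job
    NiFi's ConvertRecord does with a CSVReader and JSONRecordSetWriter.
    Columns absent from the declared schema are dropped."""
    reader = csv.DictReader(io.StringIO(csv_text))
    records = []
    for row in reader:
        # Keep only the fields the schema declares, in schema order.
        records.append({f: row.get(f) for f in schema_fields})
    return json.dumps(records)

# A flowfile's content as it might arrive from FetchSFTP (sample data).
flowfile = "id,amount,currency\n1,10.50,USD\n2,7.25,EUR\n"
print(convert_records(flowfile, ["id", "amount"]))
```

In NiFi itself the schema would live in a schema registry or the reader's schema text property; the point here is only the shape of the record transform.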
Posted 3 weeks ago
5.0 - 8.0 years
6 - 11 Lacs
Navi Mumbai
Work from Office
Skill required: Network Billing Operations - Problem Management
Designation: Network & Svcs Operation Senior Analyst
Qualifications: Any Graduation
Years of Experience: 5 to 8 years
About Accenture: Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com
What would you do? Help transform back office and network operations, reduce time to market, and grow revenue by improving customer experience and capex efficiency and reducing cost-to-serve. Good customer support experience is preferred, along with good networking knowledge. Manage problems caused by information technology infrastructure errors to minimize their adverse impact on business and to prevent their recurrence by seeking the root cause of those incidents and initiating actions to improve or correct the situation.
What are we looking for?
* 5 years of programming skills at an advanced level, covering maintenance of existing and creation of new queries via SQL scripts; Python and PySpark programming skills; experience with Databricks; Palantir is an advantage
Other skills:
* Must be self-motivated and understand short turnaround expectations
* Desire to learn and understand data models and billing processes
* Critical thinking
* Experience with reporting and metrics; strong numerical skills
* Experience in expense, billing, or financial management
* Experience in process/system management
* Good organizational skills; self-disciplined, systematic approach with good interpersonal skills
* Flexible, analytical mind, problem solver
* Knowledge of telecom products and services
Roles and Responsibilities:
* In this role you are required to analyze and solve increasingly complex problems
* Your day-to-day interactions are with peers within Accenture
* You are likely to have some interaction with clients and/or Accenture management
* You will be given minimal instruction on daily work/tasks and a moderate level of instruction on new assignments
* Decisions that you make impact your own work and may impact the work of others
* In this role you would be an individual contributor and/or oversee a small work effort and/or team
* Please note that this role may require you to work in rotational shifts
Qualification: Any Graduation
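The core technical task described here, maintaining SQL queries that aggregate billing data, can be sketched with Python's built-in sqlite3 module. The table and column names below are hypothetical, chosen only to illustrate the kind of aggregate query such a role maintains:

```python
import sqlite3

# Hypothetical billing table; schema and names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE billing (account TEXT, period TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO billing VALUES (?, ?, ?)",
    [("A1", "2024-01", 120.0), ("A1", "2024-02", 80.0), ("B2", "2024-01", 200.0)],
)

# A typical reporting query: total billed per account across periods.
rows = conn.execute(
    "SELECT account, SUM(amount) AS total FROM billing "
    "GROUP BY account ORDER BY account"
).fetchall()
print(rows)  # [('A1', 200.0), ('B2', 200.0)]
```

In practice the same GROUP BY pattern would run against Databricks or another warehouse rather than SQLite; only the engine changes, not the query shape.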
Posted 3 weeks ago
3.0 - 5.0 years
5 - 9 Lacs
Navi Mumbai
Work from Office
Skill required: Network Billing Operations - Problem Management
Designation: Network & Svcs Operation Analyst
Qualifications: Any Graduation
Years of Experience: 3 to 5 years
About Accenture: Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com
What would you do? Help transform back office and network operations, reduce time to market, and grow revenue by improving customer experience and capex efficiency and reducing cost-to-serve. Good customer support experience is preferred, along with good networking knowledge. Manage problems caused by information technology infrastructure errors to minimize their adverse impact on business and to prevent their recurrence by seeking the root cause of those incidents and initiating actions to improve or correct the situation.
What are we looking for?
* 5 years of programming skills at an advanced level, covering maintenance of existing and creation of new queries via SQL scripts; Python and PySpark programming skills; experience with Databricks; Palantir is an advantage
Other skills:
* Must be self-motivated and understand short turnaround expectations
* Desire to learn and understand data models and billing processes
* Critical thinking
* Experience with reporting and metrics; strong numerical skills
* Experience in expense, billing, or financial management
* Experience in process/system management
* Good organizational skills; self-disciplined, systematic approach with good interpersonal skills
* Flexible, analytical mind, problem solver
* Knowledge of telecom products and services
Roles and Responsibilities:
* In this role you are required to analyze and solve lower-complexity problems
* Your day-to-day interaction is with peers within Accenture before updating supervisors
* In this role you may have limited exposure to clients and/or Accenture management
* You will be given moderate-level instruction on daily work tasks and detailed instructions on new assignments
* The decisions you make impact your own work and may impact the work of others
* You will be an individual contributor as part of a team, with a focused scope of work
* Please note that this role may require you to work in rotational shifts
Qualification: Any Graduation
Posted 3 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
Responsibilities:
* Build data pipelines to ingest, process, and transform data from files, streams, and databases
* Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS
* Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform
* Develop streaming pipelines
* Work with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and cloud computing
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
* Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
* Minimum 3 years of experience on cloud data platforms on AWS
* Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
* Good to excellent SQL skills
* Exposure to streaming solutions and message brokers such as Kafka
Preferred technical and professional experience:
* Certification in AWS and Databricks, or Cloudera Certified Spark Developer
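The ingest-and-transform work this role describes usually reduces to a keyed aggregation. A minimal plain-Python sketch of that logic (sample data invented for illustration; a real Spark job would express the same reduction as `df.groupBy("key").sum("value")` or `rdd.reduceByKey(lambda a, b: a + b)`):

```python
from collections import defaultdict

# Event stream as (key, value) pairs -- a stand-in for records read
# from files, Kafka topics, or database tables.
events = [("clicks", 3), ("views", 10), ("clicks", 2), ("views", 5)]

# Reduce-by-key aggregation: sum all values sharing a key.
totals = defaultdict(int)
for key, value in events:
    totals[key] += value

print(dict(totals))  # {'clicks': 5, 'views': 15}
```

Spark's value is running exactly this reduction in parallel across partitions; the per-key logic is unchanged.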
Posted 4 weeks ago
5.0 - 10.0 years
22 - 27 Lacs
Bengaluru
Work from Office
Responsibilities:
* Create solution outlines and macro designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles
* Contribute to pre-sales and sales support through RfP responses, solution architecture, planning, and estimation
* Contribute to reusable component/asset/accelerator development to support capability development
* Participate in customer presentations as a Platform Architect / Subject Matter Expert on big data, Azure Cloud, and related technologies
* Participate in customer PoCs to deliver the outcomes
* Participate in delivery reviews, product reviews, and quality assurance, and act as design authority
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
* Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems
* Experience in data engineering and architecting data platforms
* Experience in architecting and implementing data platforms on the Azure Cloud Platform; Azure experience is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow
* Experience in the big data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks
Preferred technical and professional experience:
* Experience in architecting complex data platforms on the Azure Cloud Platform and on-prem
* Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric
* Exposure to data cataloging and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.
Posted 4 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Mumbai
Work from Office
Role Overview: Lead the architectural design and implementation of a secure, scalable Cloudera-based Data Lakehouse for one of India’s top public sector banks.
Key Responsibilities:
* Design end-to-end Lakehouse architecture on Cloudera
* Define data ingestion, processing, storage, and consumption layers
* Guide data modeling, governance, lineage, and security best practices
* Define migration roadmap from existing DWH to CDP
* Lead reviews with client stakeholders and engineering teams
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
* Proven experience with Cloudera CDP, Spark, Hive, HDFS, Iceberg
* Deep understanding of Lakehouse patterns and data mesh principles
* Familiarity with data governance tools (e.g., Apache Atlas, Collibra)
* Banking/FSI domain knowledge highly desirable
Posted 4 weeks ago
6.0 - 11.0 years
8 - 15 Lacs
Noida
Work from Office
We are hiring for the position of "Hadoop Admin".
Skill Set: Hadoop, Cloudera, big data, Spark, Hive, HDFS, YARN, Kafka, SQL databases, Ranger
Experience: 7 years
Location: Noida, Sector-135
Work Mode: Work from Office
Budget: 14-15 LPA
Posted 4 weeks ago
10.0 - 15.0 years
35 - 50 Lacs
Mumbai
Work from Office
Overview of the Company: Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries.
Team Overview: The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution!
About the role:
Title: Lead Data Engineer
Location: Mumbai
Responsibilities:
* End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow.
* Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the evolution of the team's data pipeline framework.
* Data Architecture & Solutions: Contribute to data architecture design, applying expertise in data modelling, storage, and retrieval.
* Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices.
* Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights.
* Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth.
Qualification Details:
* Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field.
* Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts.
* Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.).
* Database Expertise: Excellent SQL querying skills and a strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus.
* End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks, including streaming real-time data.
* Cloud Expertise: Knowledge of cloud technologies such as Azure HDInsight, Synapse, and Event Hub, and GCP Dataproc, Dataflow, and BigQuery.
* CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation.
Desired Skills & Attributes:
* Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively.
* Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders).
* Continuous Learning & Adaptability: A demonstrated passion for staying up-to-date with emerging data technologies and a willingness to adapt to new tools.
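The "reusable pipeline components" responsibility above amounts to building stages that compose cleanly. A minimal sketch of that idea in plain Python (stage names and sample data are invented for illustration; a production framework would add typing, error handling, and I/O connectors):

```python
from functools import reduce

def compose(*steps):
    """Chain pipeline stages so each stage's output feeds the next --
    the bare skeleton of a reusable-component pipeline framework."""
    return lambda data: reduce(lambda acc, step: step(acc), steps, data)

# Illustrative stages: parse, validate, and project raw delimited records.
parse = lambda lines: [line.split(",") for line in lines]
drop_invalid = lambda rows: [r for r in rows if len(r) == 2]
amounts = lambda rows: [float(v) for _, v in rows]

pipeline = compose(parse, drop_invalid, amounts)
print(pipeline(["a,1.5", "bad", "b,2.5"]))  # [1.5, 2.5]
```

Because each stage is an ordinary function, stages can be unit-tested alone and reassembled into new pipelines, which is what makes the components reusable.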
Posted 4 weeks ago