
128 Apache NiFi Jobs - Page 2

JobPe aggregates listings so they are easy to find in one place, but you apply directly on the original job portal.

4.0 - 9.0 years

3 - 4 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Work from Office

Integration Consultant - o9 Platform
Locations: Bangalore | Pune | Hyderabad | Mumbai | PAN India
Experience: 4 to 15 Years
Key Responsibilities: Act as the Integration Consultant for o9 implementation projects; understand and design using the o9 platform's data models, pipelines, and structures; analyze customer data for quality, completeness & technical alignment; collaborate on technical design and data gathering & suggest optimizations; configure batch schedules for regular integrations; implement complete E2E integration from external systems to the o9 platform.
Technical Skills Required: Strong in SQL, PySpark, Python, Spark SQL, and ETL tools; experience with SQL Server / Oracle (DDL, DML, stored procedures); must have delivered one end-to-end o9 integration project.
Nice to Have: Knowledge of Airflow, Delta Lake, NiFi, Kafka; experience with API-based integrations.
Professional Attributes: Strong communication, problem-solving, and analytical abilities; ability to work independently & collaboratively; positive attitude and proactive approach.
Educational Qualifications: BE/BTech/MCA or Bachelor's/Master's degree in Computer Science or relevant fields.
Take the next big step in your tech career!

Posted 2 weeks ago

Apply

4.0 - 6.0 years

7 - 12 Lacs

Hyderabad

Work from Office

As a Senior Software Engineer - ETL - Python at Incedo, you will be responsible for designing and developing ETL workflows to extract, transform, and load data from various sources to target systems. You will work with data analysts and architects to understand business requirements and translate them into technical solutions. You will be skilled in ETL tools such as Informatica or Talend and have experience in programming languages such as SQL or Python. You will be responsible for writing efficient and reliable code that is easy to maintain and troubleshoot.
Roles & Responsibilities: Develop, maintain, and enhance software applications for Extract, Transform, and Load (ETL) processes; design and implement ETL solutions that are scalable, reliable, and maintainable; develop and maintain ETL code, scripts, and jobs, ensuring they are efficient, accurate, and meet business requirements; troubleshoot and debug ETL code, identifying and resolving issues in a timely manner; collaborate with cross-functional teams, including data analysts, business analysts, and project managers, to understand requirements and deliver solutions that meet business needs; design and implement data integration processes between various systems and data sources; optimize ETL processes to improve performance, scalability, and reliability; create and maintain technical documentation, including design documents, coding standards, and best practices.
Technical Skills Requirements: Proficiency in programming languages such as Python for writing ETL scripts. Knowledge of data transformation techniques such as filtering, aggregation, and joining. Familiarity with ETL frameworks such as Apache NiFi, Talend, or Informatica. Understanding of data profiling, data quality, and data validation techniques. Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team.
Qualifications: 4-6 years of work experience in a relevant field; B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred.
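Illustration (not part of the posting): a minimal sketch of the filter/aggregate/join ETL step this role describes, written in PySpark. The input paths, table names, and columns (orders, customers, amount) are hypothetical.

```python
# Illustrative only: a minimal PySpark ETL step showing the filter/aggregate/join
# pattern mentioned in the posting. Source paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_example").getOrCreate()

orders = spark.read.parquet("/data/raw/orders")        # hypothetical source
customers = spark.read.parquet("/data/raw/customers")  # hypothetical source

# Filter: keep only completed orders
completed = orders.filter(F.col("status") == "COMPLETED")

# Aggregate: total spend per customer
spend = completed.groupBy("customer_id").agg(F.sum("amount").alias("total_spend"))

# Join: enrich with customer attributes, then load to the curated zone
result = spend.join(customers, on="customer_id", how="left")
result.write.mode("overwrite").parquet("/data/curated/customer_spend")
```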

Posted 2 weeks ago

Apply

4.0 - 6.0 years

6 - 10 Lacs

Chennai

Work from Office

As a Senior Software Engineer - ETL - Python at Incedo, you will be responsible for designing and developing ETL workflows to extract, transform, and load data from various sources to target systems. You will work with data analysts and architects to understand business requirements and translate them into technical solutions. You will be skilled in ETL tools such as Informatica or Talend and have experience in programming languages such as SQL or Python. You will be responsible for writing efficient and reliable code that is easy to maintain and troubleshoot.
Roles & Responsibilities: Develop, maintain, and enhance software applications for Extract, Transform, and Load (ETL) processes; design and implement ETL solutions that are scalable, reliable, and maintainable; develop and maintain ETL code, scripts, and jobs, ensuring they are efficient, accurate, and meet business requirements; troubleshoot and debug ETL code, identifying and resolving issues in a timely manner; collaborate with cross-functional teams, including data analysts, business analysts, and project managers, to understand requirements and deliver solutions that meet business needs; design and implement data integration processes between various systems and data sources; optimize ETL processes to improve performance, scalability, and reliability; create and maintain technical documentation, including design documents, coding standards, and best practices.
Technical Skills Requirements: Proficiency in programming languages such as Python for writing ETL scripts. Knowledge of data transformation techniques such as filtering, aggregation, and joining. Familiarity with ETL frameworks such as Apache NiFi, Talend, or Informatica. Understanding of data profiling, data quality, and data validation techniques. Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team.
Qualifications: 4-6 years of work experience in a relevant field; B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

8 - 18 Lacs

Bengaluru

Hybrid

Role: Cyber Data Pipeline Engineer | Experience: 7-14 years | Location: Bengaluru
Overview: We are seeking a skilled and motivated Data Pipeline Engineer to join our team. In this role, you will manage and maintain critical data pipeline platforms that collect, transform, and transmit cyber event data to downstream platforms such as Elasticsearch and Splunk. You will be responsible for ensuring the reliability, scalability, and performance of the pipeline infrastructure while building complex integrations with cloud and on-premises cyber systems. Our key stakeholders are cyber teams including security response, investigations, and insider threat.
Role Profile: A successful applicant will contribute to several important initiatives, including: collaborating with cyber teams to identify, onboard, and integrate new data sources into the platform; designing and implementing data mapping, transformation, and routing processes to meet analytics and monitoring requirements; developing automation tools that integrate with in-house configuration management frameworks and APIs; monitoring the health and performance of the data pipeline infrastructure; acting as a top-level escalation point for complex troubleshooting, working with other infrastructure teams to resolve issues; creating and maintaining detailed documentation for pipeline architecture, processes, and integrations.
Required Skills: Hands-on experience deploying and managing large-scale dataflow products like Cribl, Logstash, or Apache NiFi. Hands-on experience integrating data pipelines with cloud platforms (e.g., AWS, Azure, Google Cloud) and on-premises systems. Hands-on experience in developing and validating field extraction using regular expressions. A solid understanding of operating systems and networking concepts: Linux/Unix system administration, HTTP, and encryption. Good understanding of software version control, deployment, and build tools using DevOps SDLC practices (Git, Jenkins, Jira). Strong analytical and troubleshooting skills. Excellent verbal and written communication skills. Appreciation of Agile methodologies, specifically Kanban.
Desired Skills: Enterprise experience with a distributed event streaming platform like Apache Kafka, AWS Kinesis, Google Pub/Sub, or MQ. Infrastructure automation and integration experience, ideally using Python and Ansible. Familiarity with cybersecurity concepts, event types, and monitoring requirements. Experience in parsing and normalizing data in Elasticsearch using the Elastic Common Schema (ECS).
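Illustration (not part of the posting): a minimal sketch of the regex field-extraction skill the role asks for, in Python. The log format, field names, and sample event are hypothetical.

```python
# Illustrative only: validating a field-extraction regex of the kind used in
# Cribl/Logstash/NiFi parsing rules. The log format below is hypothetical.
import re

PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)\s+"
    r"(?P<src_ip>\d{1,3}(?:\.\d{1,3}){3})\s+"
    r"(?P<action>ALLOW|DENY)\s+"
    r"(?P<user>\S+)"
)

sample = "2024-05-01T12:30:45Z 10.0.0.5 DENY alice"
match = PATTERN.match(sample)
if match:
    event = match.groupdict()  # {'ts': ..., 'src_ip': ..., 'action': ..., 'user': ...}
    print(event)
else:
    print("extraction failed - adjust the pattern or flag the event for review")
```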

Posted 2 weeks ago

Apply

2.0 - 5.0 years

5 - 8 Lacs

Gurugram

Work from Office

Programming languages: Python, Scala.
Machine learning frameworks: scikit-learn, XGBoost, TensorFlow, Keras, PyTorch, spaCy, Gensim, Stanford NLP, NLTK, OpenCV, Spark MLlib.
Machine learning algorithms experience is good to have.
Scheduling experience: Airflow.
Big data / streaming / queues: Apache Spark, Apache NiFi, Apache Kafka, RabbitMQ (any one of them).
Databases: MySQL, MongoDB/Redis/DynamoDB, Hive.
Source control: Git.
Cloud: AWS.
Build and deployment: Jenkins, Docker, Docker Swarm, Kubernetes.
BI tool: QuickSight (preferred), else any BI tool (must have).

Posted 2 weeks ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Pune

Hybrid

1. Experienced with asynchronous programming, multithreading, implementing APIs, and microservices, including Spring Boot. 2. Proficiency with SQL.
Required candidate profile: 8+ years of professional experience in Java 8 or higher; strong expertise in Spring Boot; solid understanding of microservices architecture, Kafka, messaging/streaming stack, JUnit, and code optimization.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

Atomicwork is dedicated to revolutionizing the digital workplace experience by integrating people, processes, and platforms through AI automation. Our team is focused on developing a cutting-edge service management platform that empowers growing businesses to streamline operations and achieve business success. We are currently looking for a talented and driven Data Pipeline Engineer to join our team. As a Data Pipeline Engineer, your main responsibility will be to design, construct, and maintain scalable data pipelines that support our enterprise search capabilities. Your efforts will ensure that data from diverse sources is efficiently ingested, processed, and indexed, facilitating seamless and secure search experiences across the organization.
We prioritize practical skills and a proactive approach over formal qualifications. While proficiency in programming languages like Python, Java, or Scala is essential, experience with data pipeline frameworks such as Apache Airflow and tools like Apache NiFi is highly valued. Familiarity with search platforms like Elasticsearch or OpenSearch, as well as knowledge of data ingestion, transformation, and indexing processes, is also crucial for this role. Additionally, a strong understanding of enterprise search concepts, data security best practices, and cloud platforms like AWS, GCP, or Azure is required. Experience with Model Context Protocol (MCP) would be advantageous.
Your responsibilities as a Data Pipeline Engineer will include designing, developing, and maintaining data pipelines for enterprise search applications, implementing data ingestion processes from various sources, developing data transformation and enrichment processes, integrating with search platforms, ensuring data quality and integrity, monitoring pipeline performance, collaborating with cross-functional teams, implementing security measures, documenting pipeline architecture, processes, and best practices, and staying updated with industry trends in data engineering and enterprise search.
At Atomicwork, you have the opportunity to contribute to the company's growth and development, from conception to execution. Our cultural values emphasize agency, taste, ownership, mastery, impatience, and customer obsession, fostering a positive and innovative workplace environment. We offer competitive compensation and benefits, including a fantastic team, convenient offices across five cities, paid time off, comprehensive health insurance, flexible allowances, and annual outings. If you are excited about the opportunity to work with us, click on the apply button to begin your application. Answer a few questions about yourself and your work, and await further communication from us regarding the next steps. If you have any additional queries or information to share, please feel free to reach out to us at careers@atomicwork.com.
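Illustration (not part of the posting): a minimal sketch of the "ingest, process, and index" step the role describes, using the official Elasticsearch Python client's bulk helper. The host, index name, and documents are hypothetical.

```python
# Illustrative only: indexing transformed documents into Elasticsearch with the
# Python client's bulk helper. Host, index name and documents are hypothetical.
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")

def generate_actions(docs):
    for doc in docs:
        yield {
            "_index": "enterprise-search",   # hypothetical index
            "_id": doc["id"],
            "_source": {
                "title": doc["title"],
                "body": doc["body"],
                "source_system": doc["source"],
            },
        }

docs = [{"id": "1", "title": "VPN policy", "body": "...", "source": "confluence"}]
success, errors = bulk(es, generate_actions(docs), raise_on_error=False)
print(f"indexed={success} errors={len(errors)}")
```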

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

You should have 6-9 years of experience and possess the following skills: - Proficiency in Python programming, including writing clean and efficient code. - Experience with frameworks like FastAPI for building microservices & RESTful APIs, and Pytest for unit testing automation. - Understanding of core AWS services like EC2, S3, Lambda, and RDS. - Knowledge of AWS security best practices, including VPC, security groups, and IAM. - Knowledge of Kubernetes concepts (pods, services, deployments, namespaces, clusters, scaling, monitoring) and YAML files. - Experience with Apache NiFi for automating data flows between systems. - Ability to configure and manage NiFi processors for data ingestion and transformation. - Experience with continuous integration and continuous deployment (CI/CD) pipelines using DevOps tools like Jenkins, Git, Kompass. - Knowledge of managing relational databases on AWS RDS, proficiency in SQL for querying and managing data, and performance tuning. - Experience in executing projects in an Agile environment.
Skills that are good to have: - Knowledge of Oracle Applications R12. - Experience in Oracle PL/SQL for writing and debugging stored procedures, functions, and triggers. - Oracle SOA Suite for building, deploying, and managing service-oriented architectures. - Experience with BPEL (Business Process Execution Language) for orchestrating business processes.
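Illustration (not part of the posting): a minimal sketch of the FastAPI plus Pytest combination listed above. The route, payload, and test are hypothetical.

```python
# Illustrative only: a tiny FastAPI endpoint with a pytest-style unit test using
# FastAPI's TestClient. The route and payload are hypothetical.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/health")
def health():
    return {"status": "ok"}

# In practice the test lives in test_health.py; pytest discovers test_* functions.
client = TestClient(app)

def test_health():
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json() == {"status": "ok"}
```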

Posted 2 weeks ago

Apply

12.0 - 15.0 years

5 - 5 Lacs

Thiruvananthapuram

Work from Office

Role Proficiency: Resolve complex trouble tickets spanning different technologies and fine-tune infrastructure for optimum performance, and/or provide technical, people, and financial management (hierarchical or lateral).
Outcomes:
1) Mentor new team members in understanding customer infrastructure and processes.
2) Review and approve RCAs prepared by the team and drive corrective and preventive actions for permanent resolution.
3) Review problem tickets for timely closure. Measure incident reduction achieved by problem records for showcasing during governance reviews.
4) Provide technical leadership for change implementation.
5) Review CSI initiatives and aggressively drive time-bound closure to achieve optimization and efficiency goals.
6) Drive delivery finance goals to achieve project forecast numbers.
7) Work on proposals and PIPs for identified greenfield opportunities to increase revenue.
Measures of Outcomes:
1) SLA adherence
2) Time-bound resolution of elevated tickets - OLA
3) Manage ticket backlog timelines - OLA
4) Adhere to defined process - number of NCs in internal/external audits
5) Number of KB articles created
6) Number of incidents and change tickets handled
7) Number of elevated tickets resolved
8) Number of successful change tickets
9) % completion of all mandatory training requirements
10) Overall financial goals of the project
11) % of incident reduction by problem management
12) Number of repeated escalations for the same technical issue
Outputs Expected:
Resolution/Response: Daily review of resolution and response SLAs for early intervention in SLA management.
Troubleshooting: Troubleshooting based on available information from previous tickets or consulting with seniors. Participate in online knowledge forums for reference. Convert new steps into KB articles. Perform logical/analytical troubleshooting. Work on problem tickets to identify permanent solutions. Assist and lead technical teams, roping in technology experts for complex issues.
Escalation/Elevation: Escalate within the organization/customer peer group in case of resolution delay. Define OLAs between delivery layers (L1, L2, L3, etc.) and enforce adherence. SPOC for any customer and leadership escalations.
Tickets Backlog/Resolution: Follow up on tickets based on agreed timelines; manage ticket backlogs/last activity as per defined process. Resolve incidents and SRs within agreed timelines. Execute change tickets for infrastructure.
Runbook/KB: Review KB compliance and suggest changes. Initiate and drive periodic SOP reviews with customer stakeholders.
Collaboration: Collaborate with different delivery towers for ticket resolution (within SLA). Resolve L1 tickets with help from the respective tower. Collaborate with other team members for timely resolution of tickets. Actively participate in team/organization-wide initiatives. Coordinate with UST ISMS teams for resolving connectivity-related issues.
Stakeholder Management: Lead customer calls and vendor calls. Organize meetings with different stakeholders. Take ownership of the function's internal communications and related change management.
Strategic: Define the strategy for data management, policy management, and data retention management. Support definition of the IT strategy for the function's relevant scope and be accountable for ensuring the strategy is tracked, benchmarked, and updated for the area owned.
Process Adherence: Thorough understanding of organization- and customer-defined processes. Suggest process improvements and CSI ideas. Adhere to the organization's policies and business conduct.
Process/Efficiency Improvement: Proactively identify opportunities to increase service levels and mitigate any issues in service delivery within the function or across functions. Take accountability for overall productivity efforts within the function, including coordination of function-specific tasks and close collaboration with Finance.
Process Implementation: Coordinate and monitor IT process implementation within the function.
Compliance: Support information governance activities and audit preparations within the function. Act as a function SPOC for IT audits in local sites (incl. preparation, interface to the local organization, mitigation of findings, etc.) and work closely with ISRM (Information Security Risk Management). Coordinate overall objective setting, preparation, and facilitation to achieve consistent objective setting in the function. Coordination support for CSI across all services in CIS and beyond.
Training: On-time completion of all mandatory training requirements of the organization and customer. Provide on-floor training and one-on-one mentorship for new joiners. Complete certification for respective career paths. Explore cross-training possibilities for improved efficiency and career growth.
Performance Management: Update FAST goals in NorthStar; track, report, and seek continuous feedback from peers and manager. Set goals for team members and mentees and provide feedback. Assist new team members in understanding the customer environment, day-to-day operations, and people management (for example roster, transport, and leaves). Prepare weekly/monthly/quarterly governance review slides. Drive finance goals of the account.
Skill Examples:
1) Good communication skills (written, verbal, and email etiquette) to interact with different teams and customers.
2) Modify/create runbooks based on suggested changes from juniors or newly identified steps.
3) Ability to work elevated server tickets to resolution.
4) Networking: a. Troubleshooting skills in static and dynamic routing protocols. b. Should be capable of running NetFlow analyzers in different product lines.
5) Server: a. Skills in installing and configuring Active Directory, DNS, DHCP, DFS, IIS, patch management. b. Excellent troubleshooting skills in various technologies like AD replication, DNS issues, etc. c. Skills in managing high-availability solutions like failover clustering, VMware clustering, etc.
6) Storage and Backup: a. Ability to give recommendations to customers. Perform storage and backup enhancements. Perform change management. b. Skilled in core fabric technology, storage design, and implementation. Hands-on experience with backup and storage command-line interfaces. c. Perform hardware upgrades, firmware upgrades, vulnerability remediation, storage and backup commissioning and de-commissioning, replication setup and management. d. Skilled in server, network, and virtualization technologies. Integration of virtualization, storage, and backup technologies. e. Review technical and architecture diagrams and modify SOPs and documentation based on business requirements. f. Ability to perform the ITSM functions for the storage and backup team; review the quality of the ITSM process followed by the team.
7) Cloud: a. Skilled in any one of the cloud technologies - AWS, Azure, GCP.
8) Tools: a. Skilled in administration and configuration of monitoring tools like CA UIM, SCOM, SolarWinds, Nagios, ServiceNow, etc. b. Skilled in SQL scripting. c. Skilled in building custom reports on availability and performance of IT infrastructure based on customer requirements.
9) Monitoring: a. Skills in monitoring of infrastructure and application components.
10) Database: a. Data modeling and database design; database schema creation and management. b. Identification of data integrity violations so that only accurate and appropriate data is entered and maintained. c. Backup and recovery. d. Web-specific technology expertise for e-Biz, Cloud, etc.; examples include XML, CGI, Java, Ruby, firewalls, SSL, and so on. e. Migrating database instances to new hardware and new versions of software, from on-premise to cloud-based databases and vice versa.
11) Quality Analysis: a. Ability to drive service excellence and continuous improvement within the framework defined by IT Operations.
Knowledge Examples:
1) Good understanding of customer infrastructure and related CIs.
2) ITIL Foundation certification.
3) Thorough hardware knowledge.
4) Basic understanding of capacity planning.
5) Basic understanding of storage and backup.
6) Networking: a. Hands-on experience with routers, switches, and firewalls. b. Should have minimum knowledge of and hands-on experience with BGP. c. Good understanding of load balancers and WAN optimizers. d. Advanced backup and restore knowledge in backup tools.
7) Server: a. Basic to intermediate PowerShell/Bash/Python scripting knowledge and demonstrated experience in script-based tasks. b. Knowledge of AD group policy management, group policy tools, and troubleshooting GPOs. c. Basic AD object creation, DNS concepts, DHCP, DFS. d. Knowledge of tools like SCCM and SCOM administration.
8) Storage and Backup: a. Subject matter expert in any of the storage and backup technologies.
9) Tools: a. Proficient in understanding and troubleshooting the Windows and Linux families of operating systems.
10) Monitoring: a. Strong knowledge of ITIL processes and functions.
11) Database: a. Knowledge of general database management. b. Knowledge of OS, system, and networking skills.
Additional Comments: We are hiring an Observability Engineer to architect, implement, and maintain enterprise-grade monitoring solutions. You will enhance visibility into system performance, application health, and security events.
Key Responsibilities: - Build and manage observability frameworks using LogicMonitor, ServiceNow, BigPanda, NiFi - Implement log analytics via Azure Log Analytics and Azure Sentinel - Design observability dashboards using KQL, Splunk, and the Grafana suite (Alloy, Beyla, K6, Loki, Thanos, Tempo) - Manage infrastructure observability for AKS deployments - Automate observability workflows using GitHub, PowerShell, and API Management - Collaborate with DevOps and platform teams to build end-to-end visibility
Core Skills: - LogicMonitor, ServiceNow, BigPanda, NiFi - Azure Log Analytics, Azure Sentinel, KQL - Grafana suite (Alloy, Beyla, etc.), Splunk - AKS, data pipelines, GitHub - PowerShell, API Management
Preferred Skills: - Working knowledge of Cribl - Exposure to distributed tracing and advanced metrics
Soft Skills & Expectations: - Cross-team collaboration and problem-solving mindset - Self-starter and fast learner of emerging observability tools
Required Skills: BigPanda, Azure Automation, Azure

Posted 2 weeks ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer. Project Role Description: Design, build, and configure applications to meet business process and application requirements. Must-have skills: O9 Solutions. Good-to-have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years of full-time education.
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.
Roles & Responsibilities: - Play the integration consultant role on o9 implementation projects. - Understand the o9 platform's data model (table structures, linkages, pipelines, optimal designs) for designing various planning use cases. - Review and analyze the data provided by the customer along with its technical/functional intent and inter-dependencies. - Participate in the technical design and data requirements gathering, making recommendations in case of inaccurate or missing data. - Work on designing and creating batch schedules based on frequency and configuration settings for daily/weekly/quarterly/yearly batches. - E2E integration implementation from the partner system to the o9 platform.
Technical Experience: - Must have a minimum of 3 to 7 years of experience with SQL, PySpark, Python, Spark SQL, and ETL tools. - Proficiency in databases (SQL Server, Oracle, etc.). - Knowledge of DDL, DML, stored procedures. - Good to have experience in Airflow, Delta Lake, NiFi, Kafka. At least one E2E integration implementation experience is preferred. - Any API-based integration experience will be an added advantage.
Professional Experience: - Proven ability to work creatively and analytically in a problem-solving environment. - Proven ability to build, manage, and foster a team-oriented environment. - Excellent problem-solving skills with excellent written/oral communication and interpersonal skills. - Strong collaborator, team player, and individual contributor.
Additional Information: - The candidate should have a minimum of 5 years of experience in O9 Solutions. - This position is based at our Pune office. - 15 years of full-time education is required. - BE/BTech/MCA/Bachelor's/master's degree in computer science and related fields of work is preferred. - Open to travel - short/long term.
Qualification: 15 years of full-time education.

Posted 2 weeks ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Pune

Work from Office

Project Role: Application Developer. Project Role Description: Design, build, and configure applications to meet business process and application requirements. Must-have skills: O9 Solutions. Good-to-have skills: NA. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years of full-time education.
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.
Roles & Responsibilities: - Play the integration consultant role on o9 implementation projects. - Understand the o9 platform's data model (table structures, linkages, pipelines, optimal designs) for designing various planning use cases. - Review and analyze the data provided by the customer along with its technical/functional intent and inter-dependencies. - Participate in the technical design and data requirements gathering, making recommendations in case of inaccurate or missing data. - Work on designing and creating batch schedules based on frequency and configuration settings for daily/weekly/quarterly/yearly batches. - E2E integration implementation from the partner system to the o9 platform.
Technical Experience: - Must have a minimum of 3 to 7 years of experience with SQL, PySpark, Python, Spark SQL, and ETL tools. - Proficiency in databases (SQL Server, Oracle, etc.). - Knowledge of DDL, DML, stored procedures. - Good to have experience in Airflow, Delta Lake, NiFi, Kafka. At least one E2E integration implementation experience is preferred. - Any API-based integration experience will be an added advantage.
Professional Experience: - Proven ability to work creatively and analytically in a problem-solving environment. - Proven ability to build, manage, and foster a team-oriented environment. - Excellent problem-solving skills with excellent written/oral communication and interpersonal skills. - Strong collaborator, team player, and individual contributor.
Additional Information: - The candidate should have a minimum of 5 years of experience in O9 Solutions. - This position is based at our Pune office. - 15 years of full-time education is required. - BE/BTech/MCA/Bachelor's/master's degree in computer science and related fields of work is preferred. - Open to travel - short/long term.
Qualification: 15 years of full-time education.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

5 - 9 Lacs

Pune

Work from Office

Your role: This position is responsible for administering the Splunk platforms for enterprise Security Information and Event Management (SIEM). The role involves working with asset owners to ensure the timely and efficient collection of computer security events and logs for the purpose of detecting and responding to information security incidents. Maintain all components of a distributed Splunk infrastructure, including indexer clusters, search head clusters, and deployment servers. Provide overall management of the Splunk platform. Standardize Splunk forwarder deployment, configuration, and maintenance across Unix and Windows platforms. Troubleshoot Splunk server and forwarder problems and issues. Assist internal users in designing and maintaining production-quality dashboards. Monitor the Splunk infrastructure for capacity planning. Implement change requests and engineering tasks. Lead technical discussions in customer governance calls. Participate in technical audits. Identify opportunities for automation, standardization, and stabilization. Prepare/update/review runbooks, SOPs, and knowledge articles. Plan, prepare, and execute change processes and implementations. Perform OS-level performance monitoring and troubleshooting. Monitor and troubleshoot application and database layers (e.g., Apache, Tomcat, MySQL). Administer and maintain a 24/7 highly available Splunk environment. Work closely with clients, technicians, and managerial staff. Experience with Databricks, Kafka, and NiFi is an added advantage.
Your profile: Splunk administrator with 4 to 8 years' experience; dashboard and report creation and monitoring; experience with Splunk Phantom would be given preference.
Work location: Bengaluru, Mumbai, Pune & Hyderabad.
What You'll Love About Working Here: You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, or new parent support via flexible work. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

6 - 10 Lacs

Hyderabad, Ahmedabad, Gurugram

Work from Office

About the Role: Grade Level (for internal use): 10
The Role: Senior Software Developer
The Team: Do you love to collaborate and provide solutions? This team comes together across eight different locations every single day to craft enterprise-grade applications that serve a large customer base with growing demand and usage. You will use a wide range of technologies and cultivate a collaborative environment with other internal teams.
The Impact: We focus primarily on developing, enhancing, and delivering required pieces of information and functionality to internal and external clients in all client-facing applications. You will have a highly visible role where even small changes have very wide impact.
What's in it for you: - Opportunities for innovation and learning new state-of-the-art technologies - To work in a pure Agile and Scrum methodology
Responsibilities: Design and implement software-related projects. Perform analyses and articulate solutions. Design underlying engineering for use in multiple product offerings supporting a large volume of end-users. Develop project plans with task breakdowns and estimates. Manage and improve existing solutions. Solve a variety of complex problems and figure out possible solutions, weighing the costs and benefits.
What we're Looking For: Basic Qualifications: Bachelor's degree in computer science or equivalent; 7+ years of related experience; passionate, smart, and articulate developer; strong C#, .NET, and SQL skills; experience implementing Web Services (with WCF, RESTful JSON, SOAP, TCP), Windows Services, and unit tests; dependency injection; able to demonstrate strong OOP skills; able to work well individually and with a team; strong problem-solving skills; good work ethic, self-starter, and results-oriented; Agile/Scrum experience a plus. Exposure to data engineering and big data technologies like Hadoop, big data processing engines/Scala, NiFi, and ETL is a plus. Experience with container platforms is a plus. Experience working in cloud computing environments like AWS, Azure, GCP, etc.
What's In It For You
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People: Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
Health & Wellness: Health care coverage designed for the mind and body.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries
Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training, or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

Posted 2 weeks ago

Apply

8.0 - 13.0 years

12 - 19 Lacs

Hyderabad, Bengaluru

Work from Office

Relevant and total years of experience: Relevant 8+ years.
Detailed job description - Skill Set:
Job Summary: We are seeking a skilled Data Engineer with 6-8 years of experience in designing, building, and maintaining scalable data pipelines and ETL workflows. The ideal candidate will have hands-on expertise in Apache NiFi, KNIME, and other ETL tools, and will play a key role in ensuring data availability, quality, and reliability across the organization.
Key Responsibilities: Design, develop, and maintain robust ETL pipelines using Apache NiFi, KNIME, and other tools. Integrate data from various sources including APIs, databases, flat files, and cloud storage. Optimize data workflows for performance, scalability, and reliability. Collaborate with data analysts, data scientists, and business stakeholders to understand data needs. Monitor and troubleshoot data pipeline issues and ensure data quality and consistency. Implement data governance, lineage, and documentation practices. Work with cloud platforms (AWS, Azure, or GCP) for data storage and processing.
Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. 5 years of experience in data engineering or a similar role. Strong hands-on experience with Apache NiFi and KNIME for data integration and transformation. Proficiency in SQL and scripting languages like Python or Shell. Experience with relational and NoSQL databases (e.g., PostgreSQL, MongoDB). Familiarity with data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery). Understanding of data modeling, data quality, and data governance principles.
Preferred: Experience with workflow orchestration tools like Apache Airflow or Luigi. Exposure to CI/CD practices and version control (e.g., Git). Knowledge of real-time data processing frameworks (e.g., Kafka, Spark Streaming).
Mandatory Skills: ETL pipelines using Apache NiFi, KNIME; ETL; DWH; Python; SQL. Work with cloud platforms (AWS/Azure/GCP/Snowflake) for data storage and processing.
Work Location: Bangalore/Hyderabad (mandatory to go to client location/office 5 days a week).
Notice period: 20-30 days, not more than that.
WFO/WFH/Hybrid: WFO
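Illustration (not part of the posting): a minimal sketch of the extract-transform-load workflow shape described above, expressed as an Apache Airflow DAG (Airflow is listed under "Preferred"). Task logic, DAG id, and schedule are hypothetical.

```python
# Illustrative only: a minimal Airflow 2.x DAG with an extract -> transform -> load chain.
# The callables just print placeholders; in a real pipeline they would move data.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull data from source APIs / databases / flat files")

def transform(**_):
    print("clean, validate and enrich the extracted data")

def load(**_):
    print("write curated data to the warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```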

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

Atomicwork is dedicated to revolutionizing the digital workplace experience by merging people, processes, and platforms through AI automation. The team is currently focused on developing a cutting-edge service management platform that empowers businesses to streamline operations and achieve success. We are in search of a talented and driven Data Pipeline Engineer to become a part of our dynamic team. As a Data Pipeline Engineer, you will play a pivotal role in designing, constructing, and managing scalable data pipelines that support our enterprise search capabilities. Your main responsibility will involve ensuring that data from diverse sources is effectively ingested, processed, and indexed to facilitate seamless and secure search experiences throughout the organization.
Qualifications: - Proficiency in programming languages like Python, Java, or Scala. - Strong expertise in data pipeline frameworks and tools such as Apache Airflow and Apache NiFi. - Experience with search platforms like Elasticsearch or OpenSearch. - Familiarity with data ingestion, transformation, and indexing processes. - Understanding of enterprise search concepts including crawling, indexing, and query processing. - Knowledge of data security and access control best practices. - Experience with cloud platforms like AWS, GCP, or Azure and related services. - Knowledge of Model Context Protocol (MCP) is advantageous. - Strong problem-solving and analytical skills. - Excellent communication and collaboration abilities.
Responsibilities: - Design, develop, and maintain data pipelines for enterprise search applications. - Implement data ingestion processes from various sources like databases, file systems, and APIs. - Develop data transformation and enrichment processes to prepare data for indexing. - Integrate with search platforms to efficiently index and update data. - Ensure data quality, consistency, and integrity throughout the pipeline. - Monitor pipeline performance and troubleshoot issues as they arise. - Collaborate with cross-functional teams including data scientists, engineers, and product managers. - Implement security measures to safeguard sensitive data during processing and storage. - Document pipeline architecture, processes, and best practices. - Stay abreast of industry trends and advancements in data engineering and enterprise search.
At Atomicwork, you have the opportunity to contribute to the company's growth from conceptualization to production. Our cultural values emphasize self-direction, attention to detail, ownership, continuous improvement, impatience for progress, and customer obsession. We offer competitive compensation and benefits including a fantastic team environment, well-located offices in five cities, unlimited sick leave, comprehensive health insurance with 75% premium coverage, flexible allowances, and annual outings for team bonding. To apply for this role, click on the apply button, answer a few questions about yourself and your work, and await further communication from us regarding the next steps. For any additional inquiries, feel free to reach out to careers@atomicwork.com.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Role & responsibilities:
Apache NiFi: 5+ years of hands-on experience with Apache NiFi, including developing, managing, and optimizing complex data flows in production environments. Proven experience with Cloudera NiFi (CDP Data Flow) in enterprise environments, including integration with Cloudera Manager. Experience migrating NiFi flows across major version upgrades, with a strong understanding of backward compatibility. Strong proficiency in Groovy scripting, used for the ExecuteScript and InvokeScriptedProcessor processors. Solid understanding of SSH and SFTP protocols, including authentication schemes (key-based, password), session negotiation, and file permissions handling in NiFi processors (e.g., ListSFTP, FetchSFTP, PutSFTP). Good grasp of data encryption mechanisms, key management, and secure flowfile handling using processors like EncryptContent. Experience integrating NiFi with MongoDB, including reading/writing documents via processors like GetMongo, PutMongo, and QueryMongo. Experience working with Apache Kafka, including producing and consuming from Kafka topics using NiFi (PublishKafka, ConsumeKafka), and handling schema evolution with Confluent Schema Registry. Strong knowledge of Red Hat Enterprise Linux (RHEL) environments, including systemd services, filesystem permissions, log rotation, and resource tuning for JVM-based applications like NiFi.
NiFi-Specific Technical Requirements: In-depth knowledge of NiFi flow design principles, including proper use of queues, back pressure, prioritizers, and connection tuning. Mastery of controller services, including SSLContextService, DBCPConnectionPool, and RecordReader/RecordWriter services. Experience with record-based processing using Avro, JSON, and CSV schemas and record processors like ConvertRecord, QueryRecord, and LookupRecord. Ability to debug and optimize NiFi flows using Data Provenance, bulletins, and log analysis. Familiarity with custom processor development in Java/Groovy (optional but preferred). Experience setting up secure NiFi clusters, configuring user authentication (LDAP, OIDC), TLS certificates, and access policies. Proficiency in parameter contexts, variable registry, and flow versioning using NiFi Registry. Understanding of the zero-master clustering model, node coordination, and the site-to-site protocol. Experience deploying and monitoring NiFi in high-availability, production-grade environments, including using Prometheus/Grafana or Cloudera Manager for metrics and alerting.
Preferred Qualifications: Experience working in regulated or secure environments with strict data handling and audit requirements. Familiarity with DevOps workflows, including version-controlled flow templates (JSON/XML), CI/CD integration for NiFi Registry, and automated deployment strategies. Strong written and verbal communication skills, with the ability to document flows and onboard other engineers.
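Illustration (not part of the posting): the posting asks for Groovy scripting in ExecuteScript; the sketch below shows the same scripted-processor pattern using NiFi's Jython/Python script engine, since the pattern (get a FlowFile, modify attributes, route to a relationship) is the same. The attribute names and routing logic are hypothetical; session, log, REL_SUCCESS, and REL_FAILURE are provided to the script by NiFi, not imported.

```python
# Illustrative only: a minimal ExecuteScript body (Jython engine) of the kind the
# posting refers to. NiFi binds session, log, REL_SUCCESS and REL_FAILURE for us.
flow_file = session.get()
if flow_file is not None:
    try:
        filename = flow_file.getAttribute("filename")
        # Tag the FlowFile so downstream processors can route on these attributes
        flow_file = session.putAttribute(flow_file, "validated", "true")
        flow_file = session.putAttribute(flow_file, "original.name", filename or "unknown")
        session.transfer(flow_file, REL_SUCCESS)
    except Exception as e:
        log.error("scripted validation failed: {}".format(e))
        session.transfer(flow_file, REL_FAILURE)
```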

Posted 2 weeks ago

Apply

3.0 - 4.0 years

3 - 7 Lacs

Mumbai

Work from Office

Job Summary: We are seeking an experienced and motivated Data Engineer to join our growing team, preferably with experience in the Banking, Financial Services, and Insurance (BFSI) sector. The ideal candidate will have a strong background in designing, building, and maintaining robust and scalable data infrastructure. You will play a crucial role in developing our data ecosystem, ensuring data quality, and empowering data-driven decisions across the organization. This role requires hands-on experience with the Google Cloud Platform (GCP) and a passion for working with cutting-edge data technologies.
Responsibilities: Design and develop end-to-end data engineering pipelines: design, build, and maintain scalable and reliable data pipelines to ingest, process, and transform large volumes of structured and unstructured data from various sources. Implement data quality and governance: establish and enforce processes for data validation, transformation, auditing, and reconciliation to ensure data accuracy, completeness, and consistency. Build and maintain data storage solutions: design, implement, and manage data vaults and data marts to support business intelligence, analytics, and reporting requirements. Orchestrate and automate workflows: utilize workflow management tools to schedule, monitor, and automate complex data workflows and ETL processes. Optimize data infrastructure: continuously evaluate and improve the performance, reliability, and cost-effectiveness of our data infrastructure and pipelines. Collaborate with stakeholders: work closely with data analysts, data scientists, and business stakeholders to understand their data needs and deliver effective data solutions. Documentation: create and maintain comprehensive documentation for data pipelines, processes, and architectures.
Key Skills: Python: proficient in Python for data engineering tasks, including scripting, automation, and data manipulation. PySpark: strong experience with PySpark for large-scale data processing and analytics. SQL: expertise in writing complex SQL queries for data extraction, transformation, and analysis.
Tech Stack (Must Have), Google Cloud Platform (GCP): Dataproc, for managing and running Apache Spark and Hadoop clusters. Composer (Airflow), for creating, scheduling, and monitoring data workflows. Cloud Functions, for event-driven serverless data processing. Cloud Run, for deploying and scaling containerized data applications. Cloud SQL, for managing relational databases. BigQuery, for data warehousing, analytics, and large-scale SQL queries.
Qualifications: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. 3+ years of proven experience in a Data Engineer role. Demonstrable experience with the specified "must-have" tech stack. Strong problem-solving skills and the ability to work independently and as part of a team. Excellent communication and interpersonal skills.
Good to Have: Experience in the BFSI (Banking, Financial Services, and Insurance) domain. Apache NiFi: experience with data flow automation and management. Qlik: familiarity with business intelligence and data visualization tools. AWS: knowledge of Amazon Web Services data services. DevOps and FinOps: understanding of DevOps principles and practices (CI/CD, IaC) and cloud financial management (FinOps) to optimize cloud spending.
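Illustration (not part of the posting): a minimal sketch of querying BigQuery from Python with the official client library, one piece of the GCP stack listed above. The project, dataset, and table names are hypothetical.

```python
# Illustrative only: running a BigQuery SQL query from Python. Project, dataset and
# table names are hypothetical; credentials come from the environment (ADC).
from google.cloud import bigquery

client = bigquery.Client(project="my-bfsi-project")  # hypothetical project id

sql = """
    SELECT customer_id, SUM(amount) AS total_amount
    FROM `my-bfsi-project.curated.transactions`
    GROUP BY customer_id
    ORDER BY total_amount DESC
    LIMIT 10
"""

for row in client.query(sql).result():
    print(row["customer_id"], row["total_amount"])
```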

Posted 2 weeks ago

Apply

6.0 - 11.0 years

15 - 19 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Project description: During the 2008 financial crisis, many big banks failed or faced serious trouble because of liquidity issues. A lack of liquidity can kill any financial institution overnight. That is why it is so critical to constantly monitor liquidity risk and properly maintain collateral. We are looking for a number of talented developers who would like to join our team in Pune, which is building a liquidity risk and collateral management platform for one of the biggest investment banks in the world. The platform is a set of front-end tools and back-end engines. Our platform helps the bank increase efficiency and scalability, reduce operational risk, and eliminate the majority of manual interventions in processing margin calls.
Responsibilities: The candidate will work on development of new functionality for the liquidity risk platform, working closely with other teams around the globe.
Skills - Must have: Big Data experience (6+ years); Java/Python, J2EE, Spark, Hive; SQL databases; Unix shell; strong experience in Apache Hadoop, Spark, Hive, Impala, YARN, Talend, Hue; big data reporting, querying, and analysis.
Nice to have: Spark calculators based on business logic/rules; basic performance tuning and troubleshooting knowledge; experience with all aspects of the SDLC; experience with complex deployment infrastructures; knowledge of software architecture, design, and testing; data flow automation (Apache NiFi, Airflow, etc.); understanding of the difference between OOP and functional design approaches; understanding of event-driven architecture; Spring, Maven, Git, uDeploy.
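Illustration (not part of the posting): a minimal sketch of the Spark-on-Hive skill this role lists, querying a Hive table from PySpark. The database, table, and column names are hypothetical.

```python
# Illustrative only: reading a Hive table with Spark SQL. Database/table/columns
# are hypothetical; enableHiveSupport() lets Spark use the Hive metastore.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("liquidity_report")
    .enableHiveSupport()
    .getOrCreate()
)

daily_exposure = spark.sql("""
    SELECT business_date, desk, SUM(exposure_usd) AS total_exposure
    FROM risk_db.collateral_positions
    WHERE business_date = current_date()
    GROUP BY business_date, desk
""")

daily_exposure.show(truncate=False)
```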

Posted 3 weeks ago

Apply

7.0 - 12.0 years

11 - 15 Lacs

Gurugram

Work from Office

Project description: We are looking for an experienced Data Engineer to contribute to the design, development, and maintenance of our database systems. This role will work closely with our software development and IT teams to ensure the effective implementation and management of database solutions that align with the client's business objectives.
Responsibilities: The successful candidate would be responsible for managing technology in projects and providing technical guidance/solutions for work completion: (1) be responsible for providing technical guidance/solutions; (2) ensure process compliance in the assigned module and participate in technical discussions/reviews; (3) prepare and submit status reports to minimize exposure and risks on the project, or close escalations; (4) be self-organized and focused on delivering software on time and with quality.
Skills - Must have: At least 7 years of experience in development on data-specific projects. Must have working knowledge of streaming data and the Kafka framework (ksqlDB/MirrorMaker, etc.). Strong programming skills in at least one of these programming languages: Groovy/Java. Good knowledge of data structures, ETL design, and storage. Must have worked in streaming data environments and pipelines. Experience in near-real-time/streaming data pipeline development using Apache Spark, StreamSets, Apache NiFi, or similar frameworks.
Nice to have: N/A
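Illustration (not part of the posting): the role asks for near-real-time pipelines with Spark or similar frameworks; below is a minimal Spark Structured Streaming sketch that consumes a Kafka topic. The broker address, topic name, and console sink are hypothetical, and the spark-sql-kafka connector must be on the classpath.

```python
# Illustrative only: consuming a Kafka topic with Spark Structured Streaming and
# printing records to the console. Broker and topic are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream_example").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # hypothetical broker
    .option("subscribe", "events.raw")                    # hypothetical topic
    .load()
    .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")
)

query = (
    events.writeStream
    .format("console")
    .outputMode("append")
    .option("truncate", "false")
    .start()
)
query.awaitTermination()
```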

Posted 3 weeks ago

Apply

7.0 - 12.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Date: 25 Jun 2025 | Location: Bangalore, KA, IN | Company: Alstom

At Alstom, we understand transport networks and what moves people. From high-speed trains, metros, monorails, and trams, to turnkey systems, services, infrastructure, signalling, and digital mobility, we offer our diverse customers the broadest portfolio in the industry. Every day, 80,000 colleagues lead the way to greener and smarter mobility worldwide, connecting cities as we reduce carbon and replace cars.

Your future role
Take on a new challenge and apply your data engineering expertise in a cutting-edge field. You'll work alongside collaborative and innovative teammates. You'll play a key role in enabling data-driven decision-making across the organization by ensuring data availability, quality, and accessibility. Day-to-day, you'll work closely with teams across the business (e.g., Data Scientists, Analysts, and ML Engineers), mentor junior engineers, and contribute to the architecture and design of our data platforms and solutions. You'll specifically take care of designing and developing scalable data pipelines, as well as managing and optimizing object storage systems.

We'll look to you for:
Designing, developing, and maintaining scalable and efficient data pipelines using tools like Apache NiFi and Apache Airflow.
Creating robust Python scripts for data ingestion, transformation, and validation (see the illustrative sketch after this listing).
Managing and optimizing object storage systems such as Amazon S3, Azure Blob, or Google Cloud Storage.
Collaborating with Data Scientists and Analysts to understand data requirements and deliver production-ready datasets.
Implementing data quality checks, monitoring, and alerting mechanisms.
Ensuring data security, governance, and compliance with industry standards.
Mentoring junior engineers and promoting best practices in data engineering.

All about you
We value passion and attitude over experience. That's why we don't expect you to have every single skill. Instead, we've listed some that we think will help you succeed and grow in this role:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
7+ years of experience in data engineering or a similar role.
Strong proficiency in Python and data processing libraries (e.g., Pandas, PySpark).
Hands-on experience with Apache NiFi for data flow automation.
Deep understanding of object storage systems and cloud data architectures.
Proficiency in SQL and experience with both relational and NoSQL databases.
Familiarity with cloud platforms (AWS, Azure, or GCP).
Exposure to the Data Science ecosystem, including tools like Jupyter, scikit-learn, TensorFlow, or MLflow.
Experience working in cross-functional teams with Data Scientists and ML Engineers.
Cloud certifications or relevant technical certifications are a plus.

Things you'll enjoy
Join us on a life-long transformative journey: the rail industry is here to stay, so you can grow and develop new skills and experiences throughout your career. You'll also:
Enjoy stability, challenges, and a long-term career free from boring daily routines.
Work with advanced data and cloud technologies to drive innovation.
Collaborate with cross-functional teams and helpful colleagues.
Contribute to innovative projects that have a global impact.
Utilise our flexible and hybrid working environment.
Steer your career in whatever direction you choose across functions and countries.
Benefit from our investment in your development, through award-winning learning programs.
Progress towards leadership roles or specialized technical paths.
Benefit from a fair and dynamic reward package that recognises your performance and potential, plus comprehensive and competitive social coverage (life, medical, pension).

You don't need to be a train enthusiast to thrive with us. We guarantee that when you step onto one of our trains with your friends or family, you'll be proud. If you're up for the challenge, we'd love to hear from you!

Important to note
As a global business, we're an equal-opportunity employer that celebrates diversity across the 63 countries we operate in. We're committed to creating an inclusive workplace for everyone.
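The ingestion-and-validation scripting this role describes can look roughly like the following. This is a minimal sketch, assuming a CSV feed landing in Amazon S3; the bucket names, object keys, column names, and range rules are illustrative assumptions, not details from the posting.

```python
"""Minimal ingest-validate-load sketch, assuming an S3 source.

Bucket, key, and column names below are hypothetical examples.
"""
import io

import boto3
import pandas as pd

S3_BUCKET = "example-raw-zone"              # hypothetical bucket
S3_KEY = "telemetry/sensor_readings.csv"    # hypothetical object key


def extract(bucket: str, key: str) -> pd.DataFrame:
    """Download a CSV object from S3 and parse it into a DataFrame."""
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    return pd.read_csv(io.BytesIO(body))


def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Apply simple quality checks: required columns, no null keys, sane ranges."""
    required = {"sensor_id", "timestamp", "value"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"Missing required columns: {missing}")
    df = df.dropna(subset=["sensor_id", "timestamp"])
    return df[df["value"].between(-1000, 1000)]  # illustrative range check


def load(df: pd.DataFrame, bucket: str, key: str) -> None:
    """Write the validated data back to a curated prefix as Parquet."""
    buffer = io.BytesIO()
    df.to_parquet(buffer, index=False)
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=buffer.getvalue())


if __name__ == "__main__":
    frame = validate(extract(S3_BUCKET, S3_KEY))
    load(frame, "example-curated-zone", "telemetry/sensor_readings.parquet")
```

In an Airflow deployment each of these functions would typically become a task in a DAG; in NiFi, a similar flow might be built from ListS3/FetchS3Object-style processors with a scripted validation step in between.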

Posted 3 weeks ago

Apply

2.0 - 7.0 years

7 - 11 Lacs

Pune

Work from Office

JD: Big Data and Hadoop
Job Location: Pune
1. Experience in full stack software development with a focus on data-driven applications
2. Experience with scale-out technologies like Hadoop, NiFi, Hive, and Spark on-premise (with Java as a language)
3. Strong understanding of database technologies, proficiency with SQL (preferably Oracle)
4. Experience working with Spring Boot and Java technologies
5. Background in big data with hands-on experience in Spark development
6. Knowledge of the internals of technologies in the Hadoop ecosystem is a plus
7. Proficiency with advanced object-oriented programming
8. Excellent problem-solving and analytical skills
9. Excellent written and oral communication skills
10. Ability to mentor junior team members on the Hadoop ecosystem

Posted 3 weeks ago

Apply

5.0 - 6.0 years

8 - 14 Lacs

Hyderabad

Work from Office

- Architect and optimize distributed data processing pipelines leveraging PySpark for high-throughput, low-latency workloads (a minimal sketch follows this listing).
- Utilize the Apache big data stack (Hadoop, Hive, HDFS) to orchestrate ingestion, transformation, and governance of massive datasets.
- Engineer fault-tolerant, production-grade ETL frameworks ensuring seamless scalability and system resilience.
- Interface cross-functionally with Data Scientists and domain experts to translate analytical needs into performant data solutions.
- Enforce rigorous data quality controls and lineage mechanisms to uphold auditability and regulatory compliance.
- Contribute to core architectural design, implement clean and modular Python/Java code, and drive performance benchmarking at scale.

Required Skills:
- 5-7 years of experience.
- Strong hands-on experience with PySpark for distributed data processing.
- Deep understanding of the Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.).
- Solid grasp of data warehousing, ETL principles, and data modeling.
- Experience working with large-scale datasets and performance optimization.
- Familiarity with SQL and NoSQL databases.
- Proficiency in Python and basic to intermediate knowledge of Java.
- Experience using version control tools like Git and CI/CD pipelines.

Nice-to-Have Skills:
- Working experience with Apache NiFi for data flow orchestration.
- Experience in building real-time streaming data pipelines.
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Familiarity with containerization tools like Docker or orchestration tools like Kubernetes.

If you are interested in the above roles and responsibilities, please share your updated resume along with the following details:
Name as per Aadhaar card:
Mobile Number:
Alternative Mobile:
Mail ID:
Alternative Mail ID:
Date of Birth:
Total EXP:
Relevant EXP:
Current CTC:
ECTC:
Notice Period (LWD):
Updated Resume:
Holding Offer (if any):
Interview Availability:
PF / UAN Number:
Any Career / Education Gap:
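As a rough illustration of the batch PySpark ETL work described above, here is a minimal sketch. The database, table, column names, and partitioning scheme are assumptions for illustration, not part of the posting.

```python
"""Minimal PySpark ETL sketch: read a raw Hive table, clean it, write partitioned Parquet.

All paths, table names, and columns are hypothetical.
"""
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl-sketch")
    .enableHiveSupport()          # assumes a Hive metastore is configured
    .getOrCreate()
)

# Extract: a hypothetical raw Hive table
raw = spark.table("raw_db.orders")

# Transform: drop duplicates and bad records, derive a partition key
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("order_month", F.date_format("order_date", "yyyy-MM"))
)

# Load: write back as Parquet partitioned by month for efficient downstream queries
(
    clean.write
         .mode("overwrite")
         .partitionBy("order_month")
         .parquet("hdfs:///warehouse/curated/orders")
)

spark.stop()
```

In practice the write mode, partition column, and target format (plain Parquet vs. Hive or Delta tables) would be chosen per workload; this only shows the shape of a batch PySpark job.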

Posted 3 weeks ago

Apply

10.0 - 15.0 years

35 - 40 Lacs

Noida

Work from Office

Description: We are seeking a seasoned Manager – Data Engineering with strong experience in Databricks or the Apache data stack to lead complex data platform implementations. You will be responsible for leading high-impact data engineering engagements for global clients, delivering scalable solutions, and driving digital transformation.

Requirements:
Required Skills & Experience:
• 12–18 years of total experience in data engineering, including 3–5 years in a leadership/managerial role.
• Hands-on experience in Databricks OR the core Apache stack – Spark, Kafka, Hive, Airflow, NiFi, etc.
• Expertise in one or more cloud platforms: AWS, Azure, or GCP – ideally with Databricks on cloud.
• Strong programming skills in Python, Scala, and SQL.
• Experience in building scalable data architectures, delta lakehouses, and distributed data processing.
• Familiarity with modern data governance, cataloging, and data observability tools.
• Proven experience managing delivery in an onshore-offshore or hybrid model.
• Strong communication, stakeholder management, and team mentoring capabilities.

Job Responsibilities:
Key Responsibilities:
• Lead the architecture, development, and deployment of modern data platforms using Databricks, Apache Spark, Kafka, Delta Lake, and other big data tools (a minimal Delta Lake sketch follows this listing).
• Design and implement data pipelines (batch and real-time), data lakehouses, and large-scale ETL frameworks.
• Own delivery accountability for data engineering programs across BFSI, telecom, healthcare, or manufacturing clients.
• Collaborate with global stakeholders, product owners, architects, and business teams to understand requirements and deliver data-driven outcomes.
• Ensure best practices in DevOps, CI/CD, infrastructure-as-code, data security, and governance.
• Manage and mentor a team of 10–25 engineers, conducting performance reviews, capability building, and coaching.
• Support presales activities including solutioning, technical proposals, and client workshops.

What We Offer:
Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities!
Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidised rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can have coffee or tea with colleagues over a game, and we offer discounts for popular stores and restaurants!
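For context on the Databricks/Delta Lake work this role covers, here is a minimal PySpark sketch of a batch upsert (MERGE) into a Delta table. It assumes the open-source delta-spark package is installed; the paths and column names are hypothetical.

```python
"""Minimal Delta Lake upsert (MERGE) sketch with PySpark.

Assumes delta-spark is available; paths and columns are hypothetical.
"""
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("delta-merge-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

TARGET_PATH = "/mnt/lakehouse/silver/customers"   # hypothetical Delta table path

# Incremental batch of changed rows (hypothetical source)
updates = spark.read.parquet("/mnt/lakehouse/bronze/customers_changes")

if DeltaTable.isDeltaTable(spark, TARGET_PATH):
    target = DeltaTable.forPath(spark, TARGET_PATH)
    # Upsert: update matching customer_id rows, insert the rest
    (
        target.alias("t")
        .merge(updates.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )
else:
    # First load: create the Delta table from the batch
    updates.write.format("delta").mode("overwrite").save(TARGET_PATH)
```

On Databricks itself the two session configuration lines are unnecessary, since Delta Lake support is built into the runtime.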

Posted 3 weeks ago

Apply

8.0 - 10.0 years

10 - 12 Lacs

Hyderabad

Work from Office

Details of the role: 8 to 10 years of experience as an Informatica Admin (IICS).

Key responsibilities:
Understand the program's service catalog and document the list of tasks that have to be performed for each.
Lead the design, development, and maintenance of ETL processes to extract, transform, and load data from various sources into our data warehouse.
Implement best practices for data loading, ensuring optimal performance and data quality.
Utilize your expertise in IDMC to establish and maintain data governance, data quality, and metadata management processes.
Implement data controls to ensure compliance with data standards, security policies, and regulatory requirements.
Collaborate with data architects to design and implement scalable and efficient data architectures that support business intelligence and analytics requirements.
Work on data modeling and schema design to optimize database structures for ETL processes.
Identify and implement performance optimization strategies for ETL processes, ensuring timely and efficient data loading.
Troubleshoot and resolve issues related to data integration and performance bottlenecks.
Collaborate with cross-functional teams, including data scientists, business analysts, and other engineering teams, to understand data requirements and deliver effective solutions.
Provide guidance and mentorship to junior members of the data engineering team.
Create and maintain comprehensive documentation for ETL processes, data models, and data flows.
Ensure that documentation is kept up to date with any changes to data architecture or ETL workflows.
Use Jira for task tracking and project management.
Implement data quality checks and validation processes to ensure data integrity and reliability (see the illustrative sketch after this listing).
Maintain detailed documentation of data engineering processes and solutions.

Required Skills:
Bachelor's degree in Computer Science, Engineering, or a related field.
Proven experience as a Senior ETL Data Engineer, with a focus on IDMC / IICS.
Strong proficiency in ETL tools and frameworks (e.g., Informatica Cloud, Talend, Apache NiFi).
Expertise in IDMC principles, including data governance, data quality, and metadata management.
Solid understanding of data warehousing concepts and practices.
Strong SQL skills and experience working with relational databases.
Excellent problem-solving and analytical skills.
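To illustrate the kind of post-load quality checks mentioned above, here is a minimal generic sketch. The connection (SQLite as a stand-in), table name, and rules are hypothetical and not specific to Informatica or IDMC.

```python
"""Minimal post-load data quality check sketch against a SQL-accessible store.

The table name, connection, and rules are hypothetical examples.
"""
import sqlite3  # stand-in for any DB-API connection to the warehouse

CHECKS = {
    "row_count_nonzero": "SELECT COUNT(*) FROM stg_orders",
    "no_null_keys":      "SELECT COUNT(*) FROM stg_orders WHERE order_id IS NULL",
    "no_future_dates":   "SELECT COUNT(*) FROM stg_orders WHERE order_date > DATE('now')",
}


def run_checks(conn) -> dict:
    """Run each check and report pass/fail."""
    results = {}
    cur = conn.cursor()
    for name, sql in CHECKS.items():
        count = cur.execute(sql).fetchone()[0]
        # The first check must find rows; the others must find zero violations.
        passed = count > 0 if name == "row_count_nonzero" else count == 0
        results[name] = {"value": count, "passed": passed}
    return results


if __name__ == "__main__":
    connection = sqlite3.connect("warehouse.db")  # placeholder connection
    for check, outcome in run_checks(connection).items():
        print(f"{check}: {'PASS' if outcome['passed'] else 'FAIL'} ({outcome['value']})")
```

In an IDMC/IICS environment, checks like these are usually configured through data quality assets rather than hand-written scripts; the sketch only shows the kind of rules involved.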

Posted 3 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Apache NiFi
5+ years of hands-on experience with Apache NiFi, including developing, managing, and optimizing complex data flows in production environments.
Proven experience with Cloudera NiFi (CDP Data Flow) in enterprise environments, including integration with Cloudera Manager.
Experience migrating NiFi flows across major version upgrades, with a strong understanding of backward compatibility.
Strong proficiency in Groovy scripting, used for ExecuteScript and InvokeScriptedProcessor processors (see the illustrative sketch after this listing).
Solid understanding of SSH and SFTP protocols, including authentication schemes (key-based, password), session negotiation, and file permissions handling in NiFi processors (e.g., ListSFTP, FetchSFTP, PutSFTP).
Good grasp of data encryption mechanisms, key management, and secure flowfile handling using processors like EncryptContent.
Experience integrating NiFi with MongoDB, including reading/writing documents via processors like GetMongo, PutMongo, and QueryMongo.
Experience working with Apache Kafka, including producing to and consuming from Kafka topics using NiFi (PublishKafka, ConsumeKafka), and handling schema evolution with Confluent Schema Registry.
Strong knowledge of Red Hat Enterprise Linux (RHEL) environments, including systemd services, filesystem permissions, log rotation, and resource tuning for JVM-based applications like NiFi.

NiFi-Specific Technical Requirements:
In-depth knowledge of NiFi flow design principles, including proper use of queues, back pressure, prioritizers, and connection tuning.
Mastery of controller services, including SSLContextService, DBCPConnectionPool, and RecordReader/RecordWriter services.
Experience with record-based processing using Avro, JSON, and CSV schemas and Record processors like ConvertRecord, QueryRecord, and LookupRecord.
Ability to debug and optimize NiFi flows using Data Provenance, bulletins, and log analysis.
Familiarity with custom processor development in Java/Groovy (optional but preferred).
Experience setting up secure NiFi clusters, configuring user authentication (LDAP, OIDC), TLS certificates, and access policies.
Proficiency in parameter contexts, the variable registry, and flow versioning using NiFi Registry.
Understanding of the Zero-Master clustering model, node coordination, and the site-to-site protocol.
Experience deploying and monitoring NiFi in high-availability, production-grade environments, including using Prometheus/Grafana or Cloudera Manager for metrics and alerting.

Preferred Qualifications:
Experience working in regulated or secure environments with strict data handling and audit requirements.
Familiarity with DevOps workflows, including version-controlled flow templates (JSON/XML), CI/CD integration for NiFi Registry, and automated deployment strategies.
Strong written and verbal communication skills, with the ability to document flows and onboard other engineers.
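The ExecuteScript work mentioned above follows a standard session/flowfile pattern regardless of scripting language. The posting asks for Groovy; below is a minimal sketch of the same pattern in Jython (also supported by ExecuteScript), kept in Python for consistency with the other sketches on this page. The content transformation and attribute name are illustrative assumptions.

```python
# Minimal NiFi ExecuteScript body (script engine set to "python"/Jython).
# Groovy is what the role calls for; this shows the equivalent session/flowfile
# pattern in Jython. Attribute name and transformation are illustrative only.
from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import StreamCallback


class UpperCaseContent(StreamCallback):
    """Rewrite the flowfile content in upper case."""

    def process(self, inputStream, outputStream):
        text = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
        outputStream.write(bytearray(text.upper().encode("utf-8")))


flowFile = session.get()
if flowFile is not None:
    # Transform the content, stamp an attribute, and route to success
    flowFile = session.write(flowFile, UpperCaseContent())
    flowFile = session.putAttribute(flowFile, "processed.by", "executescript-sketch")
    session.transfer(flowFile, REL_SUCCESS)
```

A Groovy version is structurally identical: get the flowfile from the session, rewrite its content inside a StreamCallback, then transfer to REL_SUCCESS (or REL_FAILURE on error).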

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies