Jobs
Interviews

128 Apache NiFi Jobs - Page 3

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

30 - 35 Lacs

Bengaluru

Hybrid

Position Purpose

The DPR team member guarantees application production, providing both technical and functional support for a given Business Line or Business Line application chain, and manages the IT assets of the infrastructure (IaaS, PaaS, CaaS and IT cloud). He/she ensures that the service is maintained in operational condition, that processes run smoothly (batch processing, transfers, etc.), and that applications function correctly in the functional sense of the term (relevant business data, functional results, correct execution of application workflows, etc.). In a context of accelerating change, he/she manages the various releases within his/her scope. He/she intervenes not only in the event of incidents to restore the service, but also preventively, using data analysis and automation to improve service quality and the user experience. He/she acts as an expert with IT Ops, particularly on production and infrastructure requirements, and as the guarantor of Production throughout the project cycle. He/she is a key player in taking production requirements into account: contributing to the definition of the technical solution, the stability of the technical environments (Production / non-Production), and the monitoring and supervision of the technical solutions. He/she participates in project team meetings and is involved in activities and collective decisions (assessment of complexity, risks, etc.) throughout the project.

Responsibilities

Direct Responsibilities:
- Design and develop real-time and batch data pipelines using tools like Apache NiFi, Apache Kafka, and Apache Airflow.
- Implement and maintain ETL/ELT processes, ensuring data quality and integrity across various data sources.
- Integrate and manage data flows across diverse systems using NiFi and Kafka topics.
- Monitor and optimize the performance of data pipelines and troubleshoot issues proactively.
- Work with the ELK stack (Elasticsearch, Logstash, Kibana) for logging, monitoring, and real-time analytics.
- Write efficient and optimized SQL queries for data extraction, transformation, and reporting.
- Collaborate with data analysts and business stakeholders to develop meaningful Power BI dashboards and reports.
- Maintain documentation for data workflows, processes, and architecture.

Required Skills:
- Strong experience with Apache NiFi for data flow management.
- Expertise in Kafka (streaming data, Kafka Connect, schema management).
- Proficiency in Apache Airflow for orchestrating workflows.
- Solid understanding of SQL (writing complex queries, tuning, indexing).
- Experience working with the ELK stack: Elasticsearch for search and analytics, Logstash for ingestion, Kibana for visualization.
- Experience with Power BI: data modeling, DAX, dashboard development.
- Knowledge of data warehousing concepts, data modeling, and data lake architecture.
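By way of illustration (not part of the posting), here is a minimal sketch of the kind of Airflow orchestration this role describes, assuming Airflow 2.x; the DAG id, task names, and schedule are hypothetical:

```python
# Illustrative sketch only: a two-step daily batch pipeline in Airflow 2.x.
# All names (orders_batch_pipeline, extract/load callables) are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    # Placeholder: pull a batch from a source system (e.g., via NiFi or JDBC).
    print("extracting orders batch")


def load_warehouse():
    # Placeholder: load transformed records into the warehouse.
    print("loading warehouse")


with DAG(
    dag_id="orders_batch_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # run once per day
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    load = PythonOperator(task_id="load", python_callable=load_warehouse)
    extract >> load               # extract must finish before load starts
```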

Posted 3 weeks ago

Apply

3.0 - 8.0 years

14 - 18 Lacs

Noida, Mumbai, Chennai

Work from Office

Application Support Engineer (Python + NiFi), Exp. 3+ Yrs. Location: Chennai/Noida/Mumbai. CTC: 14-18 LPA. Requires Python code-reading ability; should be able to troubleshoot, analyse logs, support applications, and coordinate with development teams for issue resolution.
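For context, a small sketch of the kind of log triage such a support role might script (not from the posting; the file path and log format are hypothetical):

```python
# Illustrative only: count ERROR lines per component in an application log.
# Assumes lines of the form "... ERROR <component> ...".
import re
from collections import Counter

ERROR_RE = re.compile(r"ERROR\s+(\S+)")

def summarize_errors(path: str) -> Counter:
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = ERROR_RE.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    # "app.log" is a hypothetical file name.
    for component, n in summarize_errors("app.log").most_common(10):
        print(f"{component}: {n}")
```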

Posted 3 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Mumbai

Work from Office

Role Overview: Lead the architectural design and implementation of a secure, scalable Cloudera-based Data Lakehouse for one of India’s top public sector banks.

Key Responsibilities:
* Design end-to-end Lakehouse architecture on Cloudera
* Define data ingestion, processing, storage, and consumption layers
* Guide data modeling, governance, lineage, and security best practices
* Define migration roadmap from existing DWH to CDP
* Lead reviews with client stakeholders and engineering teams

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
* Proven experience with Cloudera CDP, Spark, Hive, HDFS, Iceberg
* Deep understanding of Lakehouse patterns and data mesh principles
* Familiarity with data governance tools (e.g., Apache Atlas, Collibra)
* Banking/FSI domain knowledge highly desirable.

Posted 4 weeks ago

Apply

10.0 - 15.0 years

35 - 50 Lacs

Mumbai

Work from Office

Overview of the Company: Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it is the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom: it is a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries.

Team Overview: The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution!

About the Role
Title: Lead Data Engineer
Location: Mumbai

Responsibilities:
- End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow.
- Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the evolution of the team's data pipeline framework.
- Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise.
- Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices.
- Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights.
- Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth.

Qualification Details:
- Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field.
- Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts.
- Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.).
- Database Expertise: Excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus.
- End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks, including streaming real-time data.
- Cloud Expertise: Knowledge of cloud technologies such as Azure HDInsight, Synapse, and Event Hubs, and GCP Dataproc, Dataflow, and BigQuery.
- CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation.

Desired Skills & Attributes:
- Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively.
- Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders).
- Continuous Learning & Adaptability: A demonstrated passion for staying up-to-date with emerging data technologies and a willingness to adapt to new tools.
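As an editorial illustration of the streaming work this role lists (Kafka into Spark), here is a minimal PySpark Structured Streaming sketch; the broker, topic, and sink paths are hypothetical:

```python
# Illustrative only: read a Kafka topic with Spark Structured Streaming and
# persist the payloads as Parquet. Broker/topic/paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Kafka delivers key/value as binary; cast the payload to string for parsing.
parsed = events.select(col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/events")                    # hypothetical sink
    .option("checkpointLocation", "/data/checkpoints/events")
    .start()
)
query.awaitTermination()
```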

Posted 4 weeks ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Bengaluru

Work from Office

8+ years of experience in software engineering with expertise in Java Spring Boot and cloud-native design. Design and implement distributed systems using Java Spring Boot, REST APIs, and cloud-native tooling.

Required Candidate Profile: Proven experience architecting large-scale, event-driven systems with Kafka, RabbitMQ, or similar technologies; microservices, CQRS, and Event Sourcing in production environments.

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Role & Responsibilities: We're looking for an experienced Apache NiFi Developer to join our dynamic team!
- Design and implement end-to-end integration solutions using Apache NiFi and MiNiFi, with a focus on failover scenarios and high availability.
- Develop robust microservices using Java and Spring Boot, ensuring security, scalability, and performance.
- Leverage SSL/TLS, cryptography, and secure protocols (SFTP, Site-to-Site) to safeguard data integrity.
- Architect and optimize distributed systems using ZooKeeper and modern microservices practices.
- Collaborate in Agile teams to deliver high-quality code, adhering to best practices in testing, documentation, and security.

What We're Looking For:
- 5+ years of experience with NiFi processors and Java/Spring Boot microservices.
- 3+ years in application security, including SSL certificates and cryptography.
- 2+ years in designing distributed architectures.
- Banking/Financial domain experience is a plus!
- Strong understanding of resiliency, monitoring, and production-grade systems.
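For illustration (not from the posting), a small sketch of the kind of operational check a NiFi developer might automate against the NiFi REST API over TLS; the host, CA bundle, and response fields are assumptions and endpoint details vary by NiFi version:

```python
# Illustrative only: poll the NiFi REST API for the root process group's flow
# status. URL and certificate paths are hypothetical placeholders.
import requests

NIFI_URL = "https://nifi.example.com:8443/nifi-api"  # hypothetical host

def root_group_flow(session: requests.Session) -> dict:
    # /flow/process-groups/root returns the top-level flow in recent NiFi versions.
    resp = session.get(f"{NIFI_URL}/flow/process-groups/root", timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    with requests.Session() as s:
        s.verify = "/path/to/ca.pem"  # hypothetical CA bundle for TLS verification
        flow = root_group_flow(s)
        # "processGroupFlow" is the expected top-level key; adjust per version.
        print(flow.get("processGroupFlow", {}).get("id"))
```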

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad, Bengaluru

Work from Office

Job Summary: Synechron is seeking an experienced Big Data Developer with strong expertise in Spark, Scala, and Python to lead and contribute to large-scale data projects. The role involves designing, developing, and implementing robust data solutions that leverage emerging technologies to enhance business insights and operational efficiency. The successful candidate will play a key role in driving innovation, mentoring team members, and ensuring the delivery of high-quality data products aligned with organizational objectives.

Software Requirements
Required:
- Apache Spark (latest stable version)
- Scala (version 2.12 or higher)
- Python (version 3.6 or higher)
- Big Data tools and frameworks supporting Spark and Scala
Preferred:
- Cloud platforms such as AWS, Azure, or GCP for data deployment
- Data processing or orchestration tools like Kafka, Hadoop, or Airflow
- Data visualization tools for data insights

Overall Responsibilities:
- Lead the development and implementation of data pipelines and solutions using Spark, Scala, and Python
- Collaborate with business and technology teams to understand data requirements and translate them into scalable solutions
- Mentor and guide junior team members on best practices in big data development
- Evaluate and recommend new technologies and tools to improve data processing and quality
- Stay informed about industry trends and emerging technologies relevant to big data and analytics
- Ensure timely delivery of data projects with high standards of quality, performance, and security
- Lead technical reviews and code reviews, and provide inputs to improve overall development standards and practices
- Contribute to architecture design discussions and assist in establishing data governance standards

Technical Skills (by Category):
- Programming Languages: Essential: Spark (Scala), Python. Preferred: knowledge of Java or other JVM languages.
- Data Management & Databases: Experience with distributed data storage solutions (HDFS, S3, etc.); familiarity with NoSQL databases (e.g., Cassandra, HBase) and relational databases for data integration.
- Cloud Technologies (preferred): Cloud platforms (AWS, Azure, GCP) for data processing, storage, and deployment.
- Frameworks & Libraries: Spark MLlib, Spark SQL, Spark Streaming; data processing libraries in Python (pandas, PySpark).
- Development Tools & Methodologies: Version control (Git, Bitbucket); Agile methodologies (Scrum, Kanban); data pipeline orchestration tools (Apache Airflow, NiFi).
- Security & Compliance: Understanding of data security best practices and data privacy regulations.

Experience Requirements:
- 5 to 10 years of hands-on experience in big data development and architecture
- Proven experience in designing and developing large-scale data pipelines using Spark, Scala, and Python
- Demonstrated ability to lead technical projects and mentor team members
- Experience working with cross-functional teams including data analysts, data scientists, and business stakeholders
- Track record of delivering scalable, efficient, and secure data solutions in complex environments

Day-to-Day Activities:
- Develop, test, and optimize scalable data pipelines using Spark, Scala, and Python
- Collaborate with data engineers, analysts, and stakeholders to gather requirements and translate them into technical solutions
- Lead code reviews, mentor junior team members, and enforce coding standards
- Participate in architecture design and recommend best practices in big data development
- Monitor data workflow performance and troubleshoot issues to ensure data quality and reliability
- Stay updated with industry trends and evaluate new tools and frameworks for potential implementation
- Document technical designs, data flows, and implementation procedures
- Contribute to continuous improvement initiatives to optimize data processing workflows

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Relevant certifications in cloud platforms, big data, or programming languages are advantageous
- Continuous learning on innovative data technologies and frameworks

Professional Competencies:
- Strong analytical and problem-solving skills with a focus on scalable data solutions
- Leadership qualities with the ability to guide and mentor team members
- Excellent communication skills to articulate technical concepts to diverse audiences
- Ability to work collaboratively in cross-functional teams and fast-paced environments
- Adaptability to evolving technologies and industry trends
- Strong organizational skills for managing multiple projects and priorities
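As an editorial aside, a minimal PySpark batch job of the kind this role involves; the input/output paths and column names are hypothetical:

```python
# Illustrative only: batch aggregation of completed orders into daily revenue.
# Paths and columns (status, created_at, amount) are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

orders = spark.read.parquet("/data/raw/orders")  # hypothetical input

daily_revenue = (
    orders.where(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("created_at").alias("day"))
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Overwrite the curated output so reruns are idempotent.
daily_revenue.write.mode("overwrite").parquet("/data/curated/daily_revenue")
```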

Posted 1 month ago

Apply

8.0 - 12.0 years

17 - 20 Lacs

Bengaluru

Work from Office

We are currently seeking an experienced and visionary Data Architect to join our team. The successful candidate will lead the design and implementation of scalable and innovative data solutions. This role requires collaboration with various experts, including IoT specialists, data scientists, software engineers, and API architects, to develop high-quality data-driven platforms.

RESPONSIBILITIES:
- Define, develop, manage and sustain core components of the data platform, such as: multi-tenant data collection and storage; multi-tenant streaming and data processing; a shared data model, including NoSQL modeling; data management and security components; multi-tenant and customizable analytical dashboards.
- Develop IT solutions with partners (startups, IT companies, other industrial companies, suppliers, universities, research institutes) to package a Software as a Service (SaaS) offering alongside a technical team of Data/DevOps engineers.
- Lead the design and architecture deployment of next-generation data solutions within a lean startup pathway, collaborating with data engineers, DevOps, engineering & mobility experts, data scientists, software engineers and HMI designers.
- Understand customer needs and provide an architecture design to meet these requirements.
- Own the overall technical vision of the solution regarding scalability, security, performance, reliability, and recovery.
- Build multi-tenant streaming and data processing capabilities in batch and near-real-time flows.
- Evaluate the opportunities from emerging technologies.
- Apply strong testing and quality assurance practices.

EDUCATION: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Preferred: Data Science and/or Machine Learning, Cybersecurity.

Competencies & Skills:
- Extensive knowledge of data modeling, analytics, and software architecture (preferably Java and Python).
- Proven experience in designing and developing scalable solutions with data processing engines for production environments (e.g., Apache Spark).
- Experience in various data management platforms and cloud technologies (e.g., Apache NiFi, Kubernetes, Docker, Elasticsearch).
- Experience in designing databases (MySQL, PostgreSQL, Cassandra, MongoDB) and eliminating performance bottlenecks.
- Knowledge of cloud technologies (Azure, AWS, GCP), Microsoft Fabric, lakehouse, and Delta Lake.
- Knowledge of data science/machine learning or experience designing data pipelines for ML models.
- Knowledge of network and security: SSL, certificates, IPSEC, Active Directory, LDAP.
- Experience using data governance tools: Collibra, Apache Atlas.
- Knowledge of the Elastic/ELK stack.
- Knowledge of Machine Learning with scikit-learn, R, TensorFlow, or another AI framework or toolkit.
- Proven experience in deploying and maintaining solutions in cloud and/or on-premise environments.
- Proven experience in providing technical guidance to teams, including worldwide teams.
- Proven experience in managing customer expectations.
- Proven track record of driving decisions collaboratively, resolving conflicts, and ensuring follow-through.
- Extensive knowledge of data processing and software development in a Python or Java/Scala environment.
- Proven experience in designing stable solutions, testing, and debugging.
- Demonstrated teamwork and collaboration in a professional setting.
- Proficient in English; proficiency in French is a plus.

Performance Measurements:
- On-Time Delivery (OTD) of developments
- Quality, Cost, and Delivery (QCD)

Posted 1 month ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: AWS Glue
Good to have skills: Microsoft SQL Server, Python (Programming Language), Data Engineering
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Project: Developing a customer insights platform that will provide an ID graph and digital customer view to help drive improvements in marketing decisions.

Responsibilities:
- Design, build, and maintain data pipelines using AWS services (Glue, Neptune, S3).
- Participate in code reviews, testing, and optimization of data pipelines.
- Collaborate with stakeholders to understand data requirements and translate them into technical solutions.

Requirements:
- Proven experience as a Senior Data Engineer / Data Architect, or similar role.
- Knowledge of data governance and security practices.
- Extensive experience with data lake technologies (NiFi, Spark, Hive Metastore, object storage, Delta Lake framework).
- Extensive experience with AWS cloud services, including AWS Glue, Neptune, S3 and Lambda.
- Experience with AWS Neptune or other graph database technologies.
- Experience in data modelling and design.
- Experience with event-driven architecture.
- Experience with Python.
- Experience with SQL.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.

Nice to have:
- Experience with observability solutions (Splunk, New Relic).
- Experience with Infrastructure as Code (Terraform, CloudFormation).
- Experience with CI/CD (Jenkins).
- Experience with Kubernetes.
- Familiarity with data visualization tools.

Support Engineer: Similar skills as the above, but with more of a support focus; able to troubleshoot, patch and upgrade, and deliver minor enhancements and fixes to the infrastructure and pipelines. Experience with observability, CloudWatch, New Relic and monitoring.

Qualification: 15 years full time education

Posted 1 month ago

Apply

7.0 - 12.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Data Modeller JD:

We are seeking a skilled Data Modeller to join our Corporate Banking team. The ideal candidate will have a strong background in creating data models for various banking services, including Current Account Savings Account (CASA), Loans, and Credit Services. This role involves collaborating with the Data Architect to define data model structures within a data mesh environment and coordinating with multiple departments to ensure cohesive data management practices.

Data Modelling:
- Design and develop data models for CASA, Loan, and Credit Services, ensuring they meet business requirements and compliance standards.
- Create conceptual, logical, and physical data models that support the bank's strategic objectives.
- Ensure data models are optimized for performance, security, and scalability to support business operations and analytics.

Collaboration with Data Architect:
- Work closely with the Data Architect to establish the overall data architecture strategy and framework.
- Contribute to the definition of data model structures within a data mesh environment.

Data Quality and Governance:
- Ensure data quality and integrity in the data models by implementing best practices in data governance.
- Assist in the establishment of data management policies and standards.
- Conduct regular data audits and reviews to ensure data accuracy and consistency across systems.

Tooling:
- Data Modelling Tools: ERwin, IBM InfoSphere Data Architect, Oracle Data Modeler, Microsoft Visio, or similar tools.
- Databases: SQL, Oracle, MySQL, MS SQL Server, PostgreSQL, Neo4j (graph).
- Data Warehousing Technologies: Snowflake, Teradata, or similar.
- ETL Tools: Informatica, Talend, Apache NiFi, Microsoft SSIS, or similar.
- Big Data Technologies: Hadoop, Spark (optional but preferred).
- Cloud Technologies: Experience with data modelling on cloud platforms, e.g., Microsoft Azure (Synapse, Data Factory).

Posted 1 month ago

Apply

9.0 - 14.0 years

10 - 20 Lacs

Mumbai, Bengaluru

Work from Office

Greetings from Future Focus Infotech!!! We have multiple opportunities.

Data Architect
Exp: 9+ yrs
Location: Mumbai / Bangalore
Job Type: This is a permanent position with Future Focus Infotech Pvt Ltd, and you will be deputed with our client. A small glimpse of Future Focus Infotech Pvt Ltd: www.focusinfotech.com

If you are interested in the above opportunity, send your updated CV and the below information to reema.b@focusinfotech.com. Kindly mention the below details:
Total Years of Experience:
Current CTC:
Expected CTC:
Notice Period:
Current location:
Available for interview on weekdays:
Pan Card:

Thanks & Regards,
Reema
reema.b@focusinfotech.com
8925798887

Posted 1 month ago

Apply

16.0 - 21.0 years

18 - 22 Lacs

Gurugram

Work from Office

About the Role: OSTTRA India

The Role: Enterprise Architect - Integration

The Team: The OSTTRA Technology team is composed of Capital Markets Technology professionals who build, support and protect the applications that operate our network. The technology landscape includes high-performance, high-volume applications as well as compute-intensive applications, leveraging contemporary microservices and cloud-based architectures.

The Impact: Together, we build, support, protect and manage high-performance, resilient platforms that process more than 100 million messages a day. Our services are vital to automated trade processing around the globe, managing peak volumes and working with our customers and regulators to ensure the efficient settlement of trades and effective operation of global capital markets.

What's in it for you: The current objective is to identify individuals with 16+ years of experience who have high expertise, to join an existing team of experts who are spread across the world. This is your opportunity to start at the beginning and get the advantages of rapid early growth. This role is based in Gurgaon and is expected to work with different teams and colleagues across the globe. This is an excellent opportunity to be part of a team based out of Gurgaon and to work with colleagues across multiple regions globally.

Responsibilities:
- The role shall be responsible for establishing, maintaining, socialising and realising the target state integration strategy for the FX & Securities post trade businesses of OSTTRA. This shall encompass the post trade lifecycle of our businesses, including connectivity with clients, the markets ecosystem and OSTTRA's post trade family of networks, platforms and products.
- The role shall partner with product architects, product managers, delivery heads and teams on refactoring deliveries towards the target state. They shall be responsible for the efficiency, optimisation, oversight and troubleshooting of current-day integration solutions, platforms and deliveries as well, in addition to the target state focus.
- The role shall be expected to produce and maintain an integration architecture blueprint. This shall cover the current state and propose a rationalised view of the target state of end-to-end integration flows and patterns.
- The role shall also provide for and enable the needed technology platforms/tools and engineering methods to realise the strategy.
- The role shall enable standardisation of protocols/formats (at least within the OSTTRA world) and tools, and reduce duplication and non-differentiated heavy lift in systems.
- The role shall enable the documentation of flows and the capture of standard message models. The integration strategy shall also include a transformation strategy, which is vital in a multi-lateral/party/system post trade world.
- The role shall partner with other architects and strategies/programmes and enable the demands of UI, application, and data strategies.

What We're Looking For:
- Rich domain experience of the financial services industry, preferably with financial markets, pre/post trade life cycles and large-scale buy/sell/brokerage organisations.
- Experience of leading integration strategies and delivering the integration design and architecture for complex programmes and financial enterprises, catering to key variances of latency and throughput.
- Experience with API management platforms (like AWS API Gateway, Apigee, Kong, MuleSoft Anypoint) and key management concepts (API lifecycle management, versioning strategies, developer portals, rate limiting, policy enforcement).
- Adept with integration and transformation methods, technologies and tools.
- Experience of domain modelling for messages, events, streams and APIs.
- Rich experience of architectural patterns like event-driven architectures, microservices, event streaming, message processing/orchestration, CQRS, event sourcing, etc.
- Experience of protocols or integration technologies like FIX, SWIFT, MQ, FTP, API, etc., including knowledge of authentication patterns (OAuth, mTLS, JWT, API keys), authorization mechanisms, data encryption (in transit and at rest), secrets management, and security best practices.
- Experience of messaging formats and paradigms like XSD, XML, XSLT, JSON, Protobuf, REST, gRPC, GraphQL, etc.
- Experience of technology like Kafka or AWS Kinesis, Spark streams, Kubernetes/EKS, AWS EMR.
- Experience of languages like Java and Python, and message orchestration frameworks like Apache Camel, Apache NiFi, AWS Step Functions, etc.
- Experience in designing and implementing traceability/observability strategies for integration systems, and familiarity with relevant framework tooling.
- Experience of engineering methods like CI/CD, build/deploy automation, infra as code, and integration testing methods and tools.
- Should have the appetite to review and code for complex problems, and should find interest and energy in design discussions and reviews.
- Experience and strong understanding of multicloud integration patterns.

The Location: Gurgaon, India

About OSTTRA: Candidates should note that OSTTRA is an independent firm, jointly owned by S&P Global and CME Group. As part of the joint venture, S&P Global provides recruitment services to OSTTRA; however, successful candidates will be interviewed and directly employed by OSTTRA, joining our global team of more than 1,200 post trade experts. OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset. OSTTRA is a joint venture, owned 50/50 by S&P Global and CME Group. Joining the OSTTRA team is a unique opportunity to help build a bold new business with an outstanding heritage in financial technology, playing a central role in supporting global financial markets. Learn more at www.osttra.com.

What's In It For You - Benefits: We take care of you, so you can take care of business. We care about our people; that's why we provide everything you and your career need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

Posted 1 month ago

Apply

5.0 - 9.0 years

15 - 19 Lacs

Bengaluru

Work from Office

Project description: During the 2008 financial crisis, many big banks failed or faced issues due to liquidity problems. Lack of liquidity can kill any financial institution overnight. That's why it's so critical to constantly monitor liquidity risks and properly maintain collateral. We are looking for a number of talented developers who would like to join our team in Pune, which is building a liquidity risk and collateral management platform for one of the biggest investment banks in the world. The platform is a set of front-end tools and back-end engines. Our platform helps the bank to increase efficiency and scalability, reduce operational risk and eliminate the majority of manual interventions in processing margin calls.

Responsibilities: The candidate will work on the development of new functionality for the Liquidity Risk platform, working closely with other teams across the globe.

Skills

Must have:
- Big Data experience (6 years+)
- Java/Python, J2EE, Spark, Hive
- SQL databases
- UNIX shell
- Strong experience in Apache Hadoop, Spark, Hive, Impala, Yarn, Talend, Hue
- Big Data reporting, querying and analysis

Nice to have:
- Spark calculators based on business logic/rules
- Basic performance tuning and troubleshooting knowledge
- Experience with all aspects of the SDLC
- Experience with complex deployment infrastructures
- Knowledge in software architecture, design and testing
- Data flow automation (Apache NiFi, Airflow, etc.)
- Understanding of the difference between OOP and functional design approaches
- Understanding of event-driven architecture
- Spring, Maven, Git, uDeploy

Other Languages: English: B2 Upper Intermediate
Seniority: Senior

Posted 1 month ago

Apply

7.0 - 12.0 years

11 - 15 Lacs

Gurugram

Work from Office

Project description: We are looking for an experienced Data Engineer to contribute to the design, development, and maintenance of our database systems. This role will work closely with our software development and IT teams to ensure the effective implementation and management of database solutions that align with the client's business objectives.

Responsibilities: The successful candidate would be responsible for managing technology in projects and providing technical guidance/solutions for work completion:
(1.) To be responsible for providing technical guidance/solutions
(2.) To ensure process compliance in the assigned module and participate in technical discussions/reviews
(3.) To prepare and submit status reports for minimizing exposure and risks on the project or closure of escalations
(4.) Being self-organized, focused on delivering on-time, quality software

Skills

Must have:
- At least 7 years of experience in development on data-specific projects
- Working knowledge of the Kafka streaming-data framework (kSQL, MirrorMaker, etc.)
- Strong programming skills in at least one of these programming languages: Groovy/Java
- Good knowledge of data structures, ETL design, and storage
- Must have worked in streaming-data environments and pipelines
- Experience in near-real-time/streaming data pipeline development using Apache Spark, StreamSets, Apache NiFi or similar frameworks

Nice to have: N/A

Other Languages: English: B2 Upper Intermediate
Seniority: Senior
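For illustration (not from the posting), a minimal near-real-time consumer sketch using the kafka-python client; the broker, topic, and group names are hypothetical:

```python
# Illustrative only: consume JSON records from a Kafka topic and apply a
# placeholder enrichment step. All names are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "trades",                                   # hypothetical topic
    bootstrap_servers="broker:9092",            # hypothetical broker
    group_id="trade-enricher",                  # hypothetical consumer group
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    record = message.value
    # Placeholder enrichment before writing downstream.
    record["processed"] = True
    print(record)
```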

Posted 1 month ago

Apply

5.0 - 8.0 years

10 - 14 Lacs

Chennai

Work from Office

Role Purpose: The purpose of this role is to provide solutions and bridge the gap between technology and business know-how to deliver any client solution.

Please find the below JD:
Exp: 5-8 years
- Good understanding of DWH
- GCP (Google Cloud Platform) BigQuery knowledge
- Knowledge of GCP Storage, GCP Workflows and Functions
- Python
- CDC extractor tools like Qlik/NiFi
- BI knowledge (like Power BI or Looker)

Skill upgradation and competency building:
- Clear Wipro exams and internal certifications from time to time to upgrade skills
- Attend trainings and seminars to sharpen knowledge in the functional/technical domain
- Write papers, articles and case studies and publish them on the intranet

Deliver (performance parameters and measures):
1. Contribution to customer projects: quality, SLA, ETA, no. of tickets resolved, problems solved, number of change requests implemented, zero customer escalations, CSAT
2. Automation: process optimization, reduction in process steps, reduction in no. of tickets raised
3. Skill upgradation: number of trainings and certifications completed, number of papers and articles written in a quarter

Mandatory Skills: Cloud-PaaS-GCP-Google Cloud Platform
Experience: 5-8 Years
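As an editorial illustration of the BigQuery work this JD lists, a minimal google-cloud-bigquery sketch; the project, dataset, and table names are hypothetical:

```python
# Illustrative only: run a small aggregation query against BigQuery.
# Project/dataset/table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project id

sql = """
    SELECT DATE(event_ts) AS day, COUNT(*) AS events
    FROM `my-project.analytics.events`          -- hypothetical table
    GROUP BY day
    ORDER BY day DESC
    LIMIT 7
"""

# query() submits the job; result() blocks until rows are available.
for row in client.query(sql).result():
    print(row.day, row.events)
```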

Posted 1 month ago

Apply

3.0 - 8.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Project description: We are looking for a KDB+ Test Engineer to join our expanding data reporting and analytics team, which is building solutions towards a strategic KDB+ platform. Daily duties will involve working with data from various business areas, understanding requirements and developing test packs around the main data flows through the system. The system has analytics and solutions to aid our regulation projects as well as pricing/trading algorithms and, ultimately, P&L. Ideally you have experience with q/KDB+ in a similar environment and have front office knowledge of the FX or Rates business, or of quantitative finance. You should be comfortable building test plans and test suites and manipulating large data sets in a high-frequency, low-latency environment.

Responsibilities: This role is an exciting opportunity to be part of an agile multi-asset ecommerce trading system development team that is distributed between Singapore and London. The successful candidate is expected to:
- Take responsibility for system component design and build
- Ensure developed code is fully tested through automated unit tests
- Build relationships with development and support teams
- Adhere to the Bank's testing practices
- Manage application support handover as part of the QA process
- Maintain and enhance test coverage after project go-live
- Build relationships with fellow QA/developers inside/outside FM, infrastructure units, etc.
- Advocate delivery excellence, ensuring application release quality

Skills

Must have:
- 3+ years of hands-on experience in q/KDB+
- Experience working in a Linux/Unix environment
- Experience of working closely with the development team and product owners on requirements
- Experience/ability to understand FX products and services
- Excellent oral and written communication skills; ability to interact with business representatives
- Experience in test plans, performance tests, and test automation

Nice to have:
- Understanding of ecommerce product workflow across front office and middle office tiers
- KDB/q, Python, Java
- KX Control suite
- Pipelines: Apache NiFi / Azure DevOps

Other Languages: English: C2 Proficient
Seniority: Regular
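For illustration only, a generic pytest-style data check of the kind a test pack might contain, shown in Python (listed as nice-to-have here) rather than q; the loader function and sample rows are hypothetical:

```python
# Illustrative only: simple data-quality assertions over an extract from the
# platform under test. load_trades() is a hypothetical stand-in.
def load_trades():
    # Placeholder for an extract pulled from the system under test.
    return [
        {"sym": "EURUSD", "price": 1.0842, "qty": 1_000_000},
        {"sym": "GBPUSD", "price": 1.2711, "qty": 500_000},
    ]

def test_prices_are_positive():
    assert all(row["price"] > 0 for row in load_trades())

def test_no_missing_symbols():
    assert all(row["sym"] for row in load_trades())
```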

Posted 1 month ago

Apply

5.0 - 6.0 years

6 - 8 Lacs

Hyderabad

Work from Office

- Architect and optimize distributed data processing pipelines leveraging PySpark for high-throughput, low-latency workloads. - Utilize the Apache big data stack (Hadoop, Hive, HDFS) to orchestrate ingestion, transformation, and governance of massive datasets. - Engineer fault-tolerant, production-grade ETL frameworks ensuring seamless scalability and system resilience. - Interface cross-functionally with Data Scientists and domain experts to translate analytical needs into performant data solutions. - Enforce rigorous data quality controls and lineage mechanisms to uphold auditability and regulatory compliance. - Contribute to core architectural design, implement clean and modular Python/Java code, and drive performance benchmarking at scale. Required Skills : - 5-7 years of experience. - Strong hands-on experience with PySpark for distributed data processing. - Deep understanding of Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.) - Solid grasp of data warehousing, ETL principles, and data modeling. - Experience working with large-scale datasets and performance optimization. - Familiarity with SQL and NoSQL databases. - Proficiency in Python and basic to intermediate knowledge of Java. - Experience in using version control tools like Git and CI/CD pipelines. Nice-to-Have Skills : - Working experience with Apache NiFi for data flow orchestration. - Experience in building real-time streaming data pipelines. - Knowledge of cloud platforms like AWS, Azure, or GCP. - Familiarity with containerization tools like Docker or orchestration tools like Kubernetes.
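As context for the data-quality emphasis above, a minimal PySpark quality-gate sketch (editorial illustration, not from the posting); the input path, key column, and threshold are hypothetical:

```python
# Illustrative only: fail the pipeline when the null rate on a key column
# exceeds a threshold, then write the cleaned output. Names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("quality-gate").getOrCreate()

df = spark.read.parquet("/data/raw/customers")  # hypothetical input

total = df.count()
null_ids = df.where(F.col("customer_id").isNull()).count()

# 1% is an illustrative threshold, not a recommendation.
if total and null_ids / total > 0.01:
    raise ValueError(f"customer_id null rate too high: {null_ids}/{total}")

df.dropna(subset=["customer_id"]).write.mode("overwrite").parquet(
    "/data/clean/customers"
)
```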

Posted 1 month ago

Apply

7.0 - 9.0 years

5 - 5 Lacs

Thiruvananthapuram

Work from Office

Azure Infrastructure Consultant - Cloud & Data Integration

Experience: 8+ Years
Employment Type: Full-Time
Industry: Information Technology / Cloud Infrastructure / Data Engineering

Job Summary: We are looking for a seasoned Azure Infrastructure Consultant with a strong foundation in cloud infrastructure, data integration, and real-time data processing. The ideal candidate will have hands-on experience across Azure and AWS platforms, with deep knowledge of Apache NiFi, Kafka, AWS Glue, and PySpark. This role involves designing and implementing secure, scalable, and high-performance cloud infrastructure and data pipelines.

Key Responsibilities:
- Design and implement Azure-based infrastructure solutions, ensuring scalability, security, and performance.
- Lead hybrid cloud integration projects involving Azure and AWS services.
- Develop and manage ETL/ELT pipelines using AWS Glue, Apache NiFi, and PySpark.
- Architect and support real-time data streaming solutions using Apache Kafka.
- Collaborate with cross-functional teams to gather requirements and deliver infrastructure and data solutions.
- Implement infrastructure automation using tools like Terraform, ARM templates, or Bicep.
- Monitor and optimize cloud infrastructure and data workflows for cost and performance.
- Ensure compliance with security and governance standards across cloud environments.

Required Skills & Qualifications:
- 8+ years of experience in IT infrastructure and cloud consulting.
- Strong hands-on experience with: Azure IaaS/PaaS (VMs, VNets, Azure AD, App Services, etc.); AWS services including Glue, S3, Lambda; Apache NiFi for data ingestion and flow management; Apache Kafka for real-time data streaming; PySpark for distributed data processing.
- Proficiency in scripting (PowerShell, Python) and Infrastructure as Code (IaC).
- Solid understanding of networking, security, and identity management in cloud environments.
- Strong communication and client-facing skills.

Preferred Qualifications:
- Azure or AWS certifications (e.g., Azure Solutions Architect, AWS Data Analytics Specialty).
- Experience with CI/CD pipelines and DevOps practices.
- Familiarity with containerization (Docker, Kubernetes) and orchestration.
- Exposure to data governance tools and frameworks.

Required Skills: Azure, Microsoft Azure, Azure PaaS, AWS Glue
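As an editorial illustration of the AWS Glue/PySpark pipeline work listed above, the skeleton of a Glue PySpark job; the catalog database, table, and S3 path are hypothetical:

```python
# Illustrative only: read a table from the Glue Data Catalog and write it to
# S3 as Parquet. Database/table/bucket names are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table).
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales", table_name="orders"
)

# Stage the frame as Parquet in a hypothetical S3 location.
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/curated/orders/"},
    format="parquet",
)

job.commit()
```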

Posted 1 month ago

Apply

5.0 - 6.0 years

4 - 8 Lacs

Bengaluru

Work from Office

- Architect and optimize distributed data processing pipelines leveraging PySpark for high-throughput, low-latency workloads. - Utilize the Apache big data stack (Hadoop, Hive, HDFS) to orchestrate ingestion, transformation, and governance of massive datasets. - Engineer fault-tolerant, production-grade ETL frameworks ensuring seamless scalability and system resilience. - Interface cross-functionally with Data Scientists and domain experts to translate analytical needs into performant data solutions. - Enforce rigorous data quality controls and lineage mechanisms to uphold auditability and regulatory compliance. - Contribute to core architectural design, implement clean and modular Python/Java code, and drive performance benchmarking at scale. Required Skills : - 5-7 years of experience. - Strong hands-on experience with PySpark for distributed data processing. - Deep understanding of Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.) - Solid grasp of data warehousing, ETL principles, and data modeling. - Experience working with large-scale datasets and performance optimization. - Familiarity with SQL and NoSQL databases. - Proficiency in Python and basic to intermediate knowledge of Java. - Experience in using version control tools like Git and CI/CD pipelines. Nice-to-Have Skills : - Working experience with Apache NiFi for data flow orchestration. - Experience in building real-time streaming data pipelines. - Knowledge of cloud platforms like AWS, Azure, or GCP. - Familiarity with containerization tools like Docker or orchestration tools like Kubernetes.

Posted 1 month ago

Apply

2.0 - 7.0 years

4 - 8 Lacs

Ahmedabad

Work from Office

Travel Designer Group: Founded in 1999, Travel Designer Group has consistently achieved remarkable milestones in a relatively short span of time. While we embody the agility, growth mindset, and entrepreneurial energy typical of start-ups, we bring with us over 24 years of deep-rooted expertise in the travel trade industry. As a leading global travel wholesaler, we serve as a vital bridge connecting hotels, travel service providers, and an expansive network of travel agents worldwide. Our core strength lies in sourcing, curating, and distributing high-quality travel inventory through our award-winning B2B reservation platform, RezLive.com. This enables travel trade professionals to access real-time availability and competitive pricing to meet the diverse needs of travelers globally. Our expanding portfolio includes innovative products such as:
* Rez.Tez
* Affiliate.Travel
* Designer Voyages
* Designer Indya
* RezRewards
* RezVault
With a presence in 32+ countries and a growing team of 300+ professionals, we continue to redefine travel distribution through technology, innovation, and a partner-first approach.
Website: https://www.traveldesignergroup.com/

Profile: ETL Developer

ETL Tools (any 1): Talend / Apache NiFi / Pentaho / AWS Glue / Azure Data Factory / Google Dataflow

Workflow & Orchestration (any 1, good to have, not mandatory): Apache Airflow / dbt (Data Build Tool) / Luigi / Dagster / Prefect / Control-M

Programming & Scripting:
- SQL (advanced)
- Python (mandatory)
- Bash/Shell (mandatory)
- Java or Scala (optional, for Spark)

Databases & Data Warehousing:
- MySQL / PostgreSQL / SQL Server / Oracle (mandatory)
- Snowflake (good to have)
- Amazon Redshift (good to have)
- Google BigQuery (good to have)
- Azure Synapse Analytics (good to have)
- MongoDB / Cassandra (good to have)

Cloud & Data Storage (any 1-2):
- AWS S3 / Azure Blob Storage / Google Cloud Storage (mandatory)
- Kafka / Kinesis / Pub/Sub

Interested candidates may also share their resume at shivani.p@rezlive.com
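For illustration (not from the posting), a tiny extract-and-stage step using the required stack here (Python, SQL, cloud storage); the connection string, query, bucket, and key are hypothetical:

```python
# Illustrative only: extract rows from Postgres and stage them as CSV in S3.
# DSN, table, bucket, and key are placeholders.
import csv
import io

import boto3
import psycopg2

conn = psycopg2.connect("dbname=shop user=etl host=db.example.com")  # hypothetical DSN
with conn, conn.cursor() as cur:
    cur.execute("SELECT id, name, created_at FROM customers")  # hypothetical table
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["id", "name", "created_at"])
    writer.writerows(cur.fetchall())

# Stage the extract for a downstream load step (bucket/key are hypothetical).
boto3.client("s3").put_object(
    Bucket="my-etl-bucket", Key="staging/customers.csv", Body=buf.getvalue()
)
```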

Posted 1 month ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate, We are hiring a Data Engineer to build and maintain data pipelines for our analytics platform. Perfect for engineers focused on data processing and scalability. Key Responsibilities: Design and implement ETL processes Manage data warehouses and ensure data quality Collaborate with data scientists to provide necessary data Optimize data workflows for performance Required Skills & Qualifications: Proficiency in SQL and Python Experience with data pipeline tools like Apache Airflow Familiarity with big data technologies (Spark, Hadoop) Bonus: Knowledge of cloud data services (AWS Redshift, Google BigQuery) Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies

Posted 1 month ago

Apply

8.0 - 13.0 years

18 - 27 Lacs

Bengaluru

Work from Office

About Persistent We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4900 new employees in the past year, bringing our total employee count to over 23,500+ people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details please login to www.persistent.com About The Position We are looking for a Data Architect with creativity and results-oriented critical thinking to meet complex challenges and develop new strategies for acquiring, analyzing, modeling and storing data. In this role you will guide the company into the future and utilize the latest technology and information management methodologies to meet our requirements for effective logical data modeling, metadata management and database warehouse domains. You will be working with experts in a variety of industries, including computer science and software development, as well as department heads and senior executives to integrate new technologies and refine system performance. We reward dedicated performance with exceptional pay and benefits, as well as tuition reimbursement and career growth opportunities. 
What You'll Do:
- Define data retention policies
- Monitor performance and advise any necessary infrastructure changes
- Mentor junior engineers and work with other architects to deliver best-in-class solutions
- Implement ETL/ELT processes and orchestration of data flows
- Recommend and drive adoption of newer tools and techniques from the big data ecosystem

Expertise You'll Bring:
- 10+ years in industry, building and managing big data systems
- Building, monitoring, and optimizing reliable and cost-efficient pipelines for SaaS is a must
- Building stream-processing systems, using solutions such as Storm or Spark Streaming
- Dealing and integrating with data storage systems like SQL and NoSQL databases, file systems and object storage like S3
- Reporting solutions like Pentaho, Power BI, Looker, including customizations
- Developing high-concurrency, high-performance applications that are database-intensive and have interactive, browser-based clients
- Working with SaaS-based data management products will be an added advantage
- Proficiency and expertise in Cloudera / Hortonworks, Spark, HDF and NiFi
- RDBMS, NoSQL like Vertica, Redshift; data modelling with physical design and SQL performance optimization
- Messaging systems: JMS, ActiveMQ, RabbitMQ, Kafka
- Big Data technology like Hadoop, Spark, NoSQL-based data-warehousing solutions
- Data warehousing, reporting including customization, Hadoop, Spark, Kafka, core Java, Spring/IoC, design patterns
- Big Data querying tools, such as Pig, Hive, and Impala
- Open-source technologies and databases (SQL & NoSQL)
- Proficient understanding of distributed computing principles
- Ability to solve any ongoing issues with operating the cluster
- Scale data pipelines using open-source components and AWS services
- Cloud (AWS) provisioning, capacity planning and performance analysis at various levels
- Web-based SOA architecture implementation with design pattern experience will be an added advantage

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment:
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above.

Posted 1 month ago

Apply

8.0 - 13.0 years

18 - 30 Lacs

Pune

Work from Office

About Persistent We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4900 new employees in the past year, bringing our total employee count to over 23,500+ people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details please login to www.persistent.com About The Position We are looking for a Data Architect with creativity and results-oriented critical thinking to meet complex challenges and develop new strategies for acquiring, analyzing, modeling and storing data. In this role you will guide the company into the future and utilize the latest technology and information management methodologies to meet our requirements for effective logical data modeling, metadata management and database warehouse domains. You will be working with experts in a variety of industries, including computer science and software development, as well as department heads and senior executives to integrate new technologies and refine system performance. We reward dedicated performance with exceptional pay and benefits, as well as tuition reimbursement and career growth opportunities. 
What You'll Do:
- Define data retention policies
- Monitor performance and advise any necessary infrastructure changes
- Mentor junior engineers and work with other architects to deliver best-in-class solutions
- Implement ETL/ELT processes and orchestration of data flows
- Recommend and drive adoption of newer tools and techniques from the big data ecosystem

Expertise You'll Bring:
- 10+ years in industry, building and managing big data systems
- Building, monitoring, and optimizing reliable and cost-efficient pipelines for SaaS is a must
- Building stream-processing systems, using solutions such as Storm or Spark Streaming
- Dealing and integrating with data storage systems like SQL and NoSQL databases, file systems and object storage like S3
- Reporting solutions like Pentaho, Power BI, Looker, including customizations
- Developing high-concurrency, high-performance applications that are database-intensive and have interactive, browser-based clients
- Working with SaaS-based data management products will be an added advantage
- Proficiency and expertise in Cloudera / Hortonworks, Spark, HDF and NiFi
- RDBMS, NoSQL like Vertica, Redshift; data modelling with physical design and SQL performance optimization
- Messaging systems: JMS, ActiveMQ, RabbitMQ, Kafka
- Big Data technology like Hadoop, Spark, NoSQL-based data-warehousing solutions
- Data warehousing, reporting including customization, Hadoop, Spark, Kafka, core Java, Spring/IoC, design patterns
- Big Data querying tools, such as Pig, Hive, and Impala
- Open-source technologies and databases (SQL & NoSQL)
- Proficient understanding of distributed computing principles
- Ability to solve any ongoing issues with operating the cluster
- Scale data pipelines using open-source components and AWS services
- Cloud (AWS) provisioning, capacity planning and performance analysis at various levels
- Web-based SOA architecture implementation with design pattern experience will be an added advantage

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment:
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above.

Posted 1 month ago

Apply

8.0 - 13.0 years

18 - 25 Lacs

Hyderabad

Work from Office

About Persistent We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4900 new employees in the past year, bringing our total employee count to over 23,500+ people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details please login to www.persistent.com About The Position We are looking for a Data Architect with creativity and results-oriented critical thinking to meet complex challenges and develop new strategies for acquiring, analyzing, modeling and storing data. In this role you will guide the company into the future and utilize the latest technology and information management methodologies to meet our requirements for effective logical data modeling, metadata management and database warehouse domains. You will be working with experts in a variety of industries, including computer science and software development, as well as department heads and senior executives to integrate new technologies and refine system performance. We reward dedicated performance with exceptional pay and benefits, as well as tuition reimbursement and career growth opportunities. 
What You'll Do
Define data retention policies
Monitor performance and advise on any necessary infrastructure changes
Mentor junior engineers and work with other architects to deliver best-in-class solutions
Implement ETL/ELT processes and orchestration of data flows (see the orchestration sketch after this listing)
Recommend and drive adoption of newer tools and techniques from the big data ecosystem

Expertise You'll Bring
10+ years in industry building and managing big data systems
Building, monitoring, and optimizing reliable and cost-efficient pipelines for SaaS (a must)
Building stream-processing systems using solutions such as Storm or Spark Streaming
Integrating with data storage systems such as SQL and NoSQL databases, file systems, and object storage like S3
Reporting solutions such as Pentaho, Power BI, and Looker, including customizations
Developing high-concurrency, high-performance applications that are database-intensive and have interactive, browser-based clients
Working with SaaS-based data management products (an added advantage)
Proficiency and expertise in Cloudera / Hortonworks, Spark, HDF, and NiFi
RDBMS and NoSQL stores such as Vertica and Redshift; data modelling with physical design and SQL performance optimization
Messaging systems: JMS, ActiveMQ, RabbitMQ, Kafka
Big data technologies such as Hadoop, Spark, and NoSQL-based data-warehousing solutions
Data warehousing and reporting, including customization; Hadoop, Spark, Kafka, core Java, Spring/IoC, design patterns
Big data querying tools such as Pig, Hive, and Impala
Open-source technologies and databases (SQL & NoSQL)
Proficient understanding of distributed computing principles
Ability to resolve ongoing issues with operating the cluster
Scaling data pipelines using open-source components and AWS services
Cloud (AWS) provisioning, capacity planning, and performance analysis at various levels
Web-based SOA architecture implementation with design-pattern experience (an added advantage)

Benefits
Competitive salary and benefits package
Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
Opportunity to work with cutting-edge technologies
Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
Annual health check-ups
Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment
We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above.
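For readers unfamiliar with the orchestration work the listing mentions: the sketch below uses Apache Airflow purely as an assumed example, since this posting names no specific orchestrator. The DAG and task names are hypothetical.

```python
# Hedged sketch of a daily three-step ETL flow in Apache Airflow
# (an assumed orchestrator; the posting above does not name one).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw records from a source system.
    print("extracting...")

def transform():
    # Placeholder: clean and reshape the extracted records.
    print("transforming...")

def load():
    # Placeholder: write the transformed records to the warehouse.
    print("loading...")

with DAG(
    dag_id="example_etl",              # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Dependencies: extract runs first, then transform, then load.
    extract_task >> transform_task >> load_task
```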

Posted 1 month ago

Apply

4.0 - 7.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Date: 18 Jun 2025
Location: Bangalore, KA, IN
Company: Alstom

At Alstom, we understand transport networks and what moves people. From high-speed trains, metros, monorails, and trams to turnkey systems, services, infrastructure, signalling, and digital mobility, we offer our diverse customers the broadest portfolio in the industry. Every day, 80,000 colleagues lead the way to greener and smarter mobility worldwide, connecting cities as we reduce carbon and replace cars.

Your future role
Take on a new challenge and apply your expertise in data solutions to a cutting-edge field. You'll work alongside innovative and collaborative teammates. You'll play a pivotal role in defining, developing, and sustaining advanced data solutions that empower our industrial programs. Day-to-day, you'll work closely with teams across the business (such as IT, engineering, and operations), design scalable data models, ensure data quality, and much more. You'll specifically take care of building multi-tenant data collectors and processing units, as well as creating customizable analytical dashboards, and you'll also evaluate opportunities from emerging technologies.

We'll look to you for:
Designing technical solutions for production-grade and cyber-secure data systems
Building multi-tenant data storage and streaming solutions for batch and near-real-time flows (see the sketch after this listing)
Creating scalable data models, including SQL and NoSQL modeling
Enhancing data quality and applying robust data management and security practices
Developing customizable analytical dashboards
Applying strong testing and quality assurance practices

All about you
We value passion and attitude over experience. That's why we don't expect you to have every single skill. Instead, we've listed some that we think will help you succeed and grow in this role:
5 to 10 years of experience in IT, digital companies, software development, or startups
Extensive experience with data processing and software development in Python or Java/Scala environments
Proficiency in developing solutions with Apache Spark, Apache Kafka, and/or Apache NiFi for production
Expertise in data modeling and SQL database configuration (e.g., Postgres, MariaDB, MySQL)
Knowledge of DevOps practices, including Docker
Experience with Git and release management
Familiarity with cloud platforms like Microsoft Azure, AWS, or GCP (desirable)
Understanding of network and security protocols such as SSL, certificates, IPsec, Active Directory, and LDAP (desirable)
Knowledge of machine learning frameworks like scikit-learn, R, or TensorFlow (desirable)
Good understanding of the Apache open-source ecosystem (desirable)
Fluent English; French is a plus

Things you'll enjoy
Join us on a life-long transformative journey: the rail industry is here to stay, so you can grow and develop new skills and experiences throughout your career.
You'll also:
Enjoy stability, challenges, and a long-term career free from boring daily routines
Work with new security standards for rail signalling
Collaborate with transverse teams and helpful colleagues
Contribute to innovative projects
Utilise our flexible and inclusive working environment
Steer your career in whatever direction you choose, across functions and countries
Benefit from our investment in your development through award-winning learning
Progress towards senior data leadership roles
Benefit from a fair and dynamic reward package that recognises your performance and potential, plus comprehensive and competitive social coverage (life, medical, pension)

You don't need to be a train enthusiast to thrive with us. We guarantee that when you step onto one of our trains with your friends or family, you'll be proud. If you're up for the challenge, we'd love to hear from you!

Important to note
As a global business, we're an equal-opportunity employer that celebrates diversity across the 63 countries we operate in. We're committed to creating an inclusive workplace for everyone.
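To illustrate the multi-tenant streaming requirement above: a minimal sketch, assuming the kafka-python client and a one-topic-per-tenant convention. The broker address, topic prefix, tenant IDs, and payloads are placeholders, not details of Alstom's design.

```python
# Hedged sketch: route each tenant's events to its own Kafka topic so
# retention, ACLs, and throughput can be tuned per tenant.
# All names here are illustrative placeholders.
import json
from kafka import KafkaProducer  # kafka-python, assumed for the example

producer = KafkaProducer(
    bootstrap_servers="broker:9092",  # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish(tenant_id: str, event: dict) -> None:
    # Hypothetical naming convention: one topic per tenant.
    topic = f"telemetry.{tenant_id}"
    producer.send(topic, value=event)

publish("tenant-a", {"sensor": "axle-temp", "value": 41.7})
publish("tenant-b", {"sensor": "door-cycle", "value": 1})
producer.flush()  # block until queued messages are delivered
```

A per-tenant topic is only one isolation option; a shared topic keyed by tenant ID trades that isolation for operational simplicity, and the right choice depends on tenant count and security requirements.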

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
