
464 NiFi Jobs - Page 16

Set up a job alert
JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

5.0 - 10.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Title: Data Architect / Delivery Lead Job Summary: The Data Architect / Delivery Lead will provide technical expertise in the analysis, design, development, rollout, and maintenance of enterprise data models and solutions, utilizing both traditional and emerging technologies such as cloud, Hadoop, NoSQL, and real-time data processing. In addition to technical expertise, the role requires leadership in driving cross-functional teams, ensuring seamless project delivery, and fostering innovation within the team. The candidate must excel in managing data architecture projects while mentoring teams in data engineering practices, including PySpark , automation, and big data integration. Essential Duties Data Architecture Design and Development: Design and develop conceptual, logical, and physical data models for enterprise-scale data lakes and data warehouse solutions, ensuring optimal performance and scalability. Implement real-time and batch data integration solutions using modern tools and technologies such as PySpark, Hadoop, and cloud-based solutions (e.g., AWS, Azure, Google Cloud). Utilize PySpark for distributed data processing, transforming and analyzing large datasets for improved data-driven decision-making. Understand and apply modern data architecture philosophies such as Data Vault, Dimensional Modeling, and Data Lake design for building scalable and sustainable data solutions. Leadership & Delivery Management: Provide leadership in data architecture and engineering projects, ensuring the integration of modern technologies and best practices in data management and transformation. Act as a trusted advisor, collaborating with business users, technical staff, and project managers to define requirements and deliver high-quality data solutions. Lead and mentor a team of data engineers, ensuring the effective application of PySpark for data engineering tasks, and supporting continuous learning and improvement within the team. Manage end-to-end delivery of data projects, including defining timelines, managing resources, and ensuring timely, high-quality delivery while adhering to project methodologies (e.g., Agile, Scrum). Data Movement & Integration: Provide expertise in data integration processes, including batch and real-time data processing using tools such as PySpark, Informatica PowerCenter, SSIS, MuleSoft, and DataStage. Develop and optimize ETL/ELT pipelines, utilizing PySpark for efficient data processing and transformation at scale, particularly for big data environments (e.g., Hadoop ecosystems). Oversee data migration efforts, ensuring high-quality and consistent data delivery while managing data transformation and cleansing processes. Documentation & Communication: Create comprehensive functional and technical documentation, including data integration architecture documentation, data models, data dictionaries, and testing plans. Collaborate with business stakeholders and technical teams to ensure alignment and provide technical guidance on data-related decisions. Prepare and present technical content and architectural decisions to senior management, ensuring clear communication of complex data concepts. Skills and Experience: Data Engineering Skills: Extensive experience in PySpark for large-scale data processing, data transformation, and working with distributed systems. Proficient in modern data processing frameworks and technologies, including Hadoop, Spark, and Flink. 
Expertise in cloud-based data engineering technologies and platforms such as AWS Glue, Azure Data Factory, or Google Cloud Dataflow. Strong experience with data pipelines, ETL/ELT frameworks, and automation techniques using tools like Airflow, Apache NiFi, or dbt. Expertise in working with big data technologies and frameworks for both structured and unstructured data. Data Architecture and Modeling: 5-10 years of experience in enterprise data modeling, including hands-on experience with ERwin, ER/Studio, PowerDesigner, or similar tools. Strong knowledge of relational databases (e.g., Oracle, SQL Server, Teradata) and NoSQL technologies (e.g., MongoDB, Cassandra). In-depth understanding of data warehousing and data integration best practices, including dimensional modeling and working with OLTP systems and OLAP cubes. Experience with real-time data architectures and cloud-based data lakes, leveraging AWS, Azure, or Google Cloud platforms. Leadership & Delivery Skills: 3-5 years of management experience leading teams of data engineers and architects, ensuring alignment of team goals with organizational objectives. Strong leadership qualities such as innovation, critical thinking, communication, time management, and the ability to collaborate effectively across teams and stakeholders. Proven ability to act as a delivery lead for data projects, driving projects from concept to completion while managing resources, timelines, and deliverables. Ability to mentor and coach team members in both technical and professional growth, fostering a culture of knowledge sharing and continuous improvement. Other Essential Skills: Strong knowledge of SQL, PL/SQL, and proficiency in scripting for data engineering tasks. Ability to translate business requirements into technical solutions, ensuring that the data solutions support business strategies and objectives. Hands-on experience with metadata management, data governance, and master data management (MDM) principles. Familiarity with modern agile methodologies, such as Scrum or Kanban, to ensure iterative and successful project delivery. Preferred Skills & Experience: Cloud Technologies: Experience with cloud data platforms such as AWS Redshift, Google BigQuery, or Azure Synapse for building scalable data solutions. Leadership: Demonstrated ability to build and lead cross-functional teams, drive innovation, and solve complex data problems. Business Consulting: Consulting experience working with clients to deliver tailored data solutions, providing expert guidance on data architecture and data management practices. Data Profiling and Analysis: Hands-on experience with data profiling tools and techniques to assess and improve the quality of enterprise data. Real-Time Data Processing: Experience in real-time data integration and streaming technologies, such as Kafka and Kinesis.
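
For the PySpark pipeline work this role describes, a minimal batch ETL sketch is shown below. The paths, column names, and partitioning scheme are hypothetical placeholders, not details from the posting.

```python
# Minimal PySpark batch-ETL sketch: read raw data, apply basic transformations,
# and write a partitioned, curated table. Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Extract: raw orders landed as Parquet in an assumed data lake layout
raw = spark.read.parquet("s3://example-lake/raw/orders/")

# Transform: deduplicate, cleanse, and derive a business column
orders = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_status").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("net_amount", F.col("gross_amount") - F.col("discount_amount"))
)

# Load: write to a curated zone, partitioned by date for downstream queries
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://example-lake/curated/orders/"))

spark.stop()
```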

Posted 2 months ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: ETL Tester

About Us
Capco, a Wipro company, is a global technology and management consulting firm. It was awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, Capco supports 100+ clients in the banking, financial services, and energy sectors and is recognized for its deep transformation execution and delivery.

WHY JOIN CAPCO?
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry - projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

Job Title: ETL Tester | Location: Pune | Experience: 5+ years

Key Responsibilities:
- Extensive experience in validating ETL processes, ensuring accurate data extraction, transformation, and loading across multiple environments.
- Proficient in Java programming, with the ability to understand and write Java code when required.
- Advanced SQL skills for data validation, querying databases, and ensuring data consistency and integrity throughout the ETL process.
- Expertise in Unix commands to manage test environments, handle file systems, and execute system-level tasks.
- Proficient in creating shell scripts to automate testing processes, enhancing productivity and reducing manual intervention.
- Ensure that data transformations and loads are accurate, with strong attention to identifying and resolving discrepancies in the ETL process.
- Focus on automating repetitive tasks and optimizing testing workflows to increase overall testing efficiency.
- Write and execute automated test scripts using Java to ensure the quality and functionality of ETL solutions.
- Utilize Unix commands and shell scripting to automate repetitive tasks and manage system processes.
- Collaborate with cross-functional teams, including data engineers, developers, and business analysts, to ensure the ETL processes meet business requirements.
- Ensure that data transformations, integrations, and pipelines are robust, secure, and efficient.
- Troubleshoot data discrepancies and perform root cause analysis for failed data loads.
- Create comprehensive test cases, execute them, and document test results for all data flows.
- Actively participate in the continuous improvement of ETL testing processes and methodologies.
- Experience with version control systems (e.g., Git) and integrating testing into CI/CD pipelines.

Tools & Technologies (Good to Have):
- Experience with Hadoop ecosystem tools such as HDFS, MapReduce, Hive, and Spark for handling large-scale data processing and storage.
- Knowledge of NiFi for automating data flows, transforming data, and integrating different systems seamlessly.
- Experience with tools like Postman, SoapUI, or RestAssured to validate REST and SOAP APIs, ensuring correct data exchange and error handling.
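
The source-to-target reconciliation at the heart of ETL testing can be sketched as a small set of SQL checks. The example below is illustrative only: it uses in-memory SQLite as a stand-in for the real source and target databases, and the table and column names are assumptions.

```python
# Illustrative source-vs-target reconciliation for an ETL load.
# SQLite stands in for the real databases; table/column names are hypothetical.
import sqlite3

def fetch_one(conn, sql):
    return conn.execute(sql).fetchone()[0]

source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

# ... assume the ETL under test has populated source.orders and target.orders_dw ...
source.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
target.execute("CREATE TABLE orders_dw (order_id INTEGER, amount REAL)")

checks = {
    "row_count": ("SELECT COUNT(*) FROM orders",
                  "SELECT COUNT(*) FROM orders_dw"),
    "amount_sum": ("SELECT COALESCE(SUM(amount), 0) FROM orders",
                   "SELECT COALESCE(SUM(amount), 0) FROM orders_dw"),
}

for name, (src_sql, tgt_sql) in checks.items():
    src_val, tgt_val = fetch_one(source, src_sql), fetch_one(target, tgt_sql)
    status = "PASS" if src_val == tgt_val else "FAIL"
    print(f"{name}: source={src_val} target={tgt_val} -> {status}")
```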

Posted 2 months ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description What We Do At Goldman Sachs, we connect people, capital and ideas to help solve problems for our clients. We are a leading global financial services firm providing investment banking, securities and investment management services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals. In Private Wealth Management, we help our clients pursue their wealth management goals through careful advice & astute investment management. PWM Engineering plays a pivotal role in building the tools and applications our business needs to effectively manage and support our client’s diverse requirements. We support the entire user experience starting with onboarding through to trading, reporting, as well as providing clients access to their portfolios online via native iOS and Android apps. We also build and support numerous applications for our Business and Operations users to help them effectively manage risk and provide the best white-glove service possible to our clients. Controls Engineering is responsible for building the next generation firm-wide control plane for use in Asset & Wealth Management. The successful candidate will use their deep technical skills to build solutions to manage and implement risk management controls tailored to the diverse products and services offered to AWM clients. The role involves building out web applications that allow users to register, develop and administer controls on the platform. Your Impact Deliver and design new features for Control Management and Implementation Collaborate with cross-functional teams of Risk Managers and Control Managers to gather requirements and define technical solutions Design, develop and maintain complex software systems and applications Implement and maintain best practices for software development and engineering processes Develop and maintain software documentation, including design specifications, user guides and manuals Ensure the reliability, scalability, and performance of software systems Troubleshoot and debug complex software issues Collaborate with engineering leadership, developers, and operations through written and verbal presentations. Lead and participate in technical architecture reviews Mentor and coach junior engineers Basic Qualifications Bachelor's or Master's degree in Computer Science, Mathematics, or related field 3+ years of experience in software development and engineering Working knowledge of Java & React Solid understanding of software engineering principles, algorithms, and data structures Experience in developing large-scale, highly available, and distributed systems Experience in designing and implementing RESTful APIs and web services Strong problem-solving and analytical skills Strong communication and collaboration skills Experience with Agile software development methodologies Experience formulating clear and concise written and verbal descriptions of Software and Systems and tracking and managing delivery of the same Preferred Qualifications Experience with Big data/Apache Spark/Apache Nifi. Experience in the financial services industry. Goldman Sachs Engineering Culture At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. 
We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We’re committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html © The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity

Posted 2 months ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Lead Software Engineer - Java/Scala Development, Hadoop, Spark Overview As a Lead Software Engineer at the Loyalty Rewards and Segments Organization, you will be responsible for designing, developing, testing, and delivering software frameworks in the areas of event-driven architecture and zero trust for use in large-scale distributed systems. Loyalty Rewards and Segments is an organisation within Mastercard that provide end to end loyalty management solution for banks, merchants and Fintechs. The ideal candidate for this role will have a strong background in software design, development, and testing, with a passion for technology and software development. They will be highly motivated, intellectually curious, and analytical, with a desire to continuously learn and improve. As a member of the Loyalty Rewards and Segments team, you will have the opportunity to work on cutting-edge technologies and collaborate with cross-functional teams to deliver software frameworks that meet the needs of Mastercard's customers. Role Key Responsibilities Lead the technical direction, architecture, design, and engineering practices. Prototype and proving concepts for new technologies, application frameworks, and design patterns to improve software development practices. Design and develop software frameworks using industry-standard best practices and methodologies Write efficient and maintainable code that meets feature specifications Debug and troubleshoot code to resolve issues and improve performance Validate software functionality, including performance, reliability, and security Collaborate with cross-functional teams to architect and deliver new services Participate in code reviews to ensure code quality and consistency Document software design, development, and testing processes Balance trade-offs between competing interests with judgment and experience. Identify synergies and reuse opportunities across teams and programs. Key Expectations Focus on individual and team objectives as an active participant in the Agile/Scrum development process, completing assignments on time, with the necessary quality, and in accordance with the project timeline Continuously learn and keep up-to-date with the latest software development technologies and methodologies Communicate effectively and professionally with team members and stakeholders Proactively identify opportunities for process improvements and efficiency gains Demonstrate a commitment to quality, best practices, and continuous improvement All About You Current, deep, hands-on software engineering experience in architecture, design, and implementation of large-scale distributed systems. Rich experience and deep knowledge in event-driven architecture is a must, and zero trust architecture expertise is highly desirable. 
- Proficiency in Java, Scala, and SQL (Oracle, Postgres, H2, Hive, and HBase) and in building data pipelines.
- Expertise and deep understanding of the Hadoop ecosystem, including HDFS, YARN, and MapReduce; tools such as Hive and Pig/Flume; data processing frameworks such as Spark; cloud platforms; orchestration tools such as Apache NiFi and Airflow; and Apache Kafka.
- Expertise in web applications (Spring Boot, Angular, Java, PCF), web services (REST/OAuth), and tools (Sonar, Splunk, Dynatrace) is a must.
- Expertise in SQL, Oracle, and Postgres.
- Experience with XP, TDD, and BDD in the software development process.
- Familiarity with secure coding standards (e.g., OWASP, CWE, SEI CERT) and vulnerability management.
- Strong understanding of software engineering principles, design patterns, and best practices.
- Excellent analytical and problem-solving skills and experience working in an Agile environment.
- Strong verbal and written communication skills to demo features to product owners; strong leadership qualities to mentor and support junior team members; proactive, with the initiative to take development work from inception to implementation.
- Passion for technology and software development, with a strong desire to continuously learn and improve.
- Comfortable taking thoughtful risks and acquiring expertise as needed.
- Able to foster a comfortable environment for tough technical discussions where everyone can be heard.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security training in accordance with Mastercard’s guidelines.

R-246306
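
The event-driven Kafka-plus-Spark stack listed above can be illustrated with a minimal Structured Streaming sketch. The broker address, topic, and event schema are assumptions, and the Kafka connector package is assumed to be on the Spark classpath.

```python
# Minimal Spark Structured Streaming sketch: consume a Kafka topic and write
# parsed events to the console. Broker, topic, and schema are assumed values.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("loyalty_events_stream").getOrCreate()

event_schema = StructType([
    StructField("member_id", StringType()),
    StructField("event_type", StringType()),
    StructField("points", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
         .option("subscribe", "loyalty-events")              # assumed topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
         .select("e.*")
)

query = events.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```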

Posted 2 months ago

Apply

2.0 - 7.0 years

40 - 45 Lacs

Chandigarh

Work from Office

Responsibilities:
- Design and develop complex data processes in coordination with business stakeholders to support critical financial and operational processes.
- Design and develop ETL/ELT pipelines against traditional databases and distributed systems, and flexibly produce data back to the business and analytics teams for analysis.
- Work in an agile, fail-fast environment directly with business stakeholders and analysts, while recognising data reconciliation and validation requirements.
- Develop data solutions in coordination with development teams across a variety of products and technologies.
- Build processes that analyse and monitor data to help maintain controls: correctness, completeness, and latency (see the sketch below).
- Participate in design reviews and code reviews.
- Work with colleagues across global locations.
- Troubleshoot and resolve production issues.
- Drive performance enhancements.

Required Skills & Qualifications:
- Programming skills: Python / PySpark / Scala
- Database skills: analytical databases such as Snowflake / SQL
- Good to have: Elasticsearch, Kafka, NiFi, Jupyter Notebooks
- Good to have: knowledge of AWS services such as S3 / Glue / Athena / EMR / Lambda
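
The data controls mentioned above (correctness, completeness, latency) are typically implemented as automated checks over loaded tables. The PySpark sketch below is purely illustrative; the path, columns, and thresholds are assumptions.

```python
# Illustrative data-quality checks on a loaded table: completeness (null rate),
# correctness (no negative amounts), and latency (freshness of the latest load).
from datetime import datetime, timedelta
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_controls").getOrCreate()
df = spark.read.parquet("s3://example-bucket/curated/trades/")  # assumed path

total = df.count()
null_keys = df.filter(F.col("trade_id").isNull()).count()
negative_amounts = df.filter(F.col("amount") < 0).count()
latest_load = df.agg(F.max("load_ts")).first()[0]
now = datetime.utcnow()  # assumes load_ts is stored in UTC

failures = []
if total and null_keys / total > 0.001:
    failures.append(f"completeness: {null_keys} null trade_id values")
if negative_amounts:
    failures.append(f"correctness: {negative_amounts} negative amounts")
if latest_load is None or latest_load < now - timedelta(hours=2):
    failures.append(f"latency: latest load_ts is {latest_load}")

print("DQ PASS" if not failures else "DQ FAIL: " + "; ".join(failures))
```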

Posted 2 months ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What you’ll be doing… We are looking for data engineers who can work with world class team members to help drive telecom business to its full potential . We are building data products / assets for telecom wireless and wireline business which includes consumer analytics, telecom network performance and service assurance analytics etc. We are working on cutting edge technologies like digital twin to build these analytical platforms and provide data support for varied AI ML implementations. As a data engineer you will be collaborating with business product owners , coaches , industry renowned data scientists and system architects to develop strategic data solutions from sources which includes batch, file and data streams As a subject matter expert of solutions & platforms, you will be responsible for providing technical leadership to various projects on the data platform team. You are expected to have depth of knowledge on specified technological areas, which includes knowledge of applicable processes, methodologies, standards, products and frameworks. Driving the technical design of large scale data platforms, utilizing modern and open source technologies, in a hybrid cloud environment Setting standards for data engineering functions; design templates for the data management program which are scalable, repeatable, and simple. Building strong multi-functional relationships and getting recognized as a data and analytics subject matter expert among other teams. Collaborating across teams to settle appropriate data sources, develop data extraction and business rule solutions. Sharing and incorporating best practices from the industry using new and upcoming tools and technologies in data management & analytics. Organizing, planning and developing solutions to sophisticated data management problem statements. Defining and documenting architecture, capturing and documenting non - functional (architectural) requirements, preparing estimates and defining technical solutions to proposals (RFPs). Designing & Developing reusable and scalable data models to suit business deliverables Designing & Developing data pipelines. Providing technical leadership to the project team to perform design to deployment related activities, provide guidance, perform reviews, prevent and resolve technical issues. Collaborating with the engineering, DevOps & admin team to ensure alignment to efficient design practices, and fix issues in dev, test and production environments from infrastructure is highly available and performing as expected. Designing, implementing, and deploying high-performance, custom solutions. Where you'll be working… In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. What we’re looking for... You are curious and passionate about Data and truly believe in the high impact it can create for the business. 
People count on you for your expertise in data management in all phases of the software development cycle. You enjoy the challenge of solving complex data management problems and competing priorities in a multifaceted, complex and deadline-oriented environment. Building effective working relationships and collaborating with other technical teams across the organization comes naturally to you.

You'll need to have…
- Six or more years of relevant experience.
- Knowledge of information systems and their applications to data management processes.
- Experience performing detailed analysis of business problems and technical environments and designing the solution.
- Experience working with Google Cloud Platform and BigQuery.
- Experience working with big data technologies and utilities: Hadoop, Spark, Scala, Kafka, NiFi.
- Experience with relational SQL and NoSQL databases.
- Experience with data pipeline, workflow management and governance tools.
- Experience with stream-processing systems.
- Experience with object-oriented/object function scripting languages.
- Experience building data solutions for machine learning and artificial intelligence.
- Knowledge of data analytics and modeling tools.

Even better if you have…
- A Master’s degree in Computer Science or a related field.
- Experience with frontend/web technologies: React JS, CSS, HTML.
- Experience with backend services: Java Spring Boot, Node JS.
- Experience working with data and visualization products.
- Certifications in any data warehousing/analytics solutioning.
- Certifications in GCP.
- Ability to clearly articulate the pros and cons of various technologies and platforms.
- Experience collaborating with multi-functional teams and managing partner expectations.
- Written and verbal communication skills.
- Ability to work in a fast-paced agile development environment.

#AI&D

Where you’ll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Posted 2 months ago

Apply

10.0 years

0 Lacs

Mohali

On-site

Job Description Job Title: Chief AI Officer (CAIO) Location: Mohali Reports To: CEO Exp: 10+ Years About RChilli: RChilli is a global leader in HR Tech, delivering AI-driven solutions for resume parsing, data enrichment, and talent acquisition. We are looking for a visionary Chief AI Officer (CAIO) to drive AI strategy, innovation, and ethical AI deployment in HRTech. Key Responsibilities: AI Strategy & Leadership Develop and execute RChilli’s AI strategy aligned with business goals. Ensure ethical AI implementation and compliance with industry regulations. Be a change leader in adopting AI across the company. AI-Driven Product Innovation Lead AI research & development for NLP, machine learning, and predictive analytics. Implement AI for automated job descriptions, resume scoring, and candidate recommendations. Oversee AI-powered chatbots, workforce planning, and predictive retention models. Identify opportunities for AI implementation, including: Automated calls for candidate screening, interview scheduling, and feedback collection. AI-powered report generation for HR analytics, performance tracking, and compliance. AI-based note-taking and meeting summarization for enhanced productivity. Technology & Infrastructure Define and implement a scalable AI roadmap. Manage AI infrastructure, data lakes, ETL processes, and automation. Oversee data lakes and ETL tools such as Airflow and NiFi for efficient data management. Ensure robust data engineering and analysis frameworks. Generative AI, Conversational AI & Transformative AI Apply Generative AI for automating job descriptions, resume parsing, and intelligent recommendations. Leverage Conversational AI for chatbots, virtual assistants, and AI-driven HR queries. Utilize Transformative AI for workforce planning, sentiment analysis, and predictive retention models. Tool Identification & Implementation Identify business requirements and assess third-party AI tools available in the market. Implement and integrate AI tools to enhance operations and optimize business processes. Business Integration & Operations Collaborate with cross-functional teams to integrate AI into HRTech solutions. Understand and optimize business processes for AI adoption. Align AI-driven processes with business efficiency and customer needs. Leadership & Talent Development Build and mentor an AI team, fostering a culture of innovation. Promote AI literacy across the organization. Industry Thought Leadership Represent RChilli in AI forums, conferences, and industry partnerships. Stay ahead of AI trends and HRTech advancements. Required Skills & Qualifications: Technical Skills: Master’s/Ph.D. in Computer Science, AI, Data Science, or related field. 10+ years of experience in AI/ML, with 5+ years in leadership roles. Expertise in NLP, machine learning, deep learning, and predictive analytics. Experience in AI ethics, governance, and compliance frameworks. Strong proficiency in AI infrastructure, data engineering, and automation tools. Understanding of data lakes, ETL processes, Airflow, and NiFi tools. Clear concepts in data engineering and analysis. Leadership & Business Skills: Strategic thinker with the ability to align AI innovation with business goals. Excellent communication and stakeholder management skills. Experience in building and leading AI teams. Why Join RChilli? Lead AI Innovation: Shape AI-driven HR solutions in a globally recognized HRTech company. Impactful Work: Drive AI transformations in HR operations and talent acquisition. 
Growth & Learning: Work with a passionate AI research and product team. Competitive Package: Enjoy a competitive salary, benefits, and career growth opportunities. If you are a visionary AI leader ready to transform HRTech, join RChilli as our Chief AI Officer.
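
As one concrete illustration of the resume-scoring capability this role oversees, a simple baseline ranks resumes by text similarity to a job description. The TF-IDF sketch below is a hypothetical example, not RChilli's actual parsing or scoring technology.

```python
# Baseline resume-vs-job-description scoring via TF-IDF cosine similarity.
# Purely illustrative; production resume scoring is far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Data engineer with PySpark, Airflow, NiFi and SQL experience"
resumes = {
    "candidate_a": "Built PySpark pipelines orchestrated with Airflow; strong SQL.",
    "candidate_b": "Front-end developer focused on React and CSS animations.",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description] + list(resumes.values()))
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Rank candidates by similarity to the job description
for (name, _), score in sorted(zip(resumes.items(), scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```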

Posted 2 months ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role Refer you will be responsible section You will be responsible for We are seeking high-performing developers to work on re-platforming an on-premise digital wallet into a set of microservices. Developers will be expected to work on maintaining the legacy product and deliver business-driven changes alongside rebuild work. The candidate will be expected to be up to date with modern development technologies and techniques. You will be expected to have good communication skills and to challenge; where appropriate what how and why of code/designs to ensure the optimal end solution. -Good knowledge and working experience on Big data Hadoop ecosystem & distributed systems -Strong understanding of underlying Hadoop Architectural concepts and distributed computing paradigms -The Data Engineer would work with a highly efficient team of data scientists and data engineers -Excellent programming skills in Scala/Spark & shell scripting -Prior experience in using technologies like Oozie Hive Spark HBase Nifi Sqoop and Hbase and Zookeeper -Good knowledge on engineering practices like CI/CD Jenkins Maven & GIT Hub -Good experience on Kafka and Schema registry -Good exposure on cloud computing (Azure/AWS) -Aware of different design patterns optimization techniques locking principles -Should know how to scale systems and optimize performance using caching -Should have worked on batch and streaming pipelines -Implement end-to-end Hadoop ecosystem components and accompanying frameworks with minimal assistance. -Good understanding of NFRs ( scalability reliability maintainability usability fault-tolerant systems) -Drive out features via appropriate test frameworks. -Translate small behaviour requirements into tasks & code. -Develop high-quality code that can lead to rapid delivery. Ruthlessly pursuing continuous integration and delivery. -Commit code early and often demonstrating my understanding of version control & branching strategies. -Apply patterns for integration (events/services) and Identify patterns in code and refactor the code towards them where it increases understanding and/or maintainability with minimal guidance. -Follow the best practices of continuous BDD/TDD/Performance/Security/Smoke testing. -Work effectively with my product stakeholders to communicate and translate their needs into improvements in my product. -Certifications like Hortonworks / Cloudera Developer Certifications added an advantage -Excellent communication and presentation skills should demonstrate thought leadership and influence people -Strong computer science fundamentals logical thinking and reasoning -Participate in team ceremonies. -Support production systems resolve incidents and perform root cause analysis -Debug code and support/maintain the software solution. You will need Refer you will be responsible section Whats in it for you? At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market- competitive reward package, based on the current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable. Salary - Your fixed pay is the guaranteed pay as per your contract of employment. 
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of earned leave, 12 days of casual/sick leave) and 10 national and festival holidays, as per the company's policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their families. Our medical insurance provides coverage for dependents, including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

About Us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.

Tesco Technology
Today, our Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India. In India, our Technology division includes teams dedicated to Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and other roles. At Tesco, our retail platform comprises a wide array of capabilities, value propositions, and products, essential for crafting exceptional retail experiences for our customers and colleagues across all channels and markets. This platform encompasses all aspects of our operations – from identifying and authenticating customers, managing products, pricing, promoting, enabling customers to discover products, facilitating payment, and ensuring delivery. By developing a comprehensive Retail Platform, we ensure that as customer touchpoints and devices evolve, we can consistently deliver seamless experiences. This adaptability allows us to respond flexibly without the need to overhaul our technology, thanks to the capabilities we have built.

Posted 2 months ago

Apply

6.5 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: GCP Data Engineer
Location: Hyderabad / Chennai / Bangalore
Experience: 6.5 to 12 years
Primary skills: GCP, Spark/PySpark, data streaming, Python, SQL, ETL

You'll need to have…
- Bachelor's degree or four or more years of work experience.
- Six or more years of relevant work experience.
- Knowledge of information systems and their applications to data management processes.
- Experience performing detailed analysis of business problems and technical environments and designing the solution.
- Experience working with Google Cloud Platform and BigQuery (see the sketch below).
- Experience working with big data technologies and utilities: Hadoop, Spark, Scala, Kafka, NiFi.
- Experience with relational SQL and NoSQL databases.
- Experience with data pipeline, workflow management and governance tools.
- Experience with stream-processing systems.
- Experience with object-oriented/object function scripting languages.
- Experience building data solutions for machine learning and artificial intelligence.
- Knowledge of data analytics and modeling tools.
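
For the GCP/BigQuery requirement above, programmatic access typically follows the pattern sketched below; the project, dataset, and table names are placeholders, and credentials are assumed to be configured in the environment.

```python
# Minimal BigQuery query sketch using the official Python client.
# Dataset/table names are placeholders; credentials come from the environment.
from google.cloud import bigquery

client = bigquery.Client()  # picks up project and credentials from the environment

sql = """
    SELECT event_date, COUNT(*) AS events
    FROM `example_project.analytics.network_events`
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY event_date
    ORDER BY event_date
"""

for row in client.query(sql).result():
    print(row.event_date, row.events)
```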

Posted 2 months ago

Apply

6.0 years

0 Lacs

Chandigarh, India

On-site

Experience Required: 6+ Years

Key Responsibilities:
- Design and implement modern data lakehouse architectures (Delta Lake or equivalent) on cloud platforms like AWS, Azure, or GCP.
- Define data modeling, schema evolution, partitioning, and governance strategies for high-performance and secure data access.
- Own the technical roadmap for scalable data platform solutions, aligned with enterprise needs and future growth.
- Provide architectural guidance and code/design reviews across data engineering teams.
- Build and maintain reliable, high-throughput data pipelines for ingestion, transformation, and integration of structured, semi-structured, and unstructured data.
- Solid understanding of data warehousing concepts, ETL/ELT pipelines, and data modeling.
- Experience with tools like Apache Spark (PySpark/Scala), Hive, dbt, and SQL for large-scale data transformation.
- Design ETL/ELT workflows using orchestration tools like Apache Airflow, Temporal, or Apache NiFi (a minimal Airflow sketch follows below).
- Lead and mentor a team of data engineers and provide guidance on code quality, design principles, and best practices.
- Act as a data architecture SME; collaborate with DevOps, Data Scientists, Product Owners, and Business Analysts to understand data requirements and deliver aligned solutions.
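
A minimal Airflow DAG of the kind the orchestration responsibility above refers to might look like the sketch below; the task callables, schedule, and DAG id are hypothetical.

```python
# Minimal Airflow DAG sketch: a daily extract -> transform -> load sequence.
# Task callables and schedule are placeholders for a real ELT workflow.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files into the landing zone")

def transform():
    print("run Spark/dbt transformations on the landed data")

def load():
    print("publish curated tables to the warehouse")

with DAG(
    dag_id="daily_elt_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```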

Posted 2 months ago

Apply

3.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Description:
- Expert-level skills in Java 11+ and Spring Boot (Spring Cloud, Spring Security, JPA)
- Strong knowledge of RESTful web services
- High proficiency with development tools and workflows (JUnit, Maven, continuous workflows, etc.)
- Log4j, SSO (single sign-on implementation), Maven, JUnit, Sonar
- Experience in SQL and MongoDB
- Good to have: Apache NiFi, Groovy scripting

Requirements
- Knowledge of the software development lifecycle or IT infrastructure.
- Good problem-solving and analytical skills.
- Strong communication and teamwork abilities.
- Willingness to learn and adapt in a fast-paced environment.

Benefits
- Comprehensive Medical Insurance – Covers the entire family, ensuring health and well-being for employees and their loved ones.
- Hybrid Work Culture – Only 3 days in-office per week, offering flexibility and reducing commute stress.
- Healthy Work-Life Balance – Prioritizes employee well-being, allowing time for personal and professional growth.
- Friendly Working Environment – A supportive and collaborative workplace culture that fosters positivity and teamwork.
- No Variable Pay Deductions – Guaranteed full compensation with no unpredictable pay cuts, ensuring financial stability.
- Certification Reimbursement – The company will reimburse the cost of certifications required by project demands.

Requirements
- 3+ years of software development in related technologies.
- Proficiency with fundamental front-end languages and frameworks: HTML, CSS, and JavaScript (jQuery and front-end frameworks such as React or Angular).
- Proficient with the following server-side technologies: C#, .NET Core, ASP.NET MVC.
- Develop, test, and maintain RESTful APIs using .NET technologies, including ASP.NET Web API or ASP.NET Core.
- Design and implement database solutions using Microsoft SQL Server, including database schema design, stored procedures, functions, and triggers.
- Write efficient SQL queries to retrieve and manipulate data from databases.
- Implement authentication and authorization mechanisms for APIs using OAuth, JWT, or other security standards.
- Integrate APIs with third-party systems, services, and data sources.
- Collaborate with frontend developers to design API endpoints that meet the requirements of web and mobile applications.
- Optimize API performance and scalability by identifying and addressing bottlenecks.
- Ensure the security, reliability, and compliance of APIs with industry standards and best practices.
- Implement logging, monitoring, and error handling mechanisms to track and troubleshoot API usage and performance issues.
- Work closely with QA engineers to ensure the quality and reliability of API implementations.
- Keep abreast of emerging technologies and best practices in API development, the .NET framework, and Microsoft SQL Server.

Posted 2 months ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Senior Data Scientist, Product Data & Analytics Senior Data Scientist, Product Data & Analytics Our Vision: Product Data & Analytics team builds internal analytic partnerships, strengthening focus on the health of the business, portfolio and revenue optimization opportunities, initiative tracking, new product development and Go-To Market strategies. We are a hands-on global team providing scalable end-to-end data solutions by working closely with the business. We influence decisions across Mastercard through data driven insights. We are a team on analytics engineers, data architects, BI developers, data analysts and data scientists, and fully manage our own data assets and solutions. Are you excited about Data Assets and the value they bring to an organization? Are you an evangelist for data driven decision making? Are you motivated to be part of a Global Analytics team that builds large scale Analytical Capabilities supporting end users across the continents? Are you interested in proactively looking to improve data driven decisions for a global corporation? Role Responsible for developing data-driven innovative scalable analytical solutions and identifying opportunities to support business and client needs in a quantitative manner and facilitate informed recommendations / decisions. Accountable for delivering high quality project solutions and tools within agreed upon timelines and budget parameters and conducting post- implementation reviews. Contributes to the development of custom analyses and solutions, derives insights from extracted data to solve critical business questions. Activities include developing and creating predictive models, behavioural segmentation frameworks, profitability analyses, ad hoc reporting, and data visualizations. Able to develop AI/ML capabilities, as needed on large volumes of data to support analytics and reporting needs across products, markets and services. Able to build end to end reusable, multi-purpose AI models to drive automated insights and recommendations. Leverage open and closed source technologies to solve business problems. Work closely with global & regional teams to architect, develop, and maintain advanced reporting and data visualization capabilities on large volumes of data to support analytics and reporting needs across products, markets, and services. Support initiatives in developing predictive models, behavioural segmentation frameworks, profitability analyses, ad hoc reporting, and data visualizations. Translates client/ stakeholder needs into technical analyses and/or custom solutions in collaboration with internal and external partners, derive insights and present findings and outcomes to clients/stakeholders to solve critical business questions. Create repeatable processes to support development of modelling and reporting Delegate and reviews work for junior level colleagues to ensure downstream applications and tools are not compromised or delayed. 
Serves as a mentor for junior-level colleagues and develops talent via ongoing technical training, peer review, etc.

All About You
- 6-8 years of experience in data management, data mining, data analytics, data reporting, data product development and quantitative analysis.
- Advanced SQL skills; ability to write optimized queries for large data sets.
- Experience on platforms/environments such as Cloudera Hadoop, the big data technology stack, SQL Server, the Microsoft BI stack, cloud, Snowflake, and other relevant technologies.
- Experience with data visualization tools (Tableau, Domo, and/or Power BI or similar) is a plus.
- Experience applying data validation, quality control and cleansing processes to new and existing data sources.
- Experience with classical and deep machine learning algorithms such as logistic regression, decision trees, clustering (K-means, hierarchical, and self-organizing maps), t-SNE, PCA, Bayesian models, time series (ARIMA/ARMA), random forest, GBM, KNN, SVM, text mining techniques, multilayer perceptrons, neural networks (feedforward, CNN), NLP, etc.
- Experience with deep learning techniques, open-source tools and technologies, statistical tools, and programming environments such as Python and R, and big data platforms such as Hadoop, Hive, Spark, and GPU clusters for deep learning.
- Experience automating and creating data pipelines via tools such as Alteryx and SSIS; NiFi is a plus.
- Financial institution or payments experience is a plus.

Additional Competencies
- Excellent English, quantitative, technical, and communication (oral/written) skills.
- Ownership of end-to-end project delivery and risk mitigation.
- Virtual team management and the ability to manage stakeholders by influence.
- Analytical/problem-solving skills.
- Able to prioritize and perform multiple tasks simultaneously and to work across varying time zones.
- Strong attention to detail and quality; creativity/innovation.
- Self-motivated; operates with a sense of urgency.
- In-depth technical knowledge, drive, and ability to learn new technologies.
- Must be able to interact with management and internal stakeholders.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security training in accordance with Mastercard’s guidelines.

R-244065
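
The behavioural segmentation work referenced above is often prototyped with standard clustering. The scikit-learn sketch below uses synthetic data and hypothetical feature names purely for illustration.

```python
# Illustrative behavioural segmentation: scale features and cluster with K-means.
# Data and feature names are synthetic placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=42)
# Hypothetical cardholder features: monthly_spend, txn_count, online_share
features = rng.normal(loc=[500, 30, 0.4], scale=[200, 10, 0.2], size=(1000, 3))

scaled = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42).fit(scaled)

for label in range(4):
    segment = features[kmeans.labels_ == label]
    print(f"segment {label}: {len(segment)} members, "
          f"avg monthly spend {segment[:, 0].mean():.0f}")
```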

Posted 2 months ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Senior Data Engineer
Experience: 5 to 8 Years
Location: Pune/Coimbatore (Hybrid)

We’re looking for a skilled Big Data Engineer to design and build scalable data pipelines and infrastructure in a cloud environment, enabling analytics, AI/ML, and real-time insights.

Key Responsibilities:
- Build and optimize ETL pipelines and data lake architectures
- Implement algorithms to transform raw data into insights
- Ensure data quality, governance, and security compliance
- Develop tools for data validation, monitoring, and reporting
- Prepare data for AI/ML and predictive modeling

Must-Have Skills:
- Strong experience in Spark, Hadoop, NiFi, HDFS, Kafka
- Proficient in Scala/Java, SQL, and data pipeline design
- Knowledge of streaming & batch processing, observability, and Agile practices
- Hands-on with Delta Tables, Apache Ozone, Databricks, Axon, Spring Batch, Oracle DB (a minimal Delta table sketch follows below)
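
Since the must-have skills call out Delta Tables alongside Spark, a minimal write-and-read sketch is shown below. It assumes the delta-spark package is installed and configured for the session; the path and schema are placeholders.

```python
# Minimal Delta Lake sketch: write a small DataFrame as a Delta table, read it back.
# Assumes the delta-spark package and its jars are available to this Spark session.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta_demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

df = spark.createDataFrame(
    [(1, "sensor-a", 21.5), (2, "sensor-b", 19.8)],
    ["reading_id", "device", "temperature"],
)

# Write as a Delta table to a placeholder path, then read it back
df.write.format("delta").mode("overwrite").save("/tmp/example/readings_delta")
spark.read.format("delta").load("/tmp/example/readings_delta").show()
```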

Posted 2 months ago

Apply

10.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site

Job Description
Job Title: Chief AI Officer (CAIO). Location: Mohali. Reports To: CEO. Exp: 10+ Years.
About RChilli: RChilli is a global leader in HR Tech, delivering AI-driven solutions for resume parsing, data enrichment, and talent acquisition. We are looking for a visionary Chief AI Officer (CAIO) to drive AI strategy, innovation, and ethical AI deployment in HRTech.
Key Responsibilities
AI Strategy & Leadership: Develop and execute RChilli's AI strategy aligned with business goals. Ensure ethical AI implementation and compliance with industry regulations. Be a change leader in adopting AI across the company.
AI-Driven Product Innovation: Lead AI research and development for NLP, machine learning, and predictive analytics. Implement AI for automated job descriptions, resume scoring, and candidate recommendations. Oversee AI-powered chatbots, workforce planning, and predictive retention models. Identify opportunities for AI implementation, including automated calls for candidate screening, interview scheduling, and feedback collection; AI-powered report generation for HR analytics, performance tracking, and compliance; and AI-based note-taking and meeting summarization for enhanced productivity.
Technology & Infrastructure: Define and implement a scalable AI roadmap. Manage AI infrastructure, data lakes, ETL processes, and automation. Oversee data lakes and ETL tools such as Airflow and NiFi for efficient data management (see the orchestration sketch after this listing). Ensure robust data engineering and analysis frameworks.
Generative AI, Conversational AI & Transformative AI: Apply Generative AI for automating job descriptions, resume parsing, and intelligent recommendations. Leverage Conversational AI for chatbots, virtual assistants, and AI-driven HR queries. Utilize Transformative AI for workforce planning, sentiment analysis, and predictive retention models.
Tool Identification & Implementation: Identify business requirements and assess third-party AI tools available in the market. Implement and integrate AI tools to enhance operations and optimize business processes.
Business Integration & Operations: Collaborate with cross-functional teams to integrate AI into HRTech solutions. Understand and optimize business processes for AI adoption. Align AI-driven processes with business efficiency and customer needs.
Leadership & Talent Development: Build and mentor an AI team, fostering a culture of innovation. Promote AI literacy across the organization.
Industry Thought Leadership: Represent RChilli in AI forums, conferences, and industry partnerships. Stay ahead of AI trends and HRTech advancements.
Skills & Qualifications: Master's or Ph.D. in Computer Science, AI, Data Science, or a related field. 10+ years of experience in AI/ML, with 5+ years in leadership roles. Expertise in NLP, machine learning, deep learning, and predictive analytics. Experience in AI ethics, governance, and compliance frameworks. Strong proficiency in AI infrastructure, data engineering, and automation tools. Understanding of data lakes, ETL processes, and tools such as Airflow and NiFi. Clear concepts in data engineering and analysis.
Leadership & Business Skills: Strategic thinker with the ability to align AI innovation with business goals. Excellent communication and stakeholder management skills. Experience in building and leading AI teams.
Why Join RChilli? Lead AI Innovation: shape AI-driven HR solutions in a globally recognized HRTech company. Impactful Work: drive AI transformations in HR operations and talent acquisition. Growth & Learning: work with a passionate AI research and product team. Competitive Package: enjoy a competitive salary, benefits, and career growth opportunities. If you are a visionary AI leader ready to transform HRTech, join RChilli as our Chief AI Officer.
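The infrastructure duties above reference ETL orchestration with tools such as Airflow and NiFi. As a hedged illustration only (not part of the posting), a minimal daily Apache Airflow DAG for that kind of pipeline might look like the sketch below; the DAG id, schedule, and task bodies are hypothetical.

# Minimal illustrative Airflow DAG; dag_id, schedule, and task logic are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_resumes(**context):
    # Placeholder: pull raw resume records from a source system.
    print("extracting resumes")


def load_to_lake(**context):
    # Placeholder: write cleaned records to the data lake.
    print("loading to data lake")


with DAG(
    dag_id="resume_enrichment_etl",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_resumes", python_callable=extract_resumes)
    load = PythonOperator(task_id="load_to_lake", python_callable=load_to_lake)
    extract >> load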

Posted 2 months ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Requirements
Description and Requirements
Position Summary: A Big Data (Hadoop) Administrator responsible for supporting the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, automation, and scripting (e.g., Ansible, Azure DevOps, Shell, Python) to streamline operations and improve efficiency is highly valued.
Job Responsibilities: Assist in the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Perform routine monitoring, troubleshooting, and issue resolution to ensure the stability and performance of Hadoop clusters. Develop and maintain scripts (e.g., Python, Bash, Ansible) to automate operational tasks and improve system efficiency (a minimal monitoring sketch follows this listing). Collaborate with cross-functional teams, including application development, infrastructure, and operations, to support business requirements and implement new features. Implement and follow best practices for cluster security, including user access management and integration with tools like Apache Ranger and Kerberos. Support backup, recovery, and disaster recovery processes to ensure data availability and business continuity. Conduct performance tuning and optimization of Hadoop clusters to enhance system efficiency and reduce latency. Analyze logs and use tools like Splunk to debug and resolve production issues. Document operational processes, maintenance procedures, and troubleshooting steps to ensure knowledge sharing and consistency. Stay updated on emerging technologies and contribute to the adoption of new tools and practices to improve cluster management.
Education: Bachelor's degree in Computer Science, Information Systems, or another related field with 7+ years of IT and infrastructure engineering work experience.
Experience: 7+ years total IT experience and 4+ years of relevant experience in Big Data databases.
Technical Skills:
Big Data Platform Management: Knowledge of managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM BigSQL.
Automation and Scripting: Expertise in automation tools and scripting languages such as Ansible, Python, and Bash to streamline operational tasks and improve efficiency.
DevOps Practices: Proficiency in DevOps tools and methodologies, including CI/CD pipelines, version control systems (e.g., Git), and infrastructure-as-code practices.
Monitoring and Troubleshooting: Experience with monitoring and observability tools such as Splunk, Elastic Stack, or Prometheus to identify and resolve system issues.
Linux Administration: Solid knowledge of Linux operating systems, including system administration, troubleshooting, and performance tuning.
Backup and Recovery: Familiarity with implementing and managing backup and recovery processes to ensure data availability and business continuity.
Security and Access Management: Understanding of security best practices, including user access management and integration with tools like Kerberos.
Agile Methodologies: Knowledge of Agile practices and frameworks, such as SAFe, with experience working in Agile environments.
ITSM Tools: Familiarity with ITSM processes and tools like ServiceNow for incident and change management.
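To ground the scripting and monitoring duties above, here is a hedged Python sketch of the kind of routine automation involved: it parses the output of the standard hdfs dfsadmin -report command and warns when cluster usage crosses a threshold. The 80% threshold and the alerting behaviour are illustrative assumptions, and the script presumes the hdfs CLI is available on a cluster node.

# Illustrative health check: parse `hdfs dfsadmin -report` and warn on high DFS usage.
# The threshold is a hypothetical value, not from the posting.
import re
import subprocess
import sys

USAGE_THRESHOLD_PCT = 80.0


def cluster_dfs_used_percent() -> float:
    """Return the cluster-level 'DFS Used%' figure reported by the NameNode."""
    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"DFS Used%:\s*([\d.]+)%", report)
    if not match:
        raise RuntimeError("Could not find 'DFS Used%' in the dfsadmin report")
    return float(match.group(1))


if __name__ == "__main__":
    used = cluster_dfs_used_percent()
    print(f"DFS used: {used:.1f}%")
    if used > USAGE_THRESHOLD_PCT:
        print("WARNING: DFS usage above threshold", file=sys.stderr)
        sys.exit(1)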
About MetLife
Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!

Posted 2 months ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Greetings from TCS!!! TCS has been a great pioneer in feeding the fire of young Techies like you. We are a global leader in the technology arena and there's nothing that can stop us from growing together. Your role is of key importance, as it lays down the foundation for the entire project.
Position: QA - Big Data. Job Location: Bangalore / Pune. Experience: 8+. Interview Mode: Virtual (MS Teams).
JD: Job Title: Manual/Automation QA Tester - Spark, Scala, Apache NiFi, Java
Key Responsibilities: Develop, execute, and maintain automated and manual test scripts for data processing pipelines built using Apache Spark and Apache NiFi (a sample automated check is sketched after this listing). Perform end-to-end testing of ETL processes, including data extraction, transformation, and loading. Design and implement test plans, test cases, and test data for functional, regression, and performance testing. Collaborate with developers, data engineers, and product managers to understand requirements, identify test scenarios, and ensure test coverage. Analyze test results, identify defects, and work closely with the development team to troubleshoot and resolve issues. Monitor, report, and track the quality of data processing and ETL jobs in a big data environment.
Preferred Qualifications: Experience with Big Data technologies such as Spark. Knowledge of other programming languages like Python or Java. Experience with performance testing tools and techniques for data processing workloads. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
Work Location India: Bangalore / Pune
TCS Eligibility Criteria: *BE/B.Tech/MCA/M.Sc./MS with a minimum of 3 years of relevant IT experience post qualification. *Only full-time courses would be considered. *Candidates who have attended a TCS interview within 1 month need not apply.
Referrals are always welcome!!!
Thanks & Regards
Jerin L Varghese
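To make the automated-testing responsibility above concrete, here is a hedged sketch of a PySpark transformation check written with pytest. The dedupe_events function and the column names are hypothetical stand-ins for a pipeline under test, not anything from the posting.

# Illustrative pytest-based check for a Spark transformation; all names are hypothetical.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def dedupe_events(df):
    """Example transformation under test: keep one row per event_id."""
    return df.dropDuplicates(["event_id"])


@pytest.fixture(scope="module")
def spark():
    session = SparkSession.builder.master("local[1]").appName("qa-tests").getOrCreate()
    yield session
    session.stop()


def test_dedupe_removes_duplicate_event_ids(spark):
    df = spark.createDataFrame(
        [(1, "create"), (1, "create"), (2, "update")],
        ["event_id", "action"],
    )
    result = dedupe_events(df)
    assert result.count() == 2
    assert result.filter(F.col("event_id") == 1).count() == 1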

Posted 2 months ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Requirements
Description and Requirements
Position Summary: A Big Data (Hadoop) Administrator responsible for supporting the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, automation, and scripting (e.g., Ansible, Azure DevOps, Shell, Python) to streamline operations and improve efficiency is highly valued.
Job Responsibilities: Assist in the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Perform routine monitoring, troubleshooting, and issue resolution to ensure the stability and performance of Hadoop clusters. Develop and maintain scripts (e.g., Python, Bash, Ansible) to automate operational tasks and improve system efficiency. Collaborate with cross-functional teams, including application development, infrastructure, and operations, to support business requirements and implement new features. Implement and follow best practices for cluster security, including user access management and integration with tools like Apache Ranger and Kerberos. Support backup, recovery, and disaster recovery processes to ensure data availability and business continuity. Conduct performance tuning and optimization of Hadoop clusters to enhance system efficiency and reduce latency. Analyze logs and use tools like Splunk to debug and resolve production issues. Document operational processes, maintenance procedures, and troubleshooting steps to ensure knowledge sharing and consistency. Stay updated on emerging technologies and contribute to the adoption of new tools and practices to improve cluster management.
Education: Bachelor's degree in Computer Science, Information Systems, or another related field with 7+ years of IT and infrastructure engineering work experience.
Experience: 7+ years total IT experience and 4+ years of relevant experience in Big Data databases.
Technical Skills:
Big Data Platform Management: Knowledge of managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM BigSQL.
Automation and Scripting: Expertise in automation tools and scripting languages such as Ansible, Python, and Bash to streamline operational tasks and improve efficiency.
DevOps Practices: Proficiency in DevOps tools and methodologies, including CI/CD pipelines, version control systems (e.g., Git), and infrastructure-as-code practices.
Monitoring and Troubleshooting: Experience with monitoring and observability tools such as Splunk, Elastic Stack, or Prometheus to identify and resolve system issues.
Linux Administration: Solid knowledge of Linux operating systems, including system administration, troubleshooting, and performance tuning.
Backup and Recovery: Familiarity with implementing and managing backup and recovery processes to ensure data availability and business continuity.
Security and Access Management: Understanding of security best practices, including user access management and integration with tools like Kerberos.
Agile Methodologies: Knowledge of Agile practices and frameworks, such as SAFe, with experience working in Agile environments.
ITSM Tools: Familiarity with ITSM processes and tools like ServiceNow for incident and change management.
About MetLife
Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!

Posted 2 months ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About the Role: As a Big Data Engineer, you will play a critical role in integrating multiple data sources, designing scalable data workflows, and collaborating with data architects, scientists, and analysts to develop innovative solutions. You will work with rapidly evolving technologies to achieve strategic business goals.
Must-Have Skills: 4+ years of mandatory experience with Big Data. 4+ years of mandatory experience in Apache Spark. Proficiency in Apache Spark, Hive on Tez, and Hadoop ecosystem components. Strong coding skills in Python and PySpark. Experience building reusable components or frameworks using Spark. Expertise in data ingestion from multiple sources using APIs, HDFS, and NiFi. Solid experience working with structured, unstructured, and semi-structured data formats (Text, JSON, Avro, Parquet, ORC, etc.). Experience with UNIX Bash scripting and databases like Postgres, MySQL, and Oracle. Ability to design, develop, and evolve fault-tolerant distributed systems. Strong SQL skills, with expertise in Hive, Impala, Mongo, and NoSQL databases. Hands-on experience with Git and CI/CD tools. Experience with streaming data technologies (Kafka, Spark Streaming, Apache Flink, etc.). Proficiency with HDFS or similar data lake technologies. Excellent problem-solving skills; you will be evaluated through coding rounds.
Key Responsibilities: Must be capable of handling an existing or new Apache HDFS cluster, including name node, data node, and edge node commissioning and decommissioning. Work closely with data architects and analysts to design technical solutions. Integrate and ingest data from multiple source systems into big data environments (an illustrative ingestion sketch follows this listing). Develop end-to-end data transformations and workflows, ensuring logging and recovery mechanisms. Must be able to troubleshoot Spark job failures. Design and implement batch, real-time, and near-real-time data pipelines. Optimize Big Data transformations using Apache Spark, Hive, and Tez. Work with Data Science teams to enhance actionable insights. Ensure seamless data integration and transformation across multiple systems.
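As a hedged illustration of the ingestion and transformation work described above (not taken from the posting), the following PySpark sketch reads semi-structured JSON landed on HDFS, applies a simple cleanup, and writes partitioned Parquet; the paths and column names are hypothetical.

# Illustrative batch ingestion: JSON in, partitioned Parquet out. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest_orders").getOrCreate()

# Read semi-structured JSON landed on HDFS (e.g. by NiFi or an API pull).
raw = spark.read.json("hdfs:///landing/orders/")

cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("order_id").isNotNull())
)

# Write columnar, partitioned output for downstream Hive/Impala queries.
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("hdfs:///curated/orders/"))

spark.stop()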

Posted 2 months ago

Apply

3.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Greetings from TCS!!! TCS has been a great pioneer in feeding the fire of young Techies like you. We are a global leader in the technology arena and there's nothing that can stop us from growing together. Your role is of key importance, as it lays down the foundation for the entire project.
Job Title: Big Data Developer. Job Location: Pune. Experience: 4-7 years.
Must-have skillset: PySpark, Scala, NiFi, Hadoop
High-level job description for LVS exit: Professional Big Data Hadoop development experience of 3-8 years is preferred. Expertise with Big Data ecosystem services, such as Spark (Scala/Python), Hive, Kafka, and Unix, and experience with any cloud stack, preferably GCP (BigQuery & Dataproc). Experience in working with large cloud data lakes. Experience with large-scale data processing, complex event processing, and stream processing (an illustrative streaming sketch follows this listing). Experience in working with CI/CD pipelines, source code repositories, and operating environments. Experience in working with both structured and unstructured data, with a high degree of SQL knowledge. Experience designing and implementing scalable ETL/ELT processes and modeling data for low-latency reporting. Experience in performance tuning, troubleshooting and diagnostics, process monitoring, and profiling. Understanding of containerization, virtualization, and cloud computing.
TCS Eligibility Criteria: *BE/B.Tech/MCA/M.Sc./MS with a minimum of 3 years of relevant IT experience post qualification. *Only full-time courses would be considered. *Candidates who have attended a TCS interview within 1 month need not apply.
Referrals are always welcome!!!
Thanks & Regards
Kavya T
Talent Acquisition Associate
Tata Consultancy Services
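The stream-processing experience asked for above could look like the following hedged Spark Structured Streaming sketch, which reads from a Kafka topic and appends to Parquet. The broker address, topic, and paths are hypothetical, and the Spark-Kafka integration package is assumed to be on the classpath.

# Illustrative Structured Streaming job: Kafka topic -> Parquet sink. All names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "events")
         .option("startingOffsets", "latest")
         .load()
)

# Kafka keys/values arrive as bytes; cast to string before further parsing.
parsed = events.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    parsed.writeStream
          .format("parquet")
          .option("path", "hdfs:///curated/events/")
          .option("checkpointLocation", "hdfs:///checkpoints/events/")
          .outputMode("append")
          .start()
)
query.awaitTermination()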

Posted 2 months ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Who Are We: Fulcrum Digital is an agile, next-generation digital accelerating company providing digital transformation and technology services, from ideation to implementation. These services have applicability across a variety of industries, including banking & financial services, insurance, retail, higher education, food, health care, and manufacturing.
The Role: Plan, manage, and oversee all aspects of a production environment for Big Data platforms. Define strategies for application performance monitoring and optimization in the production environment. Respond to incidents, improve the platform based on feedback, and measure the reduction of incidents over time. Ensure that batch production scheduling and processes are accurate and timely. Ability to create and execute queries against the big data platform and relational data tables to identify process issues or to perform mass updates is preferred. Perform ad hoc requests from users, such as data research, file manipulation/transfer, and research of process issues. Take a holistic approach to problem-solving by connecting the dots during a production event across the various technology stacks that make up the platform, to optimize mean time to recover. Engage in and improve the whole lifecycle of services, from inception and design through deployment, operation, and refinement. Analyze ITSM activities of the platform and provide a feedback loop to development teams on operational gaps or resiliency concerns. Support services before they go live through activities such as system design consulting, capacity planning, and launch reviews. Support the application CI/CD pipeline for promoting software into higher environments through validation and operational gating, and lead in DevOps automation and best practices. Maintain services once they are live by measuring and monitoring availability, latency, and overall system health (a minimal monitoring sketch follows this listing). Scale systems sustainably through mechanisms like automation, and evolve systems by pushing for changes that improve reliability and velocity. Work with a global team spread across tech hubs in multiple geographies and time zones. Ability to share knowledge and explain processes and procedures to others.
Requirements: Experience in Linux and knowledge of ITSM/ITIL. Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala). 2 years of experience running Big Data production systems. Good to have experience with industry-standard CI/CD tools like Git/Bitbucket, Jenkins, and Maven. Solid grasp of SQL or Oracle fundamentals. Experience with scripting, pipeline management, and software design. Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive. Ability to help debug and optimize code and automate routine tasks. Ability to support many different stakeholders. Experience in dealing with difficult situations and making decisions with a sense of urgency is needed. Appetite for change and pushing the boundaries of what can be done with automation. Experience in working across development, operations, and product teams to prioritize needs and build relationships is a must. Experience designing and implementing an effective and efficient CI/CD flow that gets code from dev to prod with high quality and minimal manual effort is desired. Good handle on the change management and release management aspects of software.
Locations - Pune, India
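For the platform-monitoring duties described above, here is a hedged Python sketch: it queries the standard YARN ResourceManager REST API (/ws/v1/cluster/apps) and flags long-running applications. The ResourceManager host and the runtime threshold are assumptions, not values from the posting.

# Illustrative monitoring helper: flag long-running YARN applications. Host and threshold are assumptions.
import time
import requests

RM_URL = "http://resourcemanager.example.com:8088"  # hypothetical ResourceManager address
MAX_RUNTIME_HOURS = 6                               # hypothetical SLA threshold


def long_running_apps():
    resp = requests.get(f"{RM_URL}/ws/v1/cluster/apps", params={"states": "RUNNING"}, timeout=30)
    resp.raise_for_status()
    apps = (resp.json().get("apps") or {}).get("app", [])
    now_ms = int(time.time() * 1000)
    return [
        (a["id"], a["name"], (now_ms - a["startedTime"]) / 3_600_000)
        for a in apps
        if now_ms - a["startedTime"] > MAX_RUNTIME_HOURS * 3_600_000
    ]


if __name__ == "__main__":
    for app_id, name, hours in long_running_apps():
        print(f"{app_id} {name!r} running for {hours:.1f}h")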

Posted 2 months ago

Apply

5.0 - 8.0 years

15 - 18 Lacs

Coimbatore

Hybrid

Role & responsibilities Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. Constructing infrastructure for efficient ETL processes from various sources and storage systems. Leading the implementation of algorithms and prototypes to transform raw data into useful information. Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations. Creating innovative data validation methods and data analysis tools. Ensuring compliance with data governance and security policies. Interpreting data trends and patterns to establish operational alerts. Developing analytical tools, programs, and reporting mechanisms. Conducting complex data analysis and presenting results effectively. Preparing data for prescriptive and predictive modeling. Continuously exploring opportunities to enhance data quality and reliability. Applying strong programming and problem-solving skills to develop scalable solutions.
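As a hedged sketch of the data validation methods mentioned in this role (the dataset path, column names, and thresholds are hypothetical), a simple PySpark quality check might look like this:

# Illustrative data-quality check: null and duplicate counts on key columns. All names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

df = spark.read.parquet("s3a://curated/customers/")  # hypothetical curated dataset

total = df.count()
null_emails = df.filter(F.col("email").isNull()).count()
dup_ids = total - df.dropDuplicates(["customer_id"]).count()

print(f"rows={total} null_emails={null_emails} duplicate_ids={dup_ids}")

# Fail the pipeline step if quality thresholds are breached (thresholds are illustrative).
if total == 0 or null_emails / total > 0.01 or dup_ids > 0:
    raise ValueError("Data quality check failed for customers dataset")

spark.stop()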

Posted 2 months ago

Apply

12.0 - 18.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Role: Senior Manager. Exp: 12+ yrs. Budget: Max 30 LPA. Location: Bangalore, immediate joiners. Graduation: BE, B.Tech, ME, M.Tech. Exposure to Spring, NiFi, Kafka, Postgres, Elasticsearch, Java, Ansible, Angular, Node.js, Python, React, MongoDB, and CI/CD DevOps.

Posted 2 months ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.
Title and Summary: Software Development Engineer II - Software Development Engineer (Data Engineering)
Overview: Mastercard is the global technology company behind the world's fastest payments processing network. We are a vehicle for commerce, a connection to financial systems for the previously excluded, a technology innovation lab, and the home of Priceless®. We ensure every employee has the opportunity to be a part of something bigger and to change lives. We believe as our company grows, so should you. We believe in connecting everyone to endless, priceless possibilities. Enterprise Data Solution (EDS) is focused on enabling insights into the Mastercard network and helping build data-driven products by curating and preparing data in a secure and reliable manner. Moving to a "Unified and Fault-Tolerant Architecture for Data Ingestion and Processing" is critical to achieving this mission. As a Software Development Engineer (Data Engineering) in Enterprise Data Solution (EDS), you will have the opportunity to build high-performance data pipelines to load into the Mastercard Data Warehouse. Our Data Warehouse provides analytical capabilities to a number of business users who help customers answer their business problems through data. You will play a vital role within a rapidly growing organization, while working closely with experienced and driven engineers to solve challenging problems.
Role: Participate in medium-to-large data engineering projects. Discover, ingest, and incorporate new sources of real-time, streaming, batch, and API-based data into our platform to enhance the insights we get from running tests and expand the ways and properties on which we can test. Assist the business in utilizing data-driven insights to drive growth and transformation. Build and maintain data processing workflows feeding Mastercard analytics domains. Facilitate reliable integrations with internal systems and third-party APIs as needed. Support data analysts as needed, advising on data definitions and helping them derive meaning from complex datasets. Work with cross-functional agile teams to drive projects through the full development cycle. Help the team improve with the usage of data engineering best practices. Collaborate with other data engineering teams to improve the data engineering ecosystem and talent within Mastercard.
All About You: At least a Bachelor's degree in Computer Science, Computer Engineering, or a technology-related field, or equivalent work experience. Experience in Data Warehouse related projects in a product or service based organization. Expertise in Data Engineering and implementing multiple end-to-end DW projects in a Big Data environment. Experience working with databases like Oracle and Netezza, with strong SQL knowledge. Additional experience building data pipelines through Spark with Scala/Python/Java on Hadoop is preferred (a hedged warehouse-load sketch follows this listing). Experience working on NiFi will be an added advantage. Experience working in Agile teams. Strong analytical skills required for debugging production issues, providing root cause analysis, and implementing mitigation plans. Strong verbal and written communication skills, along with strong relationship, collaboration, and organizational skills. Ability to be high-energy, detail-oriented, proactive, and able to function under pressure in an independent environment, along with a high degree of initiative and self-motivation to drive results. Ability to quickly learn and implement new technologies, and perform POCs to explore the best solution for a problem statement. Flexibility to work as a member of matrix-based, diverse, and geographically distributed project teams.
Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard's security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
R-246732
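As a hedged illustration of loading pipeline output into a relational warehouse such as Oracle or Netezza (mentioned above), a minimal Spark JDBC write is sketched below. The JDBC URL, table, and credentials handling are hypothetical, and the matching JDBC driver jar would need to be on the Spark classpath.

# Illustrative warehouse load via JDBC; URL, table, and driver details are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dw_load").getOrCreate()

facts = spark.read.parquet("hdfs:///curated/transactions/")  # hypothetical curated input

(facts.write
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dw-host:1521/DWSVC")  # hypothetical connection string
      .option("dbtable", "ANALYTICS.TRANSACTION_FACT")
      .option("user", "etl_user")
      .option("password", "***")  # in practice, pull credentials from a secrets manager
      .option("driver", "oracle.jdbc.OracleDriver")
      .mode("append")
      .save())

spark.stop()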

Posted 2 months ago

Apply

1.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are a certified Great Place to Work® and recognized as one of the best companies to work for in India. We have provided full-stack product development for 110+ startups across the globe, building products in the cloud-native, data engineering, B2B SaaS, IoT & Machine Learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products.
Requirements: Design and build scalable data infrastructure with efficiency, reliability, and consistency to meet rapidly growing data needs. Build the applications required for optimal extraction, cleaning, transformation, and loading of data from disparate data sources and formats using the latest big data technologies. Build ETL/ELT pipelines and work with other data infrastructure components, like data lakes, data warehouses, and BI/reporting/analytics tools. Work with various cloud services like AWS, GCP, and Azure to implement highly available, horizontally scalable data processing and storage systems, and automate manual processes and workflows. Implement processes and systems to monitor data quality, to ensure data is always accurate, reliable, and available for the stakeholders and other business processes that depend on it. Work closely with different business units and engineering teams to develop a long-term data platform architecture strategy and thus foster data-driven decision-making practices across the organization. Help establish and maintain a high level of operational excellence in data engineering. Evaluate, integrate, and build tools to accelerate Data Engineering, Data Science, Business Intelligence, Reporting, and Analytics as needed. Focus on test-driven development by writing unit/integration tests. Contribute to design documents and the engineering wiki.
You will enjoy this role if you: Like building elegant, well-architected software products with enterprise customers. Want to learn to leverage public cloud services & cutting-edge big data technologies, like Spark, Airflow, Hadoop, Snowflake, and Redshift. Work collaboratively as part of a close-knit team of geeks, architects, and leads.
Desired Skills & Experience: 1+ years of data engineering or equivalent knowledge and ability. 1+ years of software engineering or equivalent knowledge and ability. Strong proficiency in at least one of the following programming languages: Python, Scala, or Java. Experience designing and maintaining at least one type of database (Object Store, Columnar, In-memory, Relational, Tabular, Key-Value Store, Triple-store, Tuple-store, Graph, and other related database types). Good understanding of star/snowflake schema designs (a hedged star-schema loading sketch follows this listing). Extensive experience working with big data technologies like Spark, Hadoop, and Hive. Experience building ETL/ELT pipelines and working on other data infrastructure components like BI/reporting/analytics tools. Experience working with workflow orchestration tools like Apache Airflow, Oozie, Azkaban, NiFi, Airbyte, etc. Experience building production-grade data backup/restore strategies and disaster recovery solutions. Hands-on experience with implementing batch and stream data processing applications using technologies like AWS DMS, Apache Flink, Apache Spark, AWS Kinesis, Kafka, etc.
Knowledge of best practices in developing and deploying applications that are highly available and scalable. Experience with or knowledge of Agile software development methodologies. Excellent problem-solving and troubleshooting skills. Process-oriented with excellent documentation skills.
Bonus points if you: Have hands-on experience using one or multiple cloud service providers like AWS, GCP, or Azure, and have worked with specific products like EMR, Glue, DataProc, DataBricks, DataStudio, etc. Have hands-on experience working with Redshift, Snowflake, BigQuery, Azure Synapse, or Athena, and understand the inner workings of these cloud storage systems. Have experience building data lakes, scalable data warehouses, and data marts. Have familiarity with tools like Jupyter Notebooks, Pandas, NumPy, SciPy, scikit-learn, Seaborn, SparkML, etc. Have experience building and deploying Machine Learning models to production at scale. Possess excellent cross-functional collaboration and communication skills.
Our Culture: We have an autonomous and empowered work culture encouraging individuals to take ownership and grow quickly. Flat hierarchy with fast decision making and a startup-oriented "get things done" culture. A strong, fun, and positive environment with regular celebrations of our success. We pride ourselves on creating an inclusive, diverse, and authentic environment.
At Velotio, we embrace diversity. Inclusion is a priority for us, and we are eager to foster an environment where everyone feels valued. We welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation.
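As a hedged sketch of the star-schema loading mentioned in this listing (all table paths, columns, and keys are hypothetical), a fact-table build that resolves a surrogate key from an existing dimension might look like this:

# Illustrative star-schema load: resolve a dimension surrogate key onto a fact table.
# All paths, columns, and keys are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star_schema_load").getOrCreate()

orders = spark.read.parquet("s3a://staging/orders/")                 # staged transactional data
dim_customer = spark.read.parquet("s3a://warehouse/dim_customer/")   # existing dimension table

fact_orders = (
    orders.join(dim_customer.select("customer_id", "customer_sk"), on="customer_id", how="left")
          .select("order_id", "customer_sk", "order_date", "amount")
)

(fact_orders.write
            .mode("append")
            .partitionBy("order_date")
            .parquet("s3a://warehouse/fact_orders/"))

spark.stop()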

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies