5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
SailPoint is the leader in identity security for the cloud enterprise. Our identity security solutions secure and enable thousands of companies worldwide, giving our customers unmatched visibility into the entirety of their digital workforce, ensuring workers have the right access to do their job – no more, no less. Built on a foundation of AI and ML, our Identity Security Cloud Platform delivers the right level of access to the right identities and resources at the right time—matching the scale, velocity, and changing needs of today’s cloud-oriented, modern enterprise. About the role: Want to be on a team full of results-driven individuals who are constantly seeking to innovate? Want to make an impact? At SailPoint, our Data Platform team does just that. SailPoint is seeking a Senior Data/Software Engineer to help build a robust data ingestion and processing system to power our data platform. We are looking for well-rounded engineers who are passionate about building and delivering reliable, scalable data pipelines. This is a unique opportunity to build something from scratch while having the backing of an organization that has the muscle to take it to market quickly, with a very satisfied customer base. Responsibilities: Spearhead the design and implementation of ELT processes, especially focused on extracting data from and loading data into various endpoints, including RDBMS, NoSQL databases, and data warehouses. Develop and maintain scalable data pipelines for both stream and batch processing, leveraging JVM-based languages and frameworks. Collaborate with cross-functional teams to understand diverse data sources and environment contexts, ensuring seamless integration into our data ecosystem. Utilize the AWS service stack wherever possible to implement lean design solutions for data storage, data integration, and data streaming problems. Develop and maintain workflow orchestration using tools like Apache Airflow (a brief sketch follows this posting). Stay abreast of emerging technologies in the data engineering space, proactively incorporating them into our ETL processes. Thrive in an environment with ambiguity, demonstrating adaptability and problem-solving skills. Qualifications: BS in computer science or a related field. 5+ years of experience in data engineering or a related field. Demonstrated system-design experience orchestrating ELT processes targeting data. Must be willing to work four hours overlapping with US time zones, as this role works closely with US-based managers and engineers. Hands-on experience with at least one streaming or batch processing framework, such as Flink or Spark. Hands-on experience with containerization platforms such as Docker and container orchestration tools like Kubernetes. Proficiency in the AWS service stack. Experience with DBT, Kafka, Jenkins, and Snowflake. Experience leveraging tools such as Kustomize, Helm, and Terraform for implementing infrastructure as code. Strong interest in staying ahead of new technologies in the data engineering space. Comfortable working in ambiguous team situations, showcasing adaptability and drive in solving novel problems in the data engineering space.
Preferred: Experience with AWS. Experience with Continuous Delivery. Experience instrumenting code for gathering production performance metrics. Experience working with a Data Catalog tool (e.g., Atlan, Alation). What success looks like in the role: Within the first 30 days you will: Onboard into your new role, get familiar with our product offering and technology, proactively meet peers and stakeholders, and set up your test and development environment. Seek to deeply understand business problems or common engineering challenges and propose software architecture designs to solve them elegantly by abstracting useful common patterns. By 90 days: Proactively collaborate on, discuss, debate, and refine ideas, problem statements, and software designs with different (sometimes many) stakeholders, architects, and members of your team. Take a committed approach to prototyping and co-implementing systems alongside less experienced engineers on your team—there’s no room for ivory towers here. By 6 months: Collaborate with Product Management and the Engineering Lead to estimate and deliver small to medium complexity features more independently. Occasionally serve as a debugging and implementation expert during escalations of systems issues that have evaded the ability of less experienced engineers to solve in a timely manner. Share support of critical team systems by participating in calls with customers, learning the characteristics of currently running systems, and participating in improvements. SailPoint is an equal opportunity employer and we welcome everyone to our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
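For illustration, here is a minimal sketch of the kind of Airflow-orchestrated ELT workflow the responsibilities above describe; it assumes Airflow 2.x, and the DAG name, schedule, and task bodies are hypothetical placeholders rather than SailPoint's actual pipeline:

```python
"""Minimal ELT DAG sketch: extract from a source system, then load a
warehouse. All names and the schedule here are hypothetical."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    # Placeholder: pull rows from a source RDBMS or API.
    print("extracting source rows...")


def load_warehouse():
    # Placeholder: bulk-load the extracted rows into the warehouse.
    print("loading rows into the warehouse...")


with DAG(
    dag_id="elt_orders_daily",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    load = PythonOperator(task_id="load", python_callable=load_warehouse)
    extract >> load  # extract must finish before load starts
```

In a production pipeline the callables would use connection hooks rather than prints, and the DAG would typically carry retries, SLAs, and alerting.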
Posted 5 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
I am thrilled to share an exciting opportunity with one of our esteemed clients! 🚀 Join me in exploring new horizons and unlocking potential, if you're ready for a challenge and growth. Exp: 7 yrs. Location: Chennai/Hyderabad/Coimbatore. Notice: immediate to 30 days. JD: Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field. 4+ years of in-depth data engineering experience, with at least 1 year of dedicated experience engineering solutions in an enterprise-scale Snowflake environment. Tactical expertise in ANSI SQL, performance tuning, and data modeling techniques. Strong experience with cloud platforms (preference for Azure) and their data services. Experience in ETL/ELT development using tools such as Azure Data Factory, dbt, Matillion, Talend, or Fivetran. Hands-on experience with scripting languages like Python for data processing. Snowflake SnowPro certification; preference for the engineering course path. Experience with CI/CD pipelines, DevOps practices, and Infrastructure as Code (IaC). Knowledge of streaming data processing frameworks such as Apache Kafka or Spark Streaming. Familiarity with BI and visualization tools such as Power BI. Regards, R Usha, usha@livecjobs.com
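For a flavour of the hands-on Snowflake and Python work described above, a minimal sketch using the Snowflake Python connector; the account locator, credentials, and orders table are hypothetical:

```python
"""Minimal sketch: run an ANSI SQL aggregation against Snowflake from
Python. Account, credentials, and the orders table are hypothetical."""
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",  # hypothetical account locator
    user="etl_user",               # hypothetical credentials
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("""
        SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
        FROM orders
        GROUP BY region
        ORDER BY revenue DESC
    """)
    for region, orders, revenue in cur.fetchall():
        print(region, orders, revenue)
finally:
    conn.close()  # always release the session
```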
Posted 5 days ago
7.0 years
0 Lacs
Kochi, Kerala, India
On-site
Greetings from the TCS Recruitment Team! Role: DATABRICKS LEAD / DATABRICKS SOLUTION ARCHITECT / DATABRICKS ML ENGINEER. Years of experience: 7 to 18 years. Walk-in-Drive Location: Kochi. Walk-in Location Details: Tata Consultancy Services, TCS Centre SEZ Unit, Infopark Kochi Phase 1, Infopark Kochi P.O., Kakkanad, Kochi - 682042, Kerala, India. Drive Time: 9 AM to 1:00 PM. Date: 21-Jun-25. Must have: 5+ years of experience in data engineering or related fields. At least 2-3 years of hands-on experience with Databricks (using Apache Spark, Delta Lake, etc.). Solid experience working with big data technologies such as Hadoop, Spark, Kafka, or similar. Experience with cloud platforms (AWS, Azure, or GCP) and cloud-native data tools. Experience with machine learning frameworks and pipelines, particularly in Databricks. Experience with AI/ML model deployment, MLOps, and ML lifecycle management using Databricks and related tools. Regards, Sundar V
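As a small illustration of the hands-on Databricks skills listed above, a PySpark sketch that writes and reads a Delta table; it assumes a Databricks runtime (or any Spark session with the delta-spark package configured), and the path and columns are hypothetical:

```python
"""Write a small DataFrame as a Delta table, then read it back.
Assumes Delta Lake is available (it is by default on Databricks)."""
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_sketch").getOrCreate()

df = spark.createDataFrame(
    [(1, "created"), (2, "shipped")], ["order_id", "status"])

# Delta adds ACID transactions and time travel on top of Parquet files.
df.write.format("delta").mode("overwrite").save("/tmp/delta/orders")

spark.read.format("delta").load("/tmp/delta/orders").show()
```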
Posted 5 days ago
6.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key Attributes: Adaptability & Agility: Thrive in a fast-paced, ever-evolving environment with shifting priorities. Demonstrated ability to quickly learn and integrate new technologies and frameworks. Strong problem-solving mindset with the ability to juggle multiple priorities effectively. Core Responsibilities: Design, develop, test, and maintain robust Python applications and data pipelines using Python/PySpark. Define and implement smart data pipelines from RDBMS to graph databases. Build and expose APIs using AWS Lambda and ECS-based microservices. Collaborate with cross-functional teams to define, design, and deliver new features. Write clean, efficient, and scalable code following best practices. Troubleshoot, debug, and optimise applications for performance and reliability. Contribute to the setup and maintenance of CI/CD pipelines and deployment workflows if required. Ensure security, compliance, and observability across all development activities. All you need is... Required Skills & Experience: Expert-level proficiency in Python with a strong grasp of object-oriented and functional programming. Solid experience with SQL and graph databases (e.g., Neo4j, Amazon Neptune). Hands-on experience with cloud platforms – AWS and/or Azure is a must. Proficiency in PySpark or similar data ingestion and processing frameworks. Familiarity with DevOps tools such as Docker, Kubernetes, Jenkins, and Git. Strong understanding of CI/CD, version control, and agile development practices. Excellent communication and collaboration skills. Desirable Skills: Experience with Agentic AI, machine learning, or LLM-based systems. Familiarity with Apache Iceberg or similar modern data lakehouse formats. Knowledge of Infrastructure as Code (IaC) tools like Terraform or Ansible. Understanding of microservices architecture and distributed systems. Exposure to observability tools (e.g., Prometheus, Grafana, ELK stack). Experience working in Agile/Scrum environments. Minimum Qualifications: 6 to 8 years of hands-on experience in Python development and data engineering. Demonstrated success in delivering production-grade software and scalable data solutions.
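To illustrate the "RDBMS to graph database" pipelines this role mentions, a minimal sketch that copies rows from a relational source into Neo4j; the connection details and the users/follows schema are hypothetical:

```python
"""Sketch of an RDBMS-to-graph load: read rows from a relational source
and MERGE them as nodes/relationships in Neo4j. The SQLite source and
the users/follows schema are hypothetical stand-ins."""
import sqlite3

from neo4j import GraphDatabase  # pip install neo4j


def load_follows(sqlite_path: str, neo4j_uri: str, user: str, password: str):
    rows = sqlite3.connect(sqlite_path).execute(
        "SELECT follower_id, followee_id FROM follows"
    ).fetchall()

    driver = GraphDatabase.driver(neo4j_uri, auth=(user, password))
    with driver.session() as session:
        for follower, followee in rows:
            # MERGE is idempotent, so re-running the load does not
            # duplicate nodes or relationships.
            session.run(
                "MERGE (a:User {id: $f}) "
                "MERGE (b:User {id: $t}) "
                "MERGE (a)-[:FOLLOWS]->(b)",
                f=follower, t=followee,
            )
    driver.close()


if __name__ == "__main__":
    load_follows("users.db", "bolt://localhost:7687", "neo4j", "***")
```

A batched UNWIND query would be the usual optimisation once row counts grow; the per-row loop keeps the sketch readable.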
Posted 5 days ago
2.0 - 6.0 years
9 - 13 Lacs
Pune
Work from Office
Our Purpose Title and Summary Senior Software Engineer Overview: Prepaid Management Services is the division of MasterCard that concentrates on prepaid solutions such as our Multi-Currency CashPassport product. Traditionally focused on the travel sector, this business unit is driving forward prepaid throughout the world with innovative and leading solutions that we integrate with global brands. This role is within the Global Technology Services (GTS) team, which is part of the wider MasterCard Operations & Technologies group. We provide high-quality evolutionary and operational capabilities to support the MasterCard Prepaid Management Services business. A hands-on software engineer working within multi-disciplined, agile teams. Responsible for producing high-quality, appropriate web-based solutions for both internal and external customers in Prepaid Management Services. Responsible for the development produced by the Scrum team, ensuring all development work adheres to the design and development standards, guidelines and roadmap defined by the Prepaid Management Services Leadership and Architecture teams. Contributes to the coding and testing of new development and changes, ensuring the code is maintainable and of a high standard of quality. Have you worked in application development in a fast-paced Agile environment? Are you passionate about providing technology-driven solutions to address business needs? Can you provide innovative ideas and ensure continuous improvement as part of day-to-day work? Role / Key Responsibilities: Take a participatory role in sprint planning, daily stand-ups, demonstrations and retrospectives. Analyse current processes and systems to produce designs that can be scaled and evolved. Can translate a technical design to implemented code. Responsible for the development of readable and maintainable code, and appropriate unit tests. Actively seeks to minimize code and simplify architecture. Support test and build automation. Produce high-level and detailed estimates. Adheres to the development process and suggests improvements where appropriate. Ensures individual and team tasks are performed on time by communicating and working closely with other members of the team. Retains a focus on completion, identifying and resolving issues. Produces technical design documentation as required and ensures work complies with the architectural roadmap. Identifies and updates existing design documents impacted by changes. All About You / Key Skills and Experience Required: Good experience in Java development (Java 6+, Hibernate, Spring, Spring Boot, web services (REST and SOAP), Eclipse, Apache), AngularJS, Scala. Good database development knowledge (Oracle v10+, PL/SQL). Experience with test-driven development practices and technologies, e.g. JUnit, Maven, etc. Experience with CI/CD using Jenkins pipelines. Experience with Agile development methods. Experience with version control systems (Git) and CI tools (Jenkins, Fortify, Sonar). Strong oral and written communication skills. Desirable: SDLC support tools (ALM, Confluence, Selenium, SharePoint). Code packaging and deployment automation. Financial services experience (Cards/PCI). Personal Qualities: Flexible. Creative. Excellent problem-solving skills. Good communicator. Self-starter. Leadership ability.
Posted 5 days ago
2.0 - 7.0 years
11 - 16 Lacs
Pune
Work from Office
Our Purpose Title and Summary Lead Software Engineer in Test Overview: Mastercard is a technology company in the global payments industry. We operate the world's fastest payments processing network, connecting consumers, financial institutions, merchants, governments and businesses in more than 210 countries and territories. Mastercard products and solutions make everyday commerce activities - such as shopping, travelling, running a business and managing finances - easier, more secure and more efficient for everyone. Mastercard is seeking talented individuals to join our Digital team in Pune, India. Mastercard is researching and developing the next generation of products and services to enable consumers to securely, efficiently, and intelligently conduct transactions regardless of channel. Whether through traditional retail, mobile, or e-commerce, Mastercard innovation is leading the digital convergence of traditional and emerging payments technologies across a wide variety of new devices and services. Join our team and help shape the future of connected commerce! Job Overview: In an exciting and fast-paced environment focused on developing payment authentication and security solutions, this position offers technical leadership and expertise throughout the development lifecycle of the ecommerce payment authentication platform under the Authentication program for Digital Authentication Services. Role: We are looking for an Automation Tester to join the DAS team. This is a pivotal role, responsible for QA, load testing, and automation of various data-driven pipelines. The position involves managing testing infrastructure for functional testing and automation, and coordinating testing that spans multiple programs and projects. The ideal candidate will have experience working with large-scale data and automation testing of Java, cloud-native applications/services. The position will lead the development and maintenance of automated testing frameworks. Provide technical leadership for new major initiatives. Deliver innovative, cost-effective solutions which align to enterprise standards. Drive the reduction of time spent testing. Work to minimize manual testing by identifying high-ROI test cases and automating them. Be an integrated part of an Agile engineering team, working interactively with software engineer leads, architects, testing engineers, and product managers from the beginning of the development cycle. Help ensure functionality delivered in each release is fully tested end to end. Manage multiple priorities and tasks in a dynamic work environment. All About You: Bachelor's degree in computer science or equivalent work experience with hands-on technical and quality engineering skills. Expertise in testing methods, standards, and conventions, including automation and test case creation. Excellent technical acumen, strong organizational and problem-solving skills with great attention to detail, critical thinking, solid communication, and proven leadership skills. Solid leadership and mentoring skills with the ability to drive change. Experience in designing and building testing automation frameworks. Expert in API testing. Experience in UI and mobile automation and testing against different browsers and devices.
Knowledge of Java, SQL, REST APIs, code reviews, scanning tools and configuration, and branching techniques. Experience with application monitoring tools such as Dynatrace and Splunk. Experience with performance testing. Experience with DevOps practices (continuous integration and delivery, and tools such as Jenkins). Nice to have: knowledge of or prior experience with any of the following: Apache Kafka, microservices architecture, build tools like Jenkins. Corporate Security Responsibility: Every person working for, or on behalf of, Mastercard is responsible for information security. All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization, and therefore it is expected that the successful candidate for this position must: abide by Mastercard's security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
Posted 5 days ago
4.0 - 6.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Zeta is a Next-Gen Banking Tech company that empowers banks and fintechs to launch banking products for the future. It was founded by Bhavin Turakhia and Ramki Gaddipati in 2015. Our flagship processing platform - Zeta Tachyon - is the industry's first modern, cloud-native, and fully API-enabled stack that brings together issuance, processing, lending, core banking, fraud & risk, and many more capabilities as a single-vendor stack. 15M+ cards have been issued on our platform globally. Zeta is actively working with the largest banks and fintechs in multiple global markets, transforming customer experience for multi-million card portfolios. Zeta has over 1,700+ employees across the US, EMEA, and Asia, with 70%+ of roles in R&D. Backed by SoftBank, Mastercard, and other investors, we raised $330M at a $2B valuation in 2025. Learn more @ www.zeta.tech, careers.zeta.tech, LinkedIn, Twitter. About the Role: In this role, you'll design robust data models using SQL, DBT and Redshift, while driving best practices across development, deployment, and monitoring. You'll also collaborate closely with product and engineering to ensure data quality and impactful delivery. Responsibilities: Create optimised data models with SQL, DBT and Redshift. Write functional and column-level tests for models. Build reports from the data models. Collaborate with product to clarify requirements and create design documents. Get designs reviewed by an Architect/Principal/Lead Engineer. Contribute to code reviews. Set up and monitor Airflow DAGs. Set up and use CI/CD pipelines. Leverage Kubernetes operators for deployment automation. Ensure data quality. Drive best practices in data model development, deployment, and monitoring. Mentor colleagues and contribute to team growth. Skills: Bachelor's/Master's degree in engineering. In-depth expertise in SQL and Python programming. Strong expertise in SQL for complex data querying and optimization. Hands-on experience with Apache Airflow for orchestration and scheduling. Solid understanding of data modeling and data warehousing concepts. Experience with dbt (Data Build Tool) for data transformation and modeling. Exposure to Amazon Redshift or other cloud data warehouses. Familiarity with CI/CD tools such as Jenkins. Experience using Bitbucket for version control. Monitoring and alerting using Grafana and Prometheus. Working knowledge of JIRA for agile project tracking. Familiarity with Kubernetes for deployment automation and orchestration. Experience and Qualifications: 4-6 years of relevant experience in data engineering. Bachelor's/Master's degree in engineering (computer science, information systems). Equal Opportunity: Zeta is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We encourage applicants from all backgrounds, cultures, and communities to apply and believe that a diverse workforce is key to our success.
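As an illustration of the functional and column-level model tests mentioned above (the kind dbt's schema tests automate), a small pytest-style check run directly against a warehouse table; the DSN and the fct_orders table are hypothetical:

```python
"""Sketch of a column-level data test: assert a model's key column is
unique and non-null. The Redshift DSN and table are hypothetical."""
import psycopg2  # pip install psycopg2-binary


def test_fct_orders_key_is_unique_and_not_null():
    conn = psycopg2.connect(
        "dbname=analytics host=redshift.example.com "  # hypothetical DSN
        "user=dbt password=*** port=5439"
    )
    with conn, conn.cursor() as cur:
        cur.execute("""
            SELECT COUNT(order_id) - COUNT(DISTINCT order_id) AS duplicates,
                   COUNT(*) - COUNT(order_id) AS nulls
            FROM fct_orders
        """)
        duplicates, nulls = cur.fetchone()
    assert duplicates == 0, f"{duplicates} duplicate order_id values"
    assert nulls == 0, f"{nulls} NULL order_id values"
```

In dbt itself the same checks would be declared as `unique` and `not_null` tests in the model's schema file; the Python version just makes the underlying SQL explicit.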
Posted 5 days ago
2.0 - 7.0 years
12 - 13 Lacs
Bengaluru
Work from Office
Location(s): Quay Building 8th Floor, Bagmane Tech Park, Bengaluru, IN. Line of Business: Data Estate (DE). Job Category: Engineering & Technology. Experience Level: Experienced Hire. At Moody's, we unite the brightest minds to turn today's risks into tomorrow's opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways. If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity. Skills and Competencies: Proficiency in Kubernetes and Amazon EKS (2+ years required): essential for managing containerized applications and ensuring high availability and security in cloud-native environments. Strong expertise in AWS serverless technologies (required): including Lambda, API Gateway, EventBridge, and Step Functions, to build scalable and cost-efficient solutions (a minimal Lambda handler sketch follows this posting). Hands-on experience with Terraform (2+ years required): critical for managing Infrastructure as Code (IaC) across multiple environments, ensuring consistency and repeatability. CI/CD pipeline development using GitHub Actions (required): necessary for automating deployments and supporting agile development practices. Scripting skills in Python, Bash, or PowerShell (required): enables automation of operational tasks and enhances infrastructure management capabilities. Experience with Databricks and Apache Kafka (preferred): valuable for teams working with data pipelines, MLOps workflows, and event-driven architectures. Education: Bachelor's degree in Computer Science or equivalent experience. Responsibilities: Design, automate, and manage scalable cloud infrastructure using Kubernetes, AWS, Terraform, and CI/CD pipelines. Design and manage cloud-native infrastructure using container orchestration platforms, ensuring high availability, scalability, and security across environments. Implement and maintain Infrastructure as Code (IaC) using tools like Terraform to provision and manage multi-environment cloud resources consistently and efficiently. Develop and optimize continuous integration and delivery (CI/CD) pipelines to automate application and infrastructure deployments, supporting agile development cycles. Monitor system performance and reliability by configuring observability tools for logging, alerting, and metrics collection, and proactively address operational issues. Collaborate with cross-functional teams to align infrastructure solutions with application requirements, ensuring seamless deployment and performance optimization. Document technical processes and architectural decisions through runbooks, diagrams, and knowledge-sharing resources to support operational continuity and team onboarding. About the team: Our Data Estate DevOps team is responsible for enabling the scalable, secure, and automated infrastructure that powers Moody's enterprise data platform. We ensure the seamless deployment, monitoring, and performance of data pipelines and services that deliver curated, high-quality data to internal and external consumers.
We contribute to Moody's by: Accelerating data delivery and operational efficiency through automation, observability, and infrastructure-as-code practices that support near real-time data processing and remediation. Supporting data integrity and governance by enabling traceable, auditable, and resilient systems that align with regulatory compliance and GenAI readiness. Empowering innovation and analytics by maintaining a modular, interoperable platform that integrates internal and third-party data sources for downstream research models, client workflows, and product applications. By joining our team, you will be part of exciting work in cloud-native DevOps, data engineering, and platform automation, supporting global data operations across 29 countries and contributing to Moody's mission of delivering integrated perspectives on risk and growth.
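For illustration of the AWS serverless stack named above, a minimal Python Lambda handler sketch; it assumes an API Gateway proxy integration, and the payload shape is hypothetical:

```python
"""Minimal AWS Lambda handler sketch for an API Gateway proxy
integration. The request/response payload shape is hypothetical."""
import json


def lambda_handler(event, context):
    # API Gateway's proxy integration passes the HTTP body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")  # hypothetical request field

    # The proxy integration expects statusCode/headers/body in the response.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

In the setup this posting describes, Terraform would typically provision the function and API Gateway route, and GitHub Actions would package and deploy it.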
Posted 5 days ago
4.0 - 6.0 years
5 - 9 Lacs
Bengaluru
Work from Office
About the Role: As a Frontend Developer at InstaSafe, you will play a crucial role in building visually engaging, responsive, and high-performing user interfaces that align with our brand's identity. You will work closely with the design team to turn concepts into functional web applications, ensuring a seamless user experience across all devices. Your responsibilities will include writing clean, reusable code, optimising frontend performance, and collaborating with backend teams to integrate RESTful APIs. Additionally, you'll ensure cross-browser compatibility, troubleshoot issues, and implement accessibility standards to create inclusive digital experiences. This role offers an exciting opportunity to contribute to cutting-edge projects in cybersecurity while collaborating with a dynamic and innovative team. Key Responsibilities: User Interface Development: Collaborate with the design team to build responsive, visually appealing, and user-friendly interfaces, ensuring alignment with brand guidelines. Efficient Frontend Code: Write clean, efficient, and reusable code for frontend components while optimising performance across various devices (memory, CPU). Debugging and Issue Resolution: Diagnose and resolve performance bottlenecks, and debug and extend frontend libraries when required. Pixel-Perfect Design Implementation: Work with Figma for design collaboration and prototyping, ensuring pixel-perfect UI implementation and smooth communication with design teams. Cross-Browser and Device Compatibility: Ensure consistent functionality and visual integrity across different browsers and devices through rigorous testing and optimization. API Integration: Collaborate closely with backend teams to integrate and consume RESTful APIs, ensuring seamless data flow and interactivity between frontend and backend systems. Performance Optimization: Continuously monitor, analyse, and optimise application performance through techniques such as caching, load management, and resource optimization for scalable and high-performing frontend solutions. Accessibility and Usability: Apply WCAG (Web Content Accessibility Guidelines) standards and best practices to create accessible and inclusive user experiences for all users. Requirements: Education: Bachelor's degree in Computer Science, IT, or a related field. Experience: 4-6 years in software development, with a primary focus on Golang in production environments. Frontend Expertise: Strong proficiency in modern frontend development frameworks and tools, particularly AngularJS, HTML5, CSS3, and JavaScript. Experience with PHP (Laravel framework) is a plus. Design and Prototyping Tools: Proficiency in using Figma for collaborating on designs and prototyping. Web Performance Optimization: Understanding of performance optimization techniques, including caching strategies, load management, and working with servers like Apache and Tomcat. Cross-Browser and Device Testing: Experience in ensuring compatibility and consistent performance across multiple browsers and devices. Experience in website enhancement and maintenance is highly desirable.
Soft Skills: Strong communication and collaboration skills, with the ability to interact effectively with cross-functional teams. A proactive attitude towards problem solving and continuous improvement. Ability to work under pressure and manage tight deadlines. Strong organizational and multitasking abilities. Attention to detail and commitment to delivering high-quality work. Self-motivated, with the ability to adapt to evolving technology and organizational needs. Benefits: Opportunity to work with one of the leading companies in cybersecurity. Dynamic and collaborative work culture. Involvement in cutting-edge projects in Zero Trust Security.
Posted 5 days ago
1.0 - 6.0 years
2 - 5 Lacs
Bengaluru
Work from Office
The driving force behind our success has always been the people of AspenTech. What drives us is our aspiration, our desire and ambition to keep pushing the envelope, overcoming any hurdle, challenging the status quo to continually find a better way. You will experience these qualities of passion, pride and aspiration in many ways: from a rich set of career development programs to support of community service projects to social events that foster fun and relationship building across our global community. The Role: This position will work with our Software Development team to ensure high quality through continuous integration and automated testing. This position will promote industry best practices as a part of the Software Engineer in Test team. Your Impact: Work with software development groups that create the product as well as engineering groups that deploy the product. Work with scrum team(s) to ensure high-quality products and superior customer satisfaction. Author, execute and maintain test charters, test cases, procedures and plans, automated test scripts and code. Design, develop, implement, and improve various test methodologies, plans and automation. Review defect descriptions, requirements, and designs to incorporate into test plans. Installation automation, test automation, release automation and system administration. Build and configure Continuous Integration setup for test environments. What You'll Need: Bachelor's degree in Computer Science, Software Engineering or equivalent. Strong hands-on experience (1+ years) in test automation using Python. 0 to 2 years of software Quality Assurance / testing experience. Extensive knowledge of industry-leading QA tools and methodologies. Strong understanding of automation testing best practices. High-level scripting skills (Python, Perl, shell scripting). Demonstrated ability in creating and communicating testing status dashboards. Proficiency with Linux, Windows and UNIX environments. Proficiency with a source control system (such as CVS, Subversion, Git, etc.). Excellent written, verbal and interpersonal communication skills; comfortable communicating and working with geographically distributed teams. Innate desire to automate, monitor and continually improve everything in sight. Experience with Quality Assurance within an Agile methodology. Experience with automation frameworks (Selenium, Robot, Apache JMeter). Experience with evaluating, recommending, and deploying new tools and technologies to continually improve the efficiency and effectiveness of the Quality Assurance process. Experience with various database technologies (such as MySQL, Cassandra, MongoDB). Experience in server virtualization, especially VMware and vSphere. #LI-RK1
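As a small example of the Python test automation this position centres on, a pytest-plus-Selenium smoke-test sketch; the target URL and element ID are hypothetical:

```python
"""Minimal pytest + Selenium smoke test. The application URL and the
element ID checked here are hypothetical placeholders."""
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def browser():
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")  # run without a display
    driver = webdriver.Chrome(options=options)
    yield driver
    driver.quit()  # always release the browser session


def test_login_page_renders(browser):
    browser.get("https://app.example.com/login")  # hypothetical URL
    # A visible username field is a cheap signal the page rendered.
    assert browser.find_element(By.ID, "username").is_displayed()
```

Run with `pytest -q`; in a CI setup like the one described, the same suite would execute on every build.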
Posted 5 days ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Responsibilities JOB DESCRIPTION: Develop using Java and frameworks to rapidly build and deploy applications. Work with OCI engineering leaders to develop scalable, operationally-focused, customer-facing cloud services. Use technologies like Kubernetes to operate highly available, high-performance distributed systems. Automate common tasks to enable continuous delivery and ensure continuous availability with minimal human overhead. Drive performant, forward-thinking solutions to completion on time. Maintain both development and production infrastructure as part of a customer-focused engineering culture. Provide technical guidance and constructive feedback to leadership, team members, and other stakeholders. Contribute to product roadmaps by identifying areas of need and engaging with stakeholders to scope work. Raise the bar for engineering quality and best practices. Career Level - IC5. Required Qualifications: Passionate, curious, go-getter attitude. 10+ years of total experience in software development. Demonstrated ability to write great code using Java, GoLang, C#, or similar OO languages. Proven ability to deliver products and experience with the full software development lifecycle. Experience working on large-scale, highly distributed services infrastructure. Experience working in an operational environment with mission-critical tier-one services. Systematic problem-solving approach, strong communication skills, a sense of ownership, and drive. Experience designing architectures that demonstrate deep technical depth in one area, or span many products, to enable high availability, scalability, market-leading features and flexibility to meet future business demands. Strong knowledge of computer networking (OSI layers, HTTP, DNS, TCP/IP, DHCP, routers, gateways, subnets, etc.). Preferred Qualifications: Hands-on experience developing services on a public cloud platform (e.g., AWS, Azure, Oracle). Building continuous integration/deployment pipelines with robust testing and deployment schedules. Strong knowledge of databases (SQL and NoSQL). Knowledge of Linux internals, Linux/Unix troubleshooting skills. Experience with Kafka, Apache Spark, Lucene and other big data technologies. Able to effectively communicate technical ideas verbally and in writing (technical proposals, design specs, architecture diagrams and presentations). Experience with hiring, mentorship and raising the talent bar across the organization. About Us: As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.
Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 5 days ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Description: As a Principal Engineer for Networking Control Services, you will spearhead the development of innovative services in the Networking Automation & Health domain. This initiative involves building a robust software stack from the ground up to monitor and manage network infrastructure, automatically resolve network issues, and ensure the high-bandwidth, low-latency connections essential for large language model (LLM) workloads. We are seeking a Principal Software Engineer with strong expertise in distributed systems, microservices, high-volume data processing and operational excellence. The ideal candidate should possess a strong sense of ownership. Career Level - IC4. Responsibilities: Principal Member of Technical Staff - Network Control Services. The Oracle Cloud Infrastructure (OCI) team can provide you with the opportunity to build and operate a suite of massive-scale, integrated cloud services in a broadly distributed, multi-tenant cloud environment. OCI is committed to providing the best in cloud products that meet the needs of our customers who are tackling some of the world’s biggest challenges. We offer unique opportunities for smart, hands-on engineers with the expertise and passion to solve difficult problems in distributed, highly available services and virtualized infrastructure. At every level, our engineers have a significant technical and business impact designing and building innovative new systems to power our customers’ business-critical applications. Who are we looking for? We are looking for engineers with distributed systems experience. You should have experience with the design of major features and launching them into production. You’ve operated high-scale services and understand how to make them more resilient. You work on most projects and tasks independently. You have experience working with services that require data to travel long distances, but have to abide by compliance and regulations. The ideal candidate will own the software design and development for major components of Oracle’s Cloud Infrastructure. You should be both a rock-solid coder and a distributed systems generalist, able to dive deep into any part of the stack and low-level systems, as well as design broad distributed system interactions. You should value simplicity and scale, work comfortably in a collaborative, agile environment, and be excited to learn. What are the biggest challenges for the team? The team is building a brand new service. The dynamic and fast growth of the business is driving us to build brand new innovative technologies. We understand that software is living and needs investment. The challenge is making the right tradeoffs, communicating those decisions effectively, and crisp execution. We need engineers who can build services that reliably protect our customers' cloud environments. We need engineers who can keep our solution evolving at a fast pace to securely protect our customers. We need engineers who can build services that enable us to offer even more options to customers and contribute to the overall growth of Oracle Cloud.
Required Qualifications: BS or MS degree in Computer Science or a relevant technical field involving coding, or equivalent practical experience. 8+ years of total experience in software development. Demonstrated ability to write great code using Java, GoLang, C#, or similar OO languages. Proven ability to deliver products and experience with the full software development lifecycle. Experience working on large-scale, highly distributed services infrastructure. Experience working in an operational environment with mission-critical, tier-one live-site services. Systematic problem-solving approach, strong communication skills, a sense of ownership, and drive. Experience designing architectures that demonstrate deep technical depth in one area, or span many products, to enable high availability, scalability, market-leading features and flexibility to meet future business demands. Preferred Qualifications: Hands-on experience developing and maintaining services on a public cloud platform (e.g., AWS, Azure, Oracle). Knowledge of Infrastructure as Code (IaC) languages, preferably Terraform. Strong knowledge of databases (SQL and NoSQL). Experience with Kafka, Apache Spark and other big data technologies. About Us: As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 5 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Data Engineer with GCP. Experience: 5+ years. Location: Bangalore | Gurugram | Noida | Pune. Notice: Immediate Joiners. Mode: Hybrid. JD: Develop and automate Python scripts for data processing and transformation. Design, implement, and manage data pipelines to facilitate seamless data integration and flow. Utilize GCP services, particularly BigQuery and Cloud Functions, to support data processing needs. Create and optimize advanced SQL queries for efficient data retrieval and manipulation in BigQuery. Collaborate with cross-functional teams to gather requirements and implement data solutions. Work with Apache and Databricks to enhance data processing capabilities.
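For a flavour of the BigQuery and Cloud Functions work in this JD, a minimal sketch of an HTTP-triggered function that runs a BigQuery query; the project, dataset, and table names are hypothetical, and default GCP credentials are assumed:

```python
"""Sketch of a Cloud Functions-style entry point that runs a BigQuery
aggregation. Project/dataset/table names are hypothetical."""
from google.cloud import bigquery  # pip install google-cloud-bigquery


def daily_rollup(request):
    client = bigquery.Client()  # uses default credentials on GCP
    query = """
        SELECT DATE(created_at) AS day, COUNT(*) AS orders
        FROM `my_project.sales.orders`
        GROUP BY day
        ORDER BY day DESC
        LIMIT 7
    """
    rows = client.query(query).result()  # blocks until the job finishes
    # The Functions Framework JSON-serializes a returned dict.
    return {row.day.isoformat(): row.orders for row in rows}
```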
Posted 5 days ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Are you looking for a career move that will put you at the heart of a global financial institution? Then bring your skills in data-driven modelling and data engineering to Citi’s Global FX Team. By joining Citi, you will become part of a global organization whose mission is to serve as a trusted partner to our clients by responsibly providing financial services that enable growth and economic progress. Team/Role Overview: The FX Data Analytics & AI Technology team, within Citi's FX Technology organization, seeks a highly motivated Full Stack Data Scientist / Data Engineer. The FX Data Analytics & Gen AI Technology team provides data, analytics, and tools to Citi FX sales and trading globally, and is responsible for defining and executing the overall data strategy for FX. The successful candidate will be responsible for developing and implementing data-driven models, and engineering robust data and analytics pipelines, to unlock actionable insights from our vast amount of global FX data. The role will be instrumental in executing the overall data strategy for FX and will benefit from close interaction with a wide range of stakeholders across sales, trading, and technology. We are looking for a proactive individual with a practical and pragmatic attitude, the ability to build consensus, and the ability to work both collaboratively and independently in a dynamic environment. What You’ll Do: Design, develop and implement quantitative models to derive insights from large and complex FX datasets, with a focus on understanding market trends and client behavior, identifying revenue opportunities, and optimizing the FX business (an illustrative modelling sketch follows this posting). Engineer data and analytics pipelines using modern, cloud-native technologies and CI/CD workflows, focusing on consolidation, automation, and scalability. Collaborate with stakeholders across sales and trading to understand data needs, translate them into impactful data-driven solutions, and deliver these in partnership with technology. Develop and integrate functionality to ensure adherence with best practices in terms of data management, need-to-know (NTK), and data governance. Contribute to shaping and executing the overall data strategy for FX in collaboration with the existing team and senior stakeholders. What We’ll Need From You: 8 to 12 years of experience. Master’s degree or above (or equivalent education) in a quantitative discipline. Proven experience in software engineering and development, and a strong understanding of computer systems and how they operate. Excellent Python programming skills, including experience with relevant analytical and machine learning libraries (e.g., pandas, polars, numpy, sklearn, TensorFlow/Keras, PyTorch, etc.), in addition to visualization and API libraries (matplotlib, plotly, streamlit, Flask, etc.). Experience developing and implementing Gen AI applications from data in a financial context. Proficiency working with version control systems such as Git, and familiarity with Linux computing environments. Experience working with different database and messaging technologies such as SQL, KDB, MongoDB, Kafka, etc. Familiarity with data visualization and ideally development of analytical dashboards using Python and BI tools. Excellent communication skills, both written and verbal, with the ability to convey complex information clearly and concisely to technical and non-technical audiences. Ideally, some experience working with CI/CD pipelines and containerization technologies like Docker and Kubernetes.
Ideally, some familiarity with data workflow management tools such as Airflow, as well as big data technologies such as Apache Spark/Ignite or other caching and analytics technologies. A working knowledge of FX markets and financial instruments would be beneficial. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills: Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
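As a purely illustrative sketch of the kind of data-driven modelling described above, a scikit-learn classifier fit on synthetic stand-in data; no real FX data or Citi methodology is implied:

```python
"""Fit a simple classifier on synthetic tabular features. The columns
(spread, volume, hour) and the label are invented stand-ins."""
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "spread": rng.normal(1.0, 0.2, 1000),     # synthetic quote spread
    "volume": rng.lognormal(3.0, 0.5, 1000),  # synthetic trade volume
    "hour": rng.integers(0, 24, 1000),        # synthetic time-of-day
})
df["traded"] = (df["spread"] < 1.0).astype(int)  # synthetic label

X_train, X_test, y_train, y_test = train_test_split(
    df[["spread", "volume", "hour"]], df["traded"], random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```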
Posted 5 days ago
6.0 - 11.0 years
9 - 14 Lacs
Gurugram
Work from Office
In one sentence: Responsible for the design, development, modification, debugging and/or maintenance of software systems. What will your job look like? Job Title: DataStax Cassandra Engineer (Charging Application). We are seeking a highly skilled DataStax Cassandra Engineer to join our team. The ideal candidate will have strong expertise in Apache Cassandra and DataStax Enterprise (DSE), with experience in database administration, performance tuning, and troubleshooting. Key Responsibilities: Install, configure, and maintain DataStax Enterprise (DSE) Cassandra clusters. Optimize read/write performance and ensure high availability. Implement backup, restore, and disaster recovery strategies. Monitor system performance and proactively address potential issues. Collaborate with development teams to design scalable database solutions. Ensure security best practices for database access and management. Provide 24/7 support for production environments as needed. All you need is... Why you will love this job: Required Skills & Qualifications: 6+ years of experience in NoSQL database administration. Strong knowledge of the DataStax Enterprise (DSE) platform and Cassandra architecture, preferably in telecom applications. Proficiency in Linux/Unix systems and shell scripting. Experience with Cassandra tools (nodetool, cqlsh, OpsCenter). Expertise in performance tuning and query optimization. Familiarity with cloud-based deployments (Azure). Preferred Qualifications: Experience with other database technologies (PostgreSQL, Redis, Oracle). Knowledge of Kubernetes and containerized environments.
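For illustration of day-to-day DSE/Cassandra work, a minimal sketch using the DataStax Python driver to connect to a cluster and read node metadata; the contact points are hypothetical:

```python
"""Connect to a Cassandra/DSE cluster and read basic node metadata.
The contact points below are hypothetical."""
from cassandra.cluster import Cluster  # pip install cassandra-driver

cluster = Cluster(["10.0.0.1", "10.0.0.2"], port=9042)  # hypothetical hosts
session = cluster.connect()

# system.local holds metadata about the node the driver connected to.
row = session.execute(
    "SELECT cluster_name, release_version FROM system.local"
).one()
print(row.cluster_name, row.release_version)

cluster.shutdown()  # close connections cleanly
```

Operational checks like the ones this role lists (replication health, compaction backlog) would more commonly run through nodetool, with the driver reserved for application-level queries.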
Posted 5 days ago
5.0 - 10.0 years
9 - 13 Lacs
Pune
Work from Office
Who are we, in one sentence: We are seeking a skilled and motivated DevOps Engineer to join our team. As a DevOps Engineer, you will play a key role in bridging the gap between development and operations by ensuring the smooth operation of our IT infrastructure. You will be responsible for automating deployment processes, managing cloud and on-premises infrastructure, and ensuring the stability, scalability, and security of our systems. What will your job look like? All you need is... Key Responsibilities: Design, implement, and maintain CI/CD pipelines to automate code deployment and delivery. Manage and optimize cloud infrastructure (AWS, Azure, etc.) and on-premises systems. Deep understanding of IIS, Apache, Python, PostgreSQL, OpenAI and Bedrock; OS (Windows, Linux). Monitor and troubleshoot system performance, ensuring high availability and reliability. Collaborate with development teams to ensure smooth integration of code changes into production environments. Develop and maintain automated scripts for deployment, scaling, and backups. Ensure security best practices are followed in infrastructure and application deployment. Work closely with cross-functional teams to identify and resolve operational issues. Stay up-to-date with industry trends and emerging technologies to continuously improve DevOps practices. Qualifications: Minimum of 5-10 years of experience in DevOps, IT infrastructure, or a related field. Strong understanding of cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes). Experience with DevOps tools such as Jenkins, GitLab CI/CD, GitHub Actions, etc. Familiarity with monitoring tools like Prometheus, Grafana, Nagios, or Datadog. Strong understanding of version control systems like Git. Good understanding of networking fundamentals (TCP/IP, DNS, HTTP/HTTPS, etc.). Ability to work in a collaborative team environment and communicate effectively with both technical and non-technical stakeholders. Why you will love this job: Scripting/Programming: Python (basic scripting for automation), PowerShell (for Microsoft environments), Bash (for Linux environments). Source Control: Git (basic understanding of repositories, branching, and merging); familiarity with GitLab, GitHub, or Azure DevOps for version control. Infrastructure: Microsoft Windows Server (basic understanding of Active Directory, IIS, etc.), Linux distributions (Ubuntu, CentOS, or RHEL), basic understanding of cloud platforms (Azure, AWS, or GCP). CI/CD: Jenkins (basic pipeline creation), Azure DevOps Pipelines, understanding of CI/CD concepts. Networking & Security: Basic understanding of networking (DNS, DHCP, TCP/IP), firewall configuration (Windows Defender, iptables, etc.), security best practices (IAM, least privilege, etc.). Monitoring & Logging: Prometheus and Grafana (basic monitoring), ELK Stack (Elasticsearch, Logstash, Kibana), Windows Event Viewer and Linux syslog. Containerization: Docker (basic container creation and management), Kubernetes (introductory understanding).
Posted 5 days ago
2.0 - 4.0 years
2 - 6 Lacs
Pune
Work from Office
Location: Pune. Experience: 2-4 Years. Job Type: Full-Time. Work Model: On-site. Job Overview: We are looking for a hands-on DevOps/System Administrator with strong experience in firewalls, networking, on-premise and cloud infrastructure, and automation tools. The ideal candidate should have practical expertise in hardware firewalls, endpoint security, Linux/Windows server management, security practices, monitoring tools, and AWS, along with a proactive problem-solving attitude. The ideal candidate should have experience working in enterprise applications. Key Responsibilities: Firewall & Networking: Manage and configure hardware firewalls (Sophos, Fortinet, SonicWall). Implement and troubleshoot IPSec VPNs, web/application filtering, NATing, and routing policies. Monitor and analyse firewall logs for suspicious activities. Configure LAN/WAN segmentation and network peering for hybrid infrastructure. Server Administration (On-Prem & Cloud): Provision, configure, and maintain Linux (Ubuntu, CentOS) and Windows Servers. Manage VMware/Hypervisor/GenCenter virtualization platforms. Perform patch management, backups, and disaster recovery planning. Troubleshoot performance issues (CPU, memory, I/O latency) using tools like top, iotop, journalctl, etc. Automation & Monitoring: Write and manage Ansible playbooks and inventory files for bulk server updates. Deploy and manage monitoring solutions (Grafana, Prometheus, Nagios, CloudWatch). Understand PromQL for custom monitoring and alerting. Use ManageEngine, SCCM, or similar tools for patch management and compliance. Security & Compliance: Implement endpoint security solutions (e.g., Trend Micro, Kaspersky, CrowdStrike, Cortex, NetProtect). Handle incident response: isolate infected systems, analyze malware, enforce USB and device control policies. Maintain access control using IAM policies and access/secret keys. Conduct Root Cause Analysis (RCA) for incidents and document preventive measures. Cloud Infrastructure (AWS Preferred): Configure and manage AWS EC2, S3, IAM, VPC, route tables, and CloudWatch. Implement secure communication between private EC2 instances across multiple VPCs. Automate infrastructure provisioning using Ansible/Terraform and manage state files securely. Implement CloudWatch alarms, monitoring dashboards, and log analysis. Required Skills: Solid knowledge of hardware firewall configurations, VPN setup, and network troubleshooting. Endpoint security (central administration of antivirus, disk encryption, etc.). Hands-on Linux/Windows server troubleshooting, patching, and performance tuning. Good experience with AWS services, IAM roles, CLI tools, and S3-EC2 integrations. Proficiency in Terraform, Ansible, or similar IaC and orchestration tools. Familiarity with monitoring tools and writing queries (PromQL or equivalent). Understanding of endpoint protection tools and incident management workflows. Good to Have: Exposure to Azure cloud services. Experience in deploying web applications (Apache, Node.js, ReactJS, MongoDB). Familiarity with Docker, Kubernetes, and CI/CD pipelines. Knowledge of Disaster Recovery (DR) strategies in hybrid environments. What Kind of Person Fits This Role: Someone who likes solving tech problems and can handle pressure when things go wrong. Comfortable working in shifts starting at 6 AM and running as late as 12 AM. Comfortable working with both physical devices and cloud systems. Knows how to automate tasks to save time. Good at explaining technical stuff and writing documentation.
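As a small example of the CloudWatch alarm work listed above, a boto3 sketch that creates a CPU alarm on an EC2 instance; the region, instance ID, and thresholds are hypothetical, and configured AWS credentials are assumed:

```python
"""Create a CloudWatch CPU-utilization alarm for one EC2 instance.
Region, instance ID, and thresholds are hypothetical examples."""
import boto3  # pip install boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="web-01-high-cpu",  # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,            # five-minute samples
    EvaluationPeriods=2,   # two consecutive breaches trigger the alarm
    Threshold=80.0,        # percent CPU
    ComparisonOperator="GreaterThanThreshold",
)
```

In the Terraform-managed setup the posting describes, the same alarm would usually live in IaC; the boto3 call is handy for one-off automation and scripted remediation.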
Posted 5 days ago
7.0 - 12.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Automation NoSQL Data Engineer

This role has been designed as Onsite, with an expectation that you will primarily work from an HPE partner/customer office.

Who We Are: Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.

HPE Operations is our innovative IT services organization. It provides the expertise to advise, integrate, and accelerate our customers' outcomes from their digital transformation. Our teams collaborate to transform insight into innovation. In today's fast-paced, hybrid IT world, being at business speed means overcoming IT complexity to match the speed of actions to the speed of opportunities. Deploy the right technology to respond quickly to market possibilities. Join us and redefine what's next for you.

What you will do:
- Think through complex data engineering problems in a fast-paced environment and drive solutions to reality.
- Work in a dynamic, collaborative environment to build DevOps-centered data solutions using the latest technologies and tools.
- Provide engineering-level support for data tools and systems deployed in customer environments.
- Respond quickly and professionally to customer emails/requests for assistance.

What you need to bring:
- Bachelor's degree in Computer Science, Information Systems, or equivalent.
- 7+ years of demonstrated experience working in software development teams with a strong focus on NoSQL databases and distributed data systems.
- Strong experience in automated deployment, troubleshooting, and fine-tuning of technologies such as Apache Cassandra, ClickHouse, MongoDB, Apache Spark, Apache Flink, Apache Airflow, and similar technologies.

Technical Skills:
- Strong knowledge of NoSQL databases such as Apache Cassandra, ClickHouse, and MongoDB, including their installation, configuration, and performance tuning in production environments.
- Expertise in deploying and managing real-time data processing pipelines using Apache Spark, Apache Flink, and Apache Airflow.
- Experience in deploying and managing Apache Spark and Apache Flink operators on Kubernetes and other containerized environments, ensuring high availability and scalability of data processing jobs.
- Hands-on experience in configuring and optimizing Apache Spark and Apache Flink clusters, including fine-tuning resource allocation, fault tolerance, and job execution.
- Proficiency in authoring, automating, and optimizing Apache Airflow DAGs for orchestrating complex data workflows across Spark and Flink jobs (a minimal DAG sketch follows this listing).
- Strong experience with container orchestration platforms (like Kubernetes) to deploy and manage Spark/Flink operators and data pipelines.
- Proficiency in creating, managing, and optimizing Airflow DAGs to automate data pipeline workflows, handling retries, task dependencies, and scheduling.
- Solid experience in troubleshooting and optimizing performance in distributed data systems.
- Expertise in automated deployment and infrastructure management using tools such as Terraform, Chef, Ansible, Kubernetes, or similar technologies.
- Experience with CI/CD pipelines using tools like Jenkins, GitLab CI, Bamboo, or similar.
- Strong knowledge of scripting languages such as Python, Bash, or Go for automation, provisioning Platform-as-a-Service, and workflow orchestration.

Additional Skills: Accountability, Active Learning, Active Listening, Bias, Business Growth, Client Expectations Management, Coaching, Creativity, Critical Thinking, Cross-Functional Teamwork, Customer Centric Solutions, Customer Relationship Management (CRM), Design Thinking, Empathy, Follow-Through, Growth Mindset, Information Technology (IT) Infrastructure, Infrastructure as a Service (IaaS), Intellectual Curiosity, Long Term Planning, Managing Ambiguity, Process Improvements, Product Services, Relationship Building {+ 5 more}

What We Can Offer You:

Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial and emotional wellbeing.

Personal & Professional Development: We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have, whether you want to become a knowledge expert in your field or apply your skills to another division.

Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good.

Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture and tech at HPE.

#india #operations

Job: Services
Job Level: TCP_03

HPE is an Equal Employment Opportunity / Veterans / Disabled / LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran / Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
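As a rough illustration of the Airflow-orchestrated Spark work described above, here is a minimal DAG sketch. It assumes the apache-airflow-providers-apache-spark package; the application path, connection ID, and schedule are placeholders, not anything specified by HPE.

```python
# Minimal illustrative Airflow DAG orchestrating a Spark job (not HPE's actual pipeline).
# Assumes apache-airflow-providers-apache-spark; paths and conn IDs are placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="example_spark_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    transform = SparkSubmitOperator(
        task_id="transform_events",
        application="/opt/jobs/transform_events.py",  # hypothetical Spark job
        conn_id="spark_default",
        conf={"spark.executor.memory": "4g"},  # example of resource fine-tuning
    )
```

The retry settings in default_args are the kind of retry/scheduling behavior the posting expects candidates to manage across DAGs.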
Posted 5 days ago
6.0 - 10.0 years
5 - 9 Lacs
Mumbai
Work from Office
Job Description:
- Cold calling prospective customers (big brands / enterprise / corporate) who want to store their products in our warehouses for their B2B, B2C, and D2C business.
- Understanding customer requirements and analysing profitability for the company while generating new business.
- Arranging meetings, conducting feasibility studies, and preparing service contracts with prospective customers.
- Cross-selling and up-selling to existing customers.
- Ensuring proper documentation and formalities in coordination with the onboarding team.

Qualification: MBA/PGDBM
Experience: 6-10 years
Industry Exposure: Any industry except edtech and hotels. Candidates with Enterprise Business / Consumer Brands experience will be preferred.

Pre-Requisites:
- Cold calling: 30-40 calls per day; an outbound expert who can dig up data and convert it into meetings.
- Meetings: 2 meetings per day.
Posted 5 days ago
16.0 - 21.0 years
9 - 10 Lacs
Hyderabad
Work from Office
Job Requirements: Service Reliability Engineer I

Monitoring team with L1 resources for all domains to cover a 24x7 IT environment (server, network, application, storage and database):
- Manage alerts raised by infrastructure elements.
- Manage alerts raised by application services.
- Perform daily health checks (network, servers & datacenter); a small scripted example follows this listing.
- Knowledge of Windows, Linux & network infrastructure.
- Perform operations based on the documented procedures.
- Assist in the analysis of the reporting and alerts raised by various infrastructure devices.
- Fine-tune configuration to maintain the performance and functionality of the monitoring solutions in place.
- Manage incidents.

Roles & Responsibilities:
- 24x7 proactive monitoring of server, storage, backup and network environment alerts via monitoring tools and email.
- Escalations and follow-up with the IT system admin team as well as the specific application team on pending high-priority trouble tickets.
- Prepare and maintain documentation and reports, and provide follow-up status on identified tasks.
- On-time escalation and reporting of alerts according to the incident management process.
- Prepare daily/weekly reports in the pre-agreed format and send them to the pre-assigned set of recipients.
- Send the reports at the specified time and day, and inform the concerned recipients of any delays due to dependencies.
- Escalate incidents based on the standard procedure, with run-down follow-up reporting per team and area; escalate incidents till closure.
- Maintain, update and implement the standard escalation procedures, complete with notification matrix and escalation standards.

Work Experience:
- Good communication skills.
- Strong Linux administration skills across various flavors (CentOS, Ubuntu and Red Hat).
- Troubleshooting skills for boot problems.
- Understanding of the incident management process (ITIL).
- Good skills in incident tracking from logs.
- Good shell scripting skills.
- Networking skills.
- Knowledge of web servers (Apache, Nginx, etc.) and file servers like FTP, NFS and SAMBA.
- Additional advantage: AWS & Azure cloud knowledge, Docker, Jenkins.

Education: B.Tech or 16+ years of full-time education.

Benefits: We want you to be your best self and to pursue your passions!
- Health and wellness benefits/programs to support holistic employee health
- Flexible hours and working schedules, as well as parental leave for new parents
- Growing organization with career pathing and development opportunities
- Tons of perks and extras in every location for all Phenoms!

Diversity, Equity, Inclusion: Our commitment to diversity runs deep! Diversity is essential to building phenomenal teams, products, and customer experiences. Phenom is proud to be an equal opportunity employer taking collective action to build a more inclusive environment where every candidate and employee feels welcomed. We recognize there is more to be done. Our teams are committed to continuous improvement until these powerful ideas are ingrained in our culture for Phenom and employers everywhere.
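As a loose illustration of the scripted daily health checks mentioned above, here is a small Python sketch; the thresholds and the two checks shown are arbitrary examples, not the employer's actual run-book.

```python
# Illustrative daily health-check sketch (not the employer's actual tooling).
# Thresholds and the checked mount point are arbitrary examples.
import os
import shutil

def check_disk(path="/", max_used_pct=85.0):
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    return used_pct < max_used_pct, f"disk {path}: {used_pct:.1f}% used"

def check_load(max_load=4.0):
    load1, _, _ = os.getloadavg()  # 1-minute load average (Linux/Unix)
    return load1 < max_load, f"load(1m): {load1:.2f}"

if __name__ == "__main__":
    for ok, message in (check_disk(), check_load()):
        # In an L1 workflow, an ALERT line would be escalated per the notification matrix.
        print(("OK " if ok else "ALERT ") + message)
```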
Posted 5 days ago
8.0 - 13.0 years
8 - 11 Lacs
Chennai
Work from Office
Job Description: FYNXT is a Singapore-based software product development company that provides a Software as a Service (SaaS) platform to digitally transform leading brokerage firms and fund management companies and help them grow their market share. Our industry-leading digital front office platform has transformed several leading financial institutions in the Forex industry to go fully digital, optimize their operations, cut costs and become more profitable. For more visit: www.fynxt.com

Key Responsibilities:
- Architect & Build Scalable Systems: Design and implement petabyte-scale lakehouse architectures (Apache Iceberg, Delta Lake) to unify data lakes and warehouses.
- Real-Time Data Engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink to process structured/unstructured data with low latency (a minimal consumer sketch follows this listing).
- High-Performance Applications: Leverage Java to build scalable, high-throughput data applications and services.
- Modern Data Infrastructure: Leverage modern data warehouses and query engines (Trino, Spark) for sub-second operations and analytics on real-time data.
- Database Expertise: Work with RDBMS (PostgreSQL, MySQL, SQL Server) and NoSQL (Cassandra, MongoDB) systems to manage diverse data workloads.
- Data Governance: Ensure data integrity, security, and compliance across multi-tenant systems.
- Cost & Performance Optimization: Manage production infrastructure for reliability, scalability, and cost efficiency.
- Innovation: Stay ahead of trends in the data ecosystem (e.g., open table formats, stream processing) to drive technical excellence.
- API Development (Optional): Build and maintain Web APIs (REST/GraphQL) to expose data services internally and externally.

Qualifications:
- 8+ years of data engineering experience with large-scale systems (petabyte-level).
- Expert proficiency in Java for data-intensive applications.
- Hands-on experience with lakehouse architectures, stream processing (Flink), and event streaming (Kafka/Pulsar).
- Strong SQL skills and familiarity with RDBMS/NoSQL databases.
- Proven track record in optimizing query engines (e.g., Spark, Presto) and data pipelines.
- Knowledge of data governance, security frameworks, and multi-tenant systems.
- Experience with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code (Terraform).

What we offer:
- Unique experience in the fin-tech industry, with a leading, fast-growing company.
- Good atmosphere at work and a comfortable working environment.
- Group Health Insurance & OPD health insurance coverage for self + family (spouse and up to 2 children).
- Attractive leave benefits like maternity and paternity benefit, vacation leave & leave encashment.
- Rewards & recognition: monthly, quarterly, half-yearly & yearly.
- Loyalty benefits.
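The role itself is Java-centric, but as a language-neutral illustration of the low-latency event consumption it describes, here is a minimal Python sketch assuming the kafka-python package; the topic, broker address, and group ID are placeholders.

```python
# Minimal sketch of consuming an event stream (illustrative only; the role is Java-centric).
# Assumes the kafka-python package; broker address and topic name are placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "trade-events",                          # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="lakehouse-ingest-example",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for record in consumer:
    event = record.value
    # A real pipeline would hand this to a Flink job or an Iceberg/Delta writer.
    print(record.topic, record.partition, record.offset, event.get("symbol"))
```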
Posted 5 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Job Description: Are you ready to make an impact at DTCC? Do you want to work on innovative projects, collaborate with a dynamic and supportive team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We are committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We foster a thriving internal community and are committed to creating a workplace that looks like the world that we serve.

The Information Technology group delivers secure, reliable technology solutions that enable DTCC to be the trusted infrastructure of the global capital markets. The team delivers high-quality information through activities that include development of essential applications, building infrastructure capabilities to meet client needs and implementing data standards and governance.

Pay and Benefits:
- Competitive compensation, including base pay and annual incentive
- Comprehensive health and life insurance and well-being benefits, based on location
- Pension / retirement benefits
- Paid time off and personal/family care, and other leaves of absence when needed to support your physical, financial, and emotional well-being
- DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays and a third day unique to each team or employee)

The Impact You Will Have in This Role: The Development family is responsible for creating, designing, deploying, and supporting applications, programs, and software solutions. This may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities related to software products used internally or externally on product platforms supported by the firm. The software development process requires in-depth subject matter expertise in existing and emerging development methodologies, tools, and programming languages. Software Developers work closely with business partners and/or external clients in defining requirements and implementing solutions. The Software Engineering role specializes in planning, documenting technical requirements, designing, developing, and testing all software systems and applications for the firm. It works closely with architects, product managers, project management, and end-users in the development and enhancement of existing software systems and applications, proposing and recommending solutions that solve complex business problems.
Your Primary Responsibilities:
- Act as a technical expert on one or more applications utilized by DTCC
- Work with the Business System Analyst to ensure designs satisfy functional requirements
- Partner with Infrastructure to identify and deploy optimal hosting environments
- Participate in code development and code deploys, working on individual or team projects
- Tune application performance to eliminate and reduce issues
- Research and evaluate technical solutions consistent with DTCC technology standards
- Align risk and control processes into day-to-day responsibilities to monitor and mitigate risk; escalate appropriately
- Apply different software development methodologies dependent on project needs
- Contribute expertise to the design of components or individual programs, and participate in construction and functional testing
- Support development teams, testing, troubleshooting, and production support
- Create applications and construct unit test cases that ensure compliance with functional and non-functional requirements
- Work with peers to mature ways of working, continuous integration, and continuous delivery

Qualifications:
- Minimum of 7 years of related experience
- Bachelor's degree preferred or equivalent experience

Talents Needed for Success:
- Hands-on experience in software development using Design Patterns, Java, Java EE, Spring Boot, Spring 6, JMS, REST API, and middleware like IBM MQ, Tomcat, Liberty, WebSphere
- Demonstrated capability working with middleware like IBM MQ, Apache Kafka, Amazon EventBridge and other messaging frameworks
- Familiarity with the Oracle relational database, with experience developing stored procedures and managing database schemas and tables
- Familiarity with UI frameworks like Angular or other JavaScript frameworks is a plus
- Familiarity with developing and running applications in Windows and Linux environments; container technologies like Docker, Kubernetes, and OpenShift are a plus
- Demonstrable experience in software development using CI/CD tools, especially Git, Bitbucket, Maven, Jenkins, Jira
- Experience using the following development tools: Visual Studio, IntelliJ, or Eclipse
- Familiarity with different software development methodologies (Waterfall, Agile, Scrum, Kanban)

The salary range is indicative for roles at the same level within DTCC across all US locations. Actual salary is determined based on the role, location, individual experience, skills, and other considerations. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

About Us: With over 50 years of experience, DTCC is the premier post-trade market infrastructure for the global financial services industry. From 20 locations around the world, DTCC, through its subsidiaries, automates, centralizes, and standardizes the processing of financial transactions, mitigating risk, increasing transparency, enhancing performance and driving efficiency for thousands of broker/dealers, custodian banks and asset managers.
Industry owned and governed, the firm innovates purposefully, simplifying the complexities of clearing, settlement, asset servicing, transaction processing, trade reporting and data services across asset classes, bringing enhanced resilience and soundness to existing financial markets while advancing the digital asset ecosystem. In 2024, DTCC's subsidiaries processed securities transactions valued at U.S. $3.7 quadrillion and its depository subsidiary provided custody and asset servicing for securities issues from over 150 countries and territories valued at U.S. $99 trillion. DTCC's Global Trade Repository service, through locally registered, licensed, or approved trade repositories, processes more than 25 billion messages annually. To learn more, please visit us at www.dtcc.com or connect with us on LinkedIn, X, YouTube, Facebook and Instagram. DTCC proudly supports Flexible Work Arrangements favoring openness and gives people freedom to do their jobs well, by encouraging diverse opinions and emphasizing teamwork. When you join our team, you'll have an opportunity to make meaningful contributions at a company that is recognized as a thought leader in both the financial services and technology industries. A DTCC career is more than a good way to earn a living. It's the chance to make a difference at a company that's truly one of a kind. Learn more about Clearance and Settlement by clicking here.

About the Team: The IT SIFMU Delivery Department supports core Clearing and Settlement application delivery for DTC, NSCC and FICC. The department also develops and supports Asset Services, Wealth Management & Insurance Services and Master Reference Data applications.
Posted 5 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description: The Applications Development Technology Lead Analyst is a senior-level position responsible for establishing and implementing new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to lead applications systems analysis and programming activities. The role requires at least two years of experience building and leading highly complex, technical data engineering teams (10+ years of overall hands-on data engineering experience), leading the data engineering team from sourcing to closing and driving the strategic vision for the team and product.

Responsibilities:
- Partner with multiple management teams to ensure appropriate integration of functions to meet goals, and identify and define necessary system enhancements to deploy new products and process improvements
- Manage a data-focused product and ML platform
- Design, develop, and optimize scalable distributed data processing pipelines using Apache Spark and Scala (a small PySpark sketch follows below)
- Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
- Provide expertise in the area and advanced knowledge of applications programming; ensure application design adheres to the overall architecture blueprint
- Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
- Manage, hire, and coach software engineering teams
- Work with large-scale distributed web services and the processes around testing, monitoring, and SLAs to ensure high product quality
- Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
- Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary

Required Skills:
- Experience: 7 to 10+ years of hands-on experience in big data development, focusing on Apache Spark, Scala, and distributed systems
- Proficiency in functional programming: high proficiency in Scala-based functional programming for developing robust and efficient data processing pipelines
- Proficiency in big data technologies: strong experience with Apache Spark and Hadoop ecosystem tools such as Hive, HDFS, and YARN; Airflow, DataOps, and data management
- Programming and scripting: advanced knowledge of Scala and a good understanding of Python for data engineering tasks
- Data modeling and ETL processes: solid understanding of data modeling principles and ETL processes in big data environments
- Analytical and problem-solving skills: strong ability to analyze and solve performance issues in Spark jobs and distributed systems
- Version control and CI/CD: familiarity with Git, Jenkins, and other CI/CD tools for automating the deployment of big data applications

Desirable Experience:
- Real-time data streaming: experience with streaming platforms such as Apache Kafka or Spark Streaming; Python data engineering experience is a plus
- Financial services context: familiarity with financial data processing, ensuring scalability, security, and compliance requirements
- Leadership in data engineering: proven ability to work collaboratively with teams to develop robust data pipelines and architectures
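The posting's pipelines are Scala-first, but since it also asks for Python for data engineering tasks, here is a minimal PySpark aggregation sketch of the kind of distributed processing it describes; the paths, column names, and bucket are hypothetical.

```python
# Minimal PySpark aggregation sketch (illustrative; this role's production pipelines are Scala-based).
# Input/output paths and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trades-daily-rollup-example").getOrCreate()

trades = spark.read.parquet("s3://example-bucket/raw/trades/")  # hypothetical source

daily = (
    trades
    .withColumn("trade_date", F.to_date("executed_at"))
    .groupBy("trade_date", "symbol")
    .agg(F.sum("notional").alias("total_notional"), F.count("*").alias("trade_count"))
)

# Partitioning by date keeps downstream scans narrow, a common Spark performance lever.
daily.write.mode("overwrite").partitionBy("trade_date").parquet("s3://example-bucket/curated/daily/")
```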
Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 5 days ago
6.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
" Senior Data Engineer (Contract) Location: Bengaluru, Karnataka, India About the Role: Were looking for an experienced Senior Data Engineer (6-8 years) to join our data team. Youll be key in building and maintaining our data systems on AWS. Youll use your strong skills in big data tools and cloud technology to help our analytics team get valuable insights from our data. Youll be in charge of the whole process of our data pipelines, making sure the data is good, reliable, and fast. What Youll Do: Design and build efficient data pipelines using Spark / PySpark / Scala . Manage complex data processes with Airflow , creating and fixing any issues with the workflows ( DAGs ). Clean, transform, and prepare data for analysis. Use Python for data tasks, automation, and building tools. Work with AWS services like S3, Redshift, EMR, Glue, and Athena to manage our data infrastructure. Collaborate closely with the Analytics team to understand what data they need and provide solutions. Help develop and maintain our Node.js backend, using Typescript , for data services. Use YAML to manage the settings for our data tools. Set up and manage automated deployment processes ( CI/CD ) using GitHub Actions . Monitor and fix problems in our data pipelines to keep them running smoothly. Implement checks to ensure our data is accurate and consistent. Help design and build data warehouses and data lakes. Use SQL extensively to query and work with data in different systems. Work with streaming data using technologies like Kafka for real-time data processing. Stay updated on the latest data engineering technologies. Guide and mentor junior data engineers. Help create data management rules and procedures. What Youll Need: Bachelors or Masters degree in Computer Science, Engineering, or a related field. 6-8 years of experience as a Data Engineer. Strong skills in Spark and Scala for handling large amounts of data. Good experience with Airflow for managing data workflows and understanding DAGs . Solid understanding of how to transform and prepare data. Strong programming skills in Python for data tasks and automation.. Proven experience working with AWS cloud services (S3, Redshift, EMR, Glue, IAM, EC2, and Athena ). Experience building data solutions for Analytics teams. Familiarity with Node.js for backend development. Experience with Typescript for backend development is a plus. Experience using YAML for configuration management. Hands-on experience with GitHub Actions for automated deployment ( CI/CD ). Good understanding of data warehousing concepts. Strong database skills - OLAP/OLTP Excellent command of SQL for data querying and manipulation. Experience with stream processing using Kafka or similar technologies. Excellent problem-solving, analytical, and communication skills. Ability to work well independently and as part of a team. Bonus Points: Familiarity with data lake technologies (e.g., Delta Lake, Apache Iceberg). Experience with other stream processing technologies (e.g., Flink, Kinesis). Knowledge of data management, data quality, statistics and data governance frameworks. Experience with tools for managing infrastructure as code (e.g., Terraform). Familiarity with container technologies (e.g., Docker, Kubernetes). Experience with monitoring and logging tools (e.g., Prometheus, Grafana).
Posted 5 days ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description

About Oracle APAC ISV Business: Oracle's APAC ISV team is one of the fastest-growing and highest-performing business units in APAC. We are a prime team that operates to serve a broad range of customers across the APAC region. ISVs are at the forefront of today's fastest-growing industries. Much of this growth stems from enterprises shifting toward adopting cloud-native ISV SaaS solutions. This transformation drives ISVs to evolve from traditional software vendors to SaaS service providers. Industry analysts predict exponential growth in the ISV market over the coming years, making it a key growth pillar for every hyperscaler. Our cloud engineering team works on pitch-to-production scenarios, bringing ISVs' solutions onto Oracle Cloud (#oci) with the aim of providing a cloud platform for running their business that is more performant, more flexible, more secure, compatible with open-source technologies, and offers multiple innovation options while remaining highly cost effective. The team walks the path alongside our customers and is regarded by them as a trusted techno-business advisor.

Required Skills/Experience: Your versatility and hands-on expertise will be your greatest asset as you deliver on time-bound implementation work items and empower our customers to harness the full power of OCI. We also look for:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in database management, OCI or other cloud platforms (AWS, Azure, Google Cloud), or NoSQL databases.
- 8+ years of professional work experience.
- Proven experience in migrating databases and data to OCI or other cloud environments (AWS, Azure, Google Cloud, etc.).
- Expertise in Oracle DB and related technologies like RMAN, Data Guard, Advanced Security Option, and MAA.
- Hands-on experience with NoSQL databases (MongoDB / Cassandra / DynamoDB, etc.) and other DBs like MySQL/PostgreSQL.
- Demonstrable expertise in data management systems, caching systems and search engines such as MongoDB, Redshift, Snowflake, Spanner, Redis, and ElasticSearch, as well as graph databases like Neo4j.
- An understanding of complex data integration, data pipelines and stream analytics using products like Apache Kafka, Oracle GoldenGate, Oracle Stream Analytics, Spark, etc.
- Knowledge of how to deploy data management within a Kubernetes/Docker environment, as well as the corresponding management of state in microservice applications, is a plus.
- Ability to work independently and handle multiple tasks in a fast-paced environment.
- Solid experience managing multiple implementation projects simultaneously while maintaining high-quality standards.
- Ability to develop and manage project timelines, resources, and budgets.

Career Level - IC4

Responsibilities

What You'll Do: As a solution specialist, you will work closely with our cloud architects and key stakeholders of ISVs to propagate awareness and drive implementation of OCI-native as well as open-source technologies by ISV customers.
- Lead and execute end-to-end data platform migrations (including heterogeneous data platforms) to OCI.
- Design and implement database solutions within OCI, ensuring scalability, availability, and performance.
- Set up, configure, and secure production environments for data platforms in OCI.
- Migrate databases from legacy systems or other clouds to OCI while ensuring minimal downtime and data integrity (a migration-validation sketch follows this listing).
- Implement and manage CDC solutions to track and capture changes in databases in real-time.
- Configure and manage CDC tools, ensuring low-latency, fault-tolerant data replication for high-volume environments.
- Assist with the creation of ETL/data pipelines for the migration of large datasets into a data warehouse on OCI.
- Configure and manage complex database deployment topologies, including clustering, replication, and failover configurations.
- Perform database tuning, monitoring, and optimization to ensure high performance in production environments.
- Implement automation scripts and tools to streamline database administration and migration processes.
- Develop and effectively present your proposed solution and execution plan to both internal and external stakeholders.
- Clearly explain the technical advantages of OCI-based database management systems.

About Us: As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
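As referenced in the migration bullet above, here is a hedged sketch of a post-migration row-count parity check between a source database and its OCI target; it assumes the mysql-connector-python package, and all hosts, credentials, and table names are placeholders.

```python
# Hedged sketch: row-count parity check after a database migration (illustrative only).
# Assumes mysql-connector-python; hosts, credentials, and table names are placeholders.
import mysql.connector

TABLES = ["customers", "orders", "payments"]  # hypothetical tables to verify

def row_counts(conn_kwargs):
    counts = {}
    conn = mysql.connector.connect(**conn_kwargs)
    cur = conn.cursor()
    for table in TABLES:
        cur.execute(f"SELECT COUNT(*) FROM {table}")  # TABLES is a fixed, trusted list
        counts[table] = cur.fetchone()[0]
    conn.close()
    return counts

source = row_counts({"host": "legacy-db.example.com", "user": "audit", "password": "...", "database": "app"})
target = row_counts({"host": "oci-db.example.com", "user": "audit", "password": "...", "database": "app"})

for table in TABLES:
    status = "OK" if source[table] == target[table] else "MISMATCH"
    print(f"{table}: source={source[table]} target={target[table]} {status}")
```

Row counts are only a first-pass integrity signal; real migrations typically add checksums or sampled row comparisons on top.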
Posted 5 days ago
The Apache Software Foundation maintains a wide range of widely used open-source software projects. In India, demand for professionals with expertise in Apache tools and technologies is on the rise, and job seekers pursuing Apache-related roles have a wealth of opportunities across industries. Let's look at the Apache job market in India to better understand the landscape.
India's major IT hubs are known for their thriving IT sectors and see high demand for Apache professionals across different organizations.
The salary range for Apache professionals in India varies based on experience and skill level:
- Entry-level: INR 3-5 lakhs per annum
- Mid-level: INR 6-10 lakhs per annum
- Experienced: INR 12-20 lakhs per annum
In the Apache job market in India, a typical career path may progress as follows:
1. Junior Developer
2. Developer
3. Senior Developer
4. Tech Lead
5. Architect
Besides expertise in Apache tools and technologies, professionals in this field are often expected to have skills in:
- Linux
- Networking
- Database Management
- Cloud Computing
As you embark on your journey to explore Apache jobs in India, it is essential to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a competitive candidate in the Apache job market. Stay motivated, keep learning, and pursue your dream career with confidence!