Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
6.0 - 11.0 years
4 - 8 Lacs
bengaluru
Work from Office
IBM is seeking talented and movtivated Software Developers to work on products across the Software Portfolio. In this role, you will collaborate closely with a worldwide team of experienced and energetic professionals to create new and expected capabilities in our industry-leading Sustainability Software portfolio. You will work across the entire software delivery cycle including requirements gathering, use case definition, design, implementation, test, test automation documentation, and delivery, including cloud infrastructure such as DevOps, SRE, Security, and customer support work as part of your scope. You'll frequently work with Product Owners, Architects, Release Managers, Designers, Sales, Support Business partners, and customers throughout this process. As a planning analytics developer, you should be technically proficient in working with data integration tools and building data models/cubes in Planning Analytics to build and support Envizi’s Planning & Forecasting capabilities. Act as a technical expert in Planning Analytics (PA) and TM1 for internal Product Development Develop and evolve Envizi’s Planning Analytics models to support the wider Envizi team and our clients, promoting Planning Analytics model governance principles and applying a best-in-class design and approach to Planning Analytics model building. Help develop technical documentation for end users of Envizi’s Planning Analytics models. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total experience of 6+ years inPlanning Analytics, TM1 design and delivery/implementation Proven ability to translate business and technical requirements into effective solutions. Familiarity with data engineering and basic data science methodologies. Strong problem-solving and communication skills for collaborating with team members and stakeholders. Experience with budgeting, forecasting, analysis, and management reporting solutions. Work with Planning Analytics-adjacent systems and teams to build upon our connected planning ecosystem Preferred technical and professional experience Development experience in Data Analytics using Java or Python or equivalent language Relevant experience with technologies like SQL and NoSQL DB , Git, Jenkins and build tools like Maven/Gradle is much preferred. Familiarity with Cassandra, Mongo, ElasticSearch, Flink and/or Kafka.
Posted Just now
5.0 - 10.0 years
22 - 27 Lacs
hyderabad, chennai, gurugram
Work from Office
Head of Engineering, AI Safety Services Enable responsible AI innovation at the speed of progress. The impact youll make Accelerate responsible innovation by providing world-class AI safety services that allow our customers to quickly deploy new models and applications without compromising on safety. Influence the industry by establishing best practices, scalable tools, and clear benchmarks that set the standard for AI safety. Drive customer success through tools and platforms that integrate seamlessly into customer workflows, enabling safe and rapid innovation. What youll do Lead and inspire your team (AI engineers, platform engineers, and a product manager): Set clear goals, mentor team members, and foster a culture of impact, collaboration, and rapid learning. Implement agile processes that prioritize speed without sacrificing quality or security. Collaborate closely with customers to understand and address their AI safety challenges: Engage directly with customer teams to understand their innovation objectives, pain points, and safety requirements. Translate customer needs into actionable roadmaps aligned with their innovation cycles. Deliver tools and services that ensure AI applications are aligned, robust, interpretable, and safe: Oversee the development of platforms for alignment assessments, adversarial testing, drift detection, and interpretability analysis. Make informed build vs. buy decisions to optimize speed and customer integration. Champion engineering excellence in a high-stakes environment: Establish and enforce secure coding practices, automation, monitoring, and infrastructure-as-code. Ensure system reliability through defined SLAs, SLOs, rigorous testing, and clear operational dashboards. Continuously innovate within AI safety methodologies: Keep current with cutting-edge research in AI safety, adversarial testing, and model evaluation. Pilot emerging techniques that enhance our offerings and accelerate customer deployments. Experiences you'll bring 5+ years of experience in software engineering, with at least 2 years managing technical teams or complex projects. Proven track record delivering AI, ML, or data-intensive platforms or services (e.g., model evaluation platforms, MLOps systems, AI safety tools). Demonstrated success in team leadership, coaching, and talent development. Excellent cross-functional collaboration and communication skills, bridging technical concepts with customer value. Technical skills you'll need Proficiency with common software languages, frameworks, and tools such as Python, TensorFlow, PyTorch, Docker, Kubernetes, JavaScript, HTML/CSS, REST APIs, and SQL/noSQL databases Expertise with cloud environments such as AWS, GCP, or Azure. Familiarity with LLM architectures, including evaluation techniques, prompt engineering, and adversarial testing methodologies. Nice to have : exposure to platforms like LangChain, LangGraph; experience with distributed systems (Spark/Flink); familiarity with AI regulatory frameworks or secure deployment standards (e.g., SOC 2, ISO 27001). Why youll love this role Enjoy the freedom to work from anywhere your productivity, your environment Purpose-driven work: Contribute meaningfully to a safer AI future while enabling groundbreaking innovation. Strategic visibility: Directly impact high-profile AI safety initiatives with industry-leading customers. Growth opportunities: Collaborate with top-tier AI talent and help shape an emerging industry. Supportive culture: Enjoy competitive compensation, flexible work arrangements, and significant investment in your professional and personal growth. Location - Chennai,Gurugram,Hyderabad,Indore,Mumbai,Noida
Posted 1 hour ago
5.0 - 10.0 years
40 - 50 Lacs
hyderabad, chennai
Hybrid
Role & responsibilities Roles - Apache Flink Engineer Primary Skills - Apache Flink, Apache Kafka & Spark (Data Engineer background profiles only) Experience - 5 to 12 years Shift - 12PM to 9PM IST Notice Period - Immediate to 30 days only Locations - Chennai & Hyderbad only
Posted 2 hours ago
3.0 - 5.0 years
8 - 12 Lacs
bengaluru
Work from Office
The Oracle Cloud Infrastructure (OCI) team can provide you the opportunity to build and operate a suite of massive scale, integrated cloud services in a broadly distributed, multi-tenant cloud environment. OCI is committed to providing the best in cloud products that meet the needs of our customers who are tackling some of the worlds biggest challenges. We offer unique opportunities for smart, hands-on engineers with the expertise and passion to solve difficult problems in distributed highly available services and virtualised infrastructure. At every level, our engineers have a significant technical and business impact designing and building innovative new systems to power our customers business critical applications. Oracle Cloud Infrastructure (OCI) Security Platform & Compliance products team help customers protect their business-critical cloud infrastructure and data. We build cloud native security & compliance solutions that provide customers with visibility into the security posture of their cloud assets and help automate remediation where possible. This role provides a fantastic opportunity to build an analytics solution and a data lake by sourcing and curating data from various internal + external providers. We leverage Kafka, Spark, Machine Learning, technologies running on OCI. Youll work with product managers, designers, and engineers to build data driven features. You must enjoy the excitement of agile development and interacting with other exceptional engineers. Desired Skills and Experience: 4+ years of hands-on large-scale cloud application software development 1+ years of experience in cloud infrastructure security and risk assessment 1+ years of hands-on experience with three of the following technologies: Kafka, Spark, AWS/OCI, Kubernetes, Rest APIs, Linux 1+ year of experience using and building highly available streaming data solutions like Flink or Spark Streaming 1+ years of experience building application on OCI, AWS, Azure or GCP cloud Experience with development methodology with short release cycles Excellent problem solving and communication skills with both technical and non-technical audiences Optional Skills: Working knowledge of SSL, authentication, encryption, audit logging & access policies. Responsibilities As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures. You will be responsible for defining and developing software for tasks associated with the developing, designing and debugging of software applications or operating systems. Your day to day responsibilities will include: Develop highly available and scalable platform that aggregates and analyses large volume of data Design, deploy and manage large scale distributed systems and services built on OCI Develop test bed and tools to help avoid regressions Introduce observability and issue detection capabilities in the code Track down complex data and engineering issues, and analyze logs and data to solve problems
Posted 5 hours ago
3.0 - 6.0 years
8 - 12 Lacs
bengaluru
Work from Office
The Oracle Cloud Infrastructure (OCI) team can provide you the opportunity to build and operate a suite of massive scale, integrated cloud services in a broadly distributed, multi-tenant cloud environment. OCI is committed to providing the best in cloud products that meet the needs of our customers who are tackling some of the worlds biggest challenges. We offer unique opportunities for smart, hands-on engineers with the expertise and passion to solve difficult problems in distributed highly available services and virtualised infrastructure. At every level, our engineers have a significant technical and business impact designing and building innovative new systems to power our customers business critical applications. Oracle Cloud Infrastructure (OCI) Security Platform & Compliance products team help customers protect their business-critical cloud infrastructure and data. We build cloud native security & compliance solutions that provide customers with visibility into the security posture of their cloud assets and help automate remediation where possible. This role provides a fantastic opportunity to build an analytics solution and a data lake by sourcing and curating data from various internal + external providers. We leverage Kafka, Spark, Machine Learning, technologies running on OCI. Youll work with product managers, designers, and engineers to build data driven features. You must enjoy the excitement of agile development and interacting with other exceptional engineers. Desired Skills and Experience: 4+ years of hands-on large-scale cloud application software development 1+ years of experience in cloud infrastructure security and risk assessment 1+ years of hands-on experience with three of the following technologies: Kafka, Spark, AWS/OCI, Kubernetes, Rest APIs, Linux 1+ year of experience using and building highly available streaming data solutions like Flink or Spark Streaming 1+ years of experience building application on OCI, AWS, Azure or GCP cloud Experience with development methodology with short release cycles Excellent problem solving and communication skills with both technical and non-technical audiences Optional Skills: Working knowledge of SSL, authentication, encryption, audit logging & access policies. Responsibilities As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures. You will be responsible for defining and developing software for tasks associated with the developing, designing and debugging of software applications or operating systems. Your day to day responsibilities will include: Develop highly available and scalable platform that aggregates and analyses large volume of data Design, deploy and manage large scale distributed systems and services built on OCI Develop test bed and tools to help avoid regressions Introduce observability and issue detection capabilities in the code Track down complex data and engineering issues, and analyse logs and data to solve problems
Posted 5 hours ago
3.0 - 5.0 years
14 - 19 Lacs
bengaluru
Work from Office
Oracle Cloud Infrastructure (OCI) is a set of complementary cloud services that enable customers to build and run a range of applications and services in a highly available hosted environment. OCI provides high-performance compute capabilities (as physical hardware instances) and storage capacity in a flexible overlay virtual network that is securely accessible from customer on-premises network. Our customers run their businesses on our cloud, and our mission is to provide them with best-in-class compute, storage, networking, database, security, and an ever-expanding set of foundational cloud-based services. OCIs Security Posture and Compliance Platform team is developing an advanced Security Assurance platform to enhance the efficiency of cloud security operations. Security Assurance Platform is a comprehensive solution designed to proactively assess, monitor, and enforce security best practices and patterns across cloud environments. It automates compliance checks, identifies configuration drift, and provides real-time visibility into security posture at scale. We offer wide range of technical challenges to solve and opportunities for customer-focused innovations in the Cloud Security domain and unique opportunities to solve difficult problems in distributed highly available services and virtualised infrastructure. Our engineers have a significant technical and business impact designing and building innovative new systems to power our customers business critical applications Responsibilities Required Qualifications - 4 to 6 years of experience in Software Engineering in solving real-world problems.- Proficient with Java, REST, JUnit, SQL, Microservices, Cloud- Proficient in SQL Queries with Oracle or any Database technologies. - Hands-on experience in Software Design and Development in Java / Jave framework like Spring/Drop-wizard- Foundational knowledge of any streaming technology like Flink, Spark etc- Ability Collaborate with counterparts to understand requirements in detail and come up with design, and task breakdown.- Write optimised code with quality that are performant, secure ,reliable, scalable- Possesses excellent communication skills, both written and verbal- Self-managed and independent in day to day execution Preferred Qualifications - Hands-on experience developing and maintaining services on a public cloud platform (e.g., AWS, Azure, Oracle)- Familiarity with containerisation and orchestration tools (e.g., Docker, Kubernetes)- Experience in any scripting languages like Python, Ruby, BASH Responsibilities Development: Design, development and delivering high-scale, high-impact solutions Problem Solving: Analyze and solve complex technical problems Implement effective and efficient solutions. Identify bugs and performance issues in the software development process. Comfortable with ambiguity in a chaotic and fluid environment Collaboration: Working closely with cross-functional teams, including product managers, designers, and other. Have excellent communication skills. You can clearly explain complex technical concepts and constantly striving for excellence Coding: Writing efficient, reusable, and scalable code in languages relevant to the project requirements. Innovation & continuous Learning: Staying updated on emerging technologies and proposing innovative solutions to enhance product functionality. Quality: Ensuring the quality of code through testing, code reviews, and adherence to coding standards. Disciplined engineer who understands the importance of high standards, never satisfied with mediocrity Documentation: Creating and maintaining technical documentation for code, processes, and system architecture. Customer focus: Obsessed with the customer, always exceeding expectations
Posted 5 hours ago
3.0 - 5.0 years
10 - 15 Lacs
bengaluru
Work from Office
The Oracle Cloud Infrastructure (OCI) team can provide you the opportunity to build and operate a suite of massive scale, integrated cloud services in a broadly distributed, multi-tenant cloud environment. OCI is committed to providing the best in cloud products that meet the needs of our customers who are tackling some of the worlds biggest challenges. We offer unique opportunities for smart, hands-on engineers with the expertise and passion to solve difficult problems in distributed highly available services and virtualised infrastructure. At every level, our engineers have a significant technical and business impact designing and building innovative new systems to power our customers business critical applications. Oracle Cloud Infrastructure (OCI) Security Platform & Compliance products team help customers protect their business-critical cloud infrastructure and data. We build cloud native security & compliance solutions that provide customers with visibility into the security posture of their cloud assets and help automate remediation where possible. This role provides a fantastic opportunity to build an analytics solution and a data lake by sourcing and curating data from various internal + external providers. We leverage Kafka, Spark, Machine Learning, technologies running on OCI. Youll work with product managers, designers, and engineers to build data driven features. You must enjoy the excitement of agile development and interacting with other exceptional engineers. Desired Skills and Experience: 4+ years of hands-on large-scale cloud application software development 1+ years of experience in cloud infrastructure security and risk assessment 1+ years of hands-on experience with three of the following technologies: Kafka, Spark, AWS/OCI, Kubernetes, Rest APIs, Linux 1+ year of experience using and building highly available streaming data solutions like Flink or Spark Streaming 1+ years of experience building application on OCI, AWS, Azure or GCP cloud Experience with development methodology with short release cycles Excellent problem solving and communication skills with both technical and non-technical audiences Optional Skills: Working knowledge of SSL, authentication, encryption, audit logging & access policies.
Posted 5 hours ago
6.0 - 11.0 years
16 - 20 Lacs
hyderabad
Work from Office
As part of market leading ERP Cloud, Oracle ERP Cloud Integration & Functional Architecture teamoffers a broad suite of modules and capabilities designed to empower modern financeand deliver customer success with streamlined processes, increased productivity, and improved business decisions. The ERP Cloud Integration & Functional Architecture is looking for passionate, innovative, high caliber, team oriented super stars that seek being a major part of a transformative revolution in the development of modern business cloud based applications. We are seeking highly capable, best in the world developers, architects and technical leaders at the very top of the industry in terms of skills, capabilities and proven delivery; who seek out and implement imaginative and strategic, yet practical, solutions; people who calmly take measured and necessary risks while putting customers first. What Youll Do You would work as a Principal Applications Engineer on Oracle Next Gen solutions developed and running on Oracle Cloud. Design and build distributed, scalable, fault-tolerant software systems Build cloud services on top of modern Oracle Cloud Infrastructure Collaborate with product managers and other stakeholders in understanding the requirements and work on delivering user stories/backlog items with highest levels of quality and consistency across the product. Work with geographically dispersed team of engineers by taking complete ownership and accountability to see the project through for completion. Skills and Qualifications 7+ years in building and architecting enterprise and consumer grade applications You have prior experience working on distributed systems in the Cloud world with full stack experience. Build and Delivery of a high-quality cloud service with the capabilities, scalability, and performance needed to match the needs of enterprise teams Take the initiative and be responsible for delivering complex software by working effectively with the team and other stakeholders You feel at home communicating technical ideas verbally and in writing (technical proposals, design specs, architecture diagrams, and presentations) You are ideally proficient in Java, J2EE, SQL and server-side programming. Proficiency in other languages like Javascript is preferred. You have experience with Cloud Computing, System Design, and Object-Oriented Design preferably, production experience with Cloud. Experience working in the Apache Hadoop Community and more broadly the Big Data ecosystem communities (e.g. Apache Spark, Kafka, Flink etc.) Experience on Cloud native technologies such as AWS/Oracle Cloud/Azure/Google Cloud etc Experience building microservices and RESTful services and deep understanding of building cloud-based services Experienced at building highly available services, possessing knowledge of common service-oriented design patterns and service-to-service communication protocols Knowledge of Docker/Kubernetes is preferred Ability to work creatively and analytically using data-driven decision-making to improve customer experience. Strong organizational, interpersonal, written and oral communication skills, with proven success in contributing in a collaborative, team-oriented environment. Selfmotivated and self-driven, continuously learning and capable of working independently. BS/MS (MS preferred) in Computer Science. Analyze, design develop, troubleshoot and debug software programs for commercial or end user applications. Writes code, completes programming and performs testing and debugging of applications.As a member of the software engineering division, you will analyze and integrate external customer specifications. Specify, design and implement modest changes to existing software architecture. Build new products and development tools. Build and execute unit tests and unit test plans. Review integration and regression test plans created by QA. Communicate with QA and porting engineering to discuss major changes to functionality.Work is non-routine and very complex, involving the application of advanced technical/business skills in area of specialization. Leading contributor individually and as a team member, providing direction and mentoring to others. BS or MS degree or equivalent experience relevant to functional area. 7 years of software engineering or related experience.
Posted 5 hours ago
7.0 - 10.0 years
22 - 27 Lacs
bengaluru
Work from Office
As a Technical Project Manager, you will lead the delivery of end-to-end ETL projects using AWS and Spark. You will be responsible for managing a cross-functional agile team, ensuring successful project execution, and communicating with stakeholders to meet business objectives. You will need a strong technical understanding of data engineering, cloud platforms, and the ETL lifecycle to make informed decisions and drive delivery at a fast pace. Key Responsibilities Project Leadership: Lead and manage multiple projects, ensuring successful delivery within time, budget, and scope. Agile Methodology: Facilitate agile ceremonies (daily stand-ups, sprint planning, sprint reviews, retrospectives) to drive team alignment, productivity, and collaboration. Stakeholder Management: Act as the primary point of contact for stakeholders, ensuring regular updates on project status, risks, and deliverables. Manage expectations and ensure alignment on business and technical requirements. Technical Oversight: Collaborate with technical teams, understanding the intricacies of data pipeline design and architecture. Ensure the team adheres to best practices in AWS and Spark-based ETL development. Risk Management: Identify project risks early and develop mitigation strategies. Monitor progress and remove any roadblocks to project success. Team Collaboration: Work closely with data engineers, architects, and other stakeholders to design, build, and deploy scalable ETL solutions on AWS. Continuous Improvement: Foster a culture of continuous improvement, ensuring the team constantly enhances processes, methodologies, and tools. Quality Assurance: Ensure that the ETL processes are optimized for performance, scalability, and reliability, meeting high-quality standards. Required Skills and Qualifications Experience: 7+ years of project management experience, focusing on delivering data engineering and ETL projects using AWS and Spark. Technical Expertise: Strong knowledge of AWS services (e.g., S3, EMR, Lambda, Glue, Redshift) and Apache Spark. Experience with data pipelines, ETL frameworks, and distributed data processing. Agile Methodology: Solid understanding of agile frameworks (Scrum, Kanban) and hands-on experience leading agile teams. Stakeholder Management: Proven ability to communicate effectively with technical and non-technical stakeholders, managing expectations and delivering successful outcomes. Problem-Solving: Strong analytical and problem-solving skills, with the ability to think strategically and troubleshoot complex issues in ETL pipelines. Leadership: Experience leading teams, mentoring, and guiding engineers through complex technical challenges while ensuring timely delivery of projects. Certifications: PMP, Scrum Master, or other relevant certifications are a plus. Preferred Qualifications Experience with other data processing frameworks like Apache Flink or Hadoop or Glue. Experience with monitoring, logging, and optimization tools in AWS. Understanding of data warehousing concepts and tools (e.g., Amazon Redshift, Snowflake). KeywordsETL,Scrum planning,Agile Methodology,Risk Management,S3,EMR,Lambda,Glue,Redshift,Project Management*,AWS*Mandatory Key SkillsETL,Scrum planning,Agile Methodology,Risk Management,S3,EMR,Lambda,Glue,Redshift,Project Management*,AWS*
Posted 3 days ago
4.0 - 9.0 years
5 - 9 Lacs
bengaluru
Work from Office
Job TitleSenior DataLake Implementation Specialist Experience: 1012+ Years Location: Bangalore Type: Full-time / Contract Notice Period: Immediate Job Summary: We are looking for a highly experienced and sharp DataLake Implementation Specialist to lead and execute scalable data lake projects using technologies such as Apache Hudi, Hive, Python, Spark, Flink , and cloud-native tools on AWS or Azure . The ideal candidate must have deep expertise in designing and optimizing modern data lake architectures with strong programming skills and data engineering capabilities. Key Responsibilities: Design, develop, and implement robust data lake architectures on cloud platforms (AWS/Azure). Implement streaming and batch data pipelines using Apache Hudi , Apache Hive, and cloud-native services like AWS Glue , Azure Data Lake , etc. Architect and optimize ingestion, compaction, partitioning, and indexing strategies in Apache Hudi . Develop scalable data transformation and ETL frameworks using Python , Spark , and Flink . Work closely with DataOps/DevOps to build CI/CD pipelines and monitoring tools for data lake platforms. Ensure data governance, schema evolution handling, lineage tracking, and compliance. Collaborate with analytics and BI teams to deliver clean, reliable, and timely datasets. Troubleshoot performance bottlenecks in big data processing workloads and pipelines. Must-Have Skills: 4+ years hands-on experience in Data Lake and Data Warehousing solutions 3+ years experience with Apache Hudi , including insert/upsert/delete workflows, clustering, and compaction strategies Strong hands-on experience in AWS Glue , AWS Lake Formation , or Azure Data Lake / Synapse 6+ years of coding experience in Python , especially in data processing 2+ years working experience in Apache Flink and/or Apache Spark Sound knowledge of Hive , Parquet/ORC formats , and DeltaLake vs Hudi vs Iceberg Strong understanding of schema evolution , data versioning , and ACID guarantees in data lakes Nice to Have: Experience with Apache Iceberg , Delta Lake Familiarity with Kinesis , Kafka , or any streaming platform Exposure to dbt , Airflow , or Dagster Experience in data cataloging , data governance tools , and column-level lineage tracking Education & Certifications: Bachelors or Masters degree in Computer Science, Information Technology, or related field Relevant certifications in AWS Big Data , Azure Data Engineering , or Databricks
Posted 3 days ago
1.0 - 5.0 years
5 - 9 Lacs
bengaluru
Work from Office
About The Role Project Role : Industry Subject Matter Advisor Project Role Description : Work closely with client project teams to provide expertise (functional, technical, industry, tools/methods) to ensure successful solution design and delivery. Must have skills : Java Enterprise Edition Good to have skills : Spring Boot Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Job Title:Java + SpringBootPosition OverviewWe are seeking a highly skilled Java + Spring Boot to lead the design and implementation of scalable, resilient, and secure enterprise applications. The ideal candidate will have strong proficiency in Java/Spring Boot microservices, deep expertise in event streaming and data processing using Confluent Kafka and Apache Flink, and hands-on experience architecting solutions on Azure cloud. This role requires a balance of technical leadership, architecture design, and hands-on coding.Key ResponsibilitiesArchitect and deploy containerized applications on Azure Kubernetes Service (AKS), leveraging Azure Container Registry and Azure Key Vault.Establish data-driven architectures, integrating with NoSQL (Cassandra) and other storage systems like Azure Blob Storage.Define and own the architecture for event-driven and microservices-based applications on Azure.Design and implement event streaming & processing solutions using Confluent Kafka (including KSQL & Schema Registry) and Apache Flink.Collaborate with product teams to translate business requirements into scalable technical solutions.Ensure reliability, performance, and security across services and integrations.Define best practices for CI/CD, monitoring, and observability using tools such as Splunk and Dynatrace.Mentor development teams on modern engineering practices, architecture principles, and Azure cloud adoption.Required Skills & ExperienceCloud & Infrastructure (Azure)Hands-on experience with Azure Kubernetes Service (AKS) for container orchestration.Knowledge of Azure Container Registry, Azure Blob Storage, and Azure Key Vault.Exposure to Confluent Cloud for enterprise-grade Kafka solutions.Programming & FrameworksStrong proficiency in Core Java.Experience in Spring Boot, Spring REST for building microservices.ArchitectureProven experience designing microservices-based architecture.Expertise in event-driven and data-driven architecture.Strong background in designing scalable, secure, and resilient distributed systems.Event Streaming & ProcessingExpertise with Confluent Kafka, including KSQL and Schema Registry.Strong knowledge of Apache Flink for real-time data stream processing.Persistence LayerExperience with NoSQL databases, specifically Cassandra.Monitoring & ObservabilityWorking knowledge of Splunk and Dynatrace for application monitoring and logging.Preferred QualificationsExperience with API Gateways, service mesh, or Dapr.Exposure to DevOps practices including CI/CD pipelines on Azure DevOps or GitHub Actions.Strong knowledge of security best practices in cloud-native applications.Prior experience in leading architecture reviews and governance.Soft SkillsStrong problem-solving and analytical skills.Ability to lead and mentor technical teams.Excellent communication and stakeholder management.Self-driven, with the ability to work in a fast-paced environment.EducationBachelors or Masters degree in Computer Science, Engineering, or related field.Relevant Azure certifications (e.g., Azure Solutions Architect Expert, Azure Kubernetes Service Specialist) preferred. Qualification 15 years full time education
Posted 3 days ago
3.0 - 8.0 years
9 - 13 Lacs
hyderabad, chennai, gurugram
Work from Office
The impact youll make Enable rapid, responsible releases by coding evaluation pipelines that surface safety issues before models reach production. Drive operational excellence through reliable, secure, and observable systems that safety analysts and customers trust. Advance industry standards by contributing to bestpractice libraries, opensource projects, and internal frameworks for testing alignment, robustness, and interpretability. What youll do Design & implement safety toolingfrom automated adversarial test harnesses to driftmonitoring dashboardsusing Python, PyTorch/TensorFlow, and modern cloud services. Collaborate across disciplines with fellow engineers, data scientists, and product managers to translate customer requirements into clear, iterative technical solutions. Own quality endtoend: write unit/integration tests, automate CI/CD, and monitor production metrics to ensure reliability and performance. Containerize and deploy services using Docker and Kubernetes, following infrastructureascode principles (Terraform/CDK). Continuously learn new evaluation techniques, model architectures, and security practices; share knowledge through code reviews and technical talks. Experiences youll bring 3+ years of professional software engineering experience, ideally with dataintensive or MLadjacent systems. Demonstrated success shipping production code that supports highavailability services or platforms. Experience working in an agile, collaborative environment, delivering incremental value in short cycles. Technical skills you'll need Languages & ML frameworks: Strong Python plus handson experience with PyTorch or TensorFlow (bonus points for JAX and LLM finetuning). Cloud & DevOps: Comfortable deploying containerized services (Docker, Kubernetes) on AWS, GCP, or Azure; infrastructureascode with Terraform or CDK. MLOps & experimentation: Familiar with tools such as MLflow, Weights & Biases, or SageMaker Experiments for tracking runs and managing models. Data & APIs: Solid SQL, exposure to at least one NoSQL store, and experience designing or consuming RESTful APIs. Security mindset: Awareness of secure coding and compliance practices (e.g., SOC 2, ISO 27001). Nice to have: LangChain/LangGraph, distributed processing (Spark/Flink), or prior contributions to opensource ML safety projects. Why youll love this role Enjoy the freedom to work from anywhere your productivity, your environment Purpose-driven work: Contribute meaningfully to a safer AI future while enabling groundbreaking innovation. Strategic visibility: Directly impact high-profile AI safety initiatives with industry-leading customers. Growth opportunities: Collaborate with top-tier AI talent and help shape an emerging industry. Supportive culture: Enjoy competitive compensation, flexible work arrangements, and significant investment in your professional and personal growth. Location - Chennai,Gurugram,Hyderabad,Indore,Mohali,Mumbai,Noida
Posted 3 days ago
8.0 - 12.0 years
20 - 25 Lacs
bengaluru
Work from Office
Senior Java Engineer with 8+ years of experience in backend development using Java, Spring Boot, Microservices, and event-driven systems. Expertise with Apache Flink for real-time data processing.Strong experience with REST APIs, SQL, AWS/GCP
Posted 3 days ago
1.0 - 6.0 years
15 - 25 Lacs
bengaluru
Work from Office
We have developed API gateway aggregators using frameworks like Hystrix and spring-cloud-gateway for circuit breaking and parallel processing. Our serving microservices handle more than 15K RPS on normal days and during saledays this can go to 30K RPS. Being a consumer app, these systems have SLAs of ~10ms Our distributed scheduler tracks more than 50 million shipments periodically fromdifferent partners and does async processing involving RDBMS. We use an in-house video streaming platform to support a wide variety of devices and networks. What Youll Do Design and implement scalable and fault-tolerant data pipelines (batch and streaming) using frameworks like Apache Spark , Flink , and Kafka . Lead the design and development of data platforms and reusable frameworks that serve multiple teams and use cases. Build and optimize data models and schemas to support large-scale operational and analytical workloads. Deeply understand Apache Spark internals and be capable of modifying or extending the open-source Spark codebase as needed. Develop streaming solutions using tools like Apache Flink , Spark Structured Streaming . Drive initiatives that abstract infrastructure complexity , enabling ML, analytics, and product teams to build faster on the platform. Champion a platform-building mindset focused on reusability , extensibility , and developer self-service . Ensure data quality, consistency, and governance through validation frameworks, observability tooling, and access controls. Optimize infrastructure for cost, latency, performance , and scalability in modern cloud-native environments . Mentor and guide junior engineers , contribute to architecture reviews, and uphold high engineering standards. Collaborate cross-functionally with product, ML, and data teams to align technical solutions with business needs. What Were Looking For 5-8 years of professional experience in software/data engineering with a focus on distributed data systems . Strong programming skills in Java , Scala , or Python , and expertise in SQL . At least 2 years of hands-on experience with big data systems including Apache Kafka , Apache Spark/EMR/Dataproc , Hive , Delta Lake , Presto/Trino , Airflow , and data lineage tools (e.g., Datahb,Marquez, OpenLineage). Experience implementing and tuning Spark/Delta Lake/Presto at terabyte-scale or beyond. Strong understanding of Apache Spark internals (Catalyst, Tungsten, shuffle, etc.) with experience customizing or contributing to open-source code. Familiarity and worked with modern open-source and cloud-native data stack components such as: Apache Iceberg , Hudi , or Delta Lake Trino/Presto , DuckDB , or ClickHouse,Pinot ,Druid Airflow , Dagster , or Prefect DBT , Great Expectations , DataHub , or OpenMetadata Kubernetes , Terraform , Docker Strong analytical and problem-solving skills , with the ability to debug complex issues in large-scale systems. Exposure to data security, privacy, observability , and compliance frameworks is a plus. Good to Have Contributions to open-source projects in the big data ecosystem (e.g., Spark, Kafka, Hive, Airflow) Hands-on data modeling experience and exposure to end-to-end data pipeline development Familiarity with OLAP data cubes and BI/reporting tools such as Tableau, Power BI, Superset, or Looker Working knowledge of tools and technologies like ELK Stack (Elasticsearch, Logstash, Kibana) , Redis , and MySQL Exposure to backend technologies including RxJava , Spring Boot , and Microservices architecture
Posted 3 days ago
5.0 - 10.0 years
17 - 25 Lacs
chennai
Work from Office
Skill Set:- Scala Java Flink Beam Kafka
Posted 4 days ago
5.0 - 9.0 years
5 - 10 Lacs
chennai
Work from Office
*Minimum 4+ years of development and design experience in Java/Scala with Flink, Beam (or spark streaming) and Kafka *Experience in JVM tuning for performance *Working on Caching systems, with experience using Apache Ignite will be preferable.
Posted 4 days ago
4.0 - 8.0 years
11 - 16 Lacs
noida
Work from Office
Job Summary: We are seeking a highly skilled and experienced Lead Data Engineer to join our data engineering team. The ideal candidate will have a strong background in designing and deploying scalable data pipelines using Azure technologies, Spark, Flink, and modern data lakehouse architectures. This role demands hands-on technical expertise, leadership in managing offshore teams, and a strategic mindset to drive data-driven decision-making across financial and regulatory domains Key Responsibilities: Design, develop, and deploy scalable batch and streaming data pipelines using PySpark, Flink, Scala, SQL, and Redis. Lead migration of complex on-premise workflows to Azure cloud ecosystem (Databricks, ADLS, Azure Data Factory), optimizing infrastructure and deployment processes. Implement performance tuning strategies to reduce job runtimes and enhance data reliability, including optimization of Unity Catalog tables. Collaborate with product stakeholders to deliver high-priority data features and ensure alignment with business goals. Manage and mentor an 8-member offshore team, fostering best practices in data engineering and agile development. Conduct internal training sessions on modern data architecture, cloud-native deployments, and data engineering best practices. Required Skills & Technologies: Big Data Tools: PySpark, Spark, Flink, Hive, Hadoop, Delta Lake, Streaming, ETL Cloud Platforms: Azure (ADF, Databricks, ADLS, Event Hub), AWS (S3) Orchestration & DevOps: Airflow, Docker, Kubernetes, GitHub Actions, Jenkins Programming Languages: Python, Scala, SQL, Shell Other Tools: Redis, Solace, MQ, Kafka, Grafana, Postman Soft Skills: Team Leadership, Agile Methodologies, Stakeholder Management, Technical Training Certifications (Good to have): Databricks Certified: Data Engineer Associate, Lakehouse Fundamentals Microsoft Certified: Azure Fundamentals (AZ-900), Azure Data Fundamentals (DP-900) Preferred Qualifications: Bachelors degree in Engineering (E.C.E.) with strong academic performance.Proven experience in financial data pipelines, regulatory reporting, and risk analytics Mandatory Competencies Big Data - Big Data - Pyspark Big Data - Big Data - SPARK Beh - Communication Big Data - Big Data - HIVE Programming Language - Scala - Scala Big Data - Big Data - Hadoop DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - Containerization (Docker, Kubernetes) DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - Jenkins Cloud - AWS - AWS S3, S3 glacier, AWS EBS DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - GitLab,Github, Bitbucket
Posted 4 days ago
8.0 - 13.0 years
41 - 46 Lacs
pune
Work from Office
Overview We're seeking an exceptional Principal AI Engineer to join our AI team and shape the development of cutting-edge solutions that address MSCI's most complex analytical challenges. In this role, you'll architect and implement advanced AI systems that transform how global investors make critical decisions by providing sophisticated insights across financial markets, risk assessment, and portfolio analytics. The ideal candidate brings deep expertise in AI/ML, proven experience in delivering enterprise-scale AI solutions, and a passion for leveraging technology to solve multifaceted problems in the financial data ecosystem. As a senior individual contributor, you'll work closely with cross-functional teams to design innovative AI-powered platforms that process vast amounts of heterogeneous data, generate actionable insights, and enable data-driven decision-making at scale. You'll serve as a technical expert, establishing best practices and driving the adoption of state-of-the-art AI technologies while ensuring our solutions meet the rigorous standards required by institutional investors globally. This role offers the unique opportunity to work on diverse, high-impact AI applications that shape how the world's largest financial institutions understand and navigate complex market dynamics. Responsibilities Design and implement enterprise-scale AI architectures for processing and analyzing complex, multi-dimensional financial and alternative data Architect end-to-end ML pipelines that handle multi-modal data sources including structured market data, unstructured documents, and real-time feeds Develop advanced NLP systems for extracting and validating critical metrics from millions of documents, reports, and regulatory filings Apply cutting-edge AI techniques (including LLMs, computer vision, and deep learning) to solve MSCI's diverse AI use cases Implement MLOps best practices and automated systems for model deployment, monitoring, and continuous improvement across multiple product lines Collaborate with product teams and domain experts to translate complex requirements into scalable technical solutions Provide technical guidance and mentorship to other engineers on AI/ML best practices and architectural decisions Partner with MSCI Research teams to advance the state of AI applications in financial analytics Drive technical excellence through code reviews, design documentation, and knowledge sharing across the organization Help improve AI Engg team's productivty by using AI agents to automate tasks Qualifications 10+ years of experience building and deploying AI/ML systems at scale, with deep hands-on technical expertise Expert-level proficiency in modern AI/ML frameworks (TensorFlow, PyTorch) and distributed computing platforms (Flink, Beam, Spark, Ray) Proven track record of architecting and delivering enterprise AI platforms handling large scale data Advanced knowledge of NLP, computer vision, and time-series analysis techniques with practical application experience Experience with financial data, risk modeling, or quantitative analytics is highly desirable Expertise in building real-time streaming architectures using technologies like Kafka, Flink, or Beam Strong foundation in MLOps practices including model versioning, A/B testing, and automated retraining pipelines Proficiency in cloud platforms (GCP, Azure) and containerization technologies (Docker, Kubernetes) Excellent technical communication skills with ability to present complex AI concepts to diverse stakeholders Track record of publishing research or speaking at conferences on AI/ML topics Passion for solving complex analytical challenges and using technology to drive innovation in financial markets What we offer you Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing. Flexible working arrangements, advanced technology, and collaborative workspaces. A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results. A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients. Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development. Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles. We actively nurture an environment that builds a sense of inclusion belonging and connection, including eight Employee Resource Groups. All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women’s Leadership Forum. At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You’ll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries. To all recruitment agencies MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes. Note on recruitment scams We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com
Posted 4 days ago
5.0 - 10.0 years
15 - 30 Lacs
bengaluru
Work from Office
About the job About Us Diligent is the AI leader in governance, risk and compliance (GRC) SaaS solutions, helping more than 1 million users and 700,000 board members to clarify risk and elevate governance. The Diligent One Platform gives practitioners, the C-Suite and the board a consolidated view of their entire GRC practice so they can more effectively manage risk, build greater resilience and make better decisions, faster. At Diligent, we're building the future with people who think boldly and move fast. Whether you're designing systems that leverage large language models or part of a team reimaging workflows with AI, you'll help us unlock entirely new ways of working and thinking. Curiosity is in our DNA, we look for individuals willing to ask the big questions and experiment fearlessly - those who embrace change not as a challenge, but as an opportunity. The future belongs to those who keep learning, and we are building it together. At Diligent, youre not just building the future - youre an agent of positive change, joining a global community on a mission to make an impact. Learn more at diligent.com or follow us on LinkedIn and Facebook Position Overview We are looking for a Senior Data Engineer to lead the design and implementation of our data platform , enabling decentralized data ownership with federated governance across business domains. You will build scalable, self-service platforms that empower teams, integrate diverse data sources, and ensure governance, risk, and compliance (GRC) requirements are met. Responsibilities Design and implement domain-oriented, federated data architectures with clear APIs, SLAs, and ownership. Build self-serve data infrastructure platforms for discovery, access, and lifecycle management. Implement automated governance and compliance policies at scale. Lead development of high-performance pipelines, data catalogs, and monitoring systems. Integrate data from enterprise systems, cloud-native services, external APIs, IoT, and unstructured sources. Utilize MCP Server for orchestration and workflow control of pipelines and compliance processes. Mentor engineers and drive adoption of data-as-a-product thinking across teams. Core Qualifications Bachelor's degree in computer science, Engineering, or related field. 5-10 years in data engineering with increasing leadership responsibilities. 3+ years with distributed data architectures and multi-domain ecosystems. Strong expertise in Python, SQL, and distributed computing (Spark, Flink, Kafka). Experience with AWS (or multi-cloud), hands on experience on AWS services, setting up Containerize applications through IAC Tools like Terraform, CDK. Familiarity with data catalog/discovery tools (e.g., DataHub, Collibra), schema evolution, and data marketplaces. Hands-on experience with GRC-focused systems, MCP Server, policy automation and AI technology. Preferred Skills Knowledge of real-time analytics, MLOps, and self-healing pipelines. Experience in regulated industries and with privacy/regulatory standards (GDPR, CCPA). Exposure to emerging data technologies (e.g., Data Mesh platforms, decentralized protocols). What Diligent Offers You Creativity is ingrained in our culture. We are innovative collaborators by nature. We thrive in exploring how things can be differently both in our internal processes and to help our clients We care about our people. Diligent offers a flexible work environment, global days of service, comprehensive health benefits, meeting free days, generous time off policy and wellness programs to name a few We have teams all over the world. We may be headquartered in New York City, but we have office hubs in Washington D.C., Vancouver, London, Galway, Budapest, Munich, Bengaluru, Singapore, and Sydney. Diversity is important to us. Growing, maintaining and promoting a diverse team is a top priority for us. We foster and encourage diversity through our Employee Resource Groups and provide access to resources and education to support the education of our team, facilitate dialogue, and foster understanding. Diligent created the modern governance movement. Our world-changing idea is to empower leaders with the technology, insights and connections they need to drive greater impact and accountability to lead with purpose. Our employees are passionate, smart, and creative people who not only want to help build the software company of the future, but who want to make the world a more sustainable, equitable and better place. Headquartered in New York, Diligent has offices in Washington D.C., London, Galway, Budapest, Vancouver, Bengaluru, Munich, Singapore and Sydney. To foster strong collaboration and connection, this role will follow a hybrid work model. If you are within a commuting distance to one of our Diligent office locations, you will be expected to work onsite at least 50% of the time. We believe that in-person engagement helps drive innovation, teamwork, and a strong sense of community. We are a drug free workplace. Diligent is proud to be an equal opportunity employer. We do not discriminate based on race, color, religious creed, sex, national origin, ancestry, citizenship status, pregnancy, childbirth, physical disability, mental disability, age, military status, protected veteran status, marital status, registered domestic partner or civil union status, gender (including sex stereotyping and gender identity or expression), medical condition (including, but not limited to, cancer related or HIV/AIDS related), genetic information, or sexual orientation in accordance with applicable federal, state and local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Diligent's EEO Policy and Know Your Rights . We are committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at recruitment@diligent.com. To all recruitment agencies: Diligent does not accept unsolicited agency resumes. Please do not forward resumes to our jobs alias, Diligent employees or any other organization location. Diligent is not responsible for any fees related to unsolicited resumes.
Posted 4 days ago
3.0 - 8.0 years
9 - 13 Lacs
hyderabad, chennai, gurugram
Work from Office
The impact youll make Enable rapid, responsible releases by coding evaluation pipelines that surface safety issues before models reach production. Drive operational excellence through reliable, secure, and observable systems that safety analysts and customers trust. Advance industry standards by contributing to bestpractice libraries, opensource projects, and internal frameworks for testing alignment, robustness, and interpretability. What youll do Design & implement safety toolingfrom automated adversarial test harnesses to driftmonitoring dashboardsusing Python, PyTorch/TensorFlow, and modern cloud services. Collaborate across disciplines with fellow engineers, data scientists, and product managers to translate customer requirements into clear, iterative technical solutions. Own quality endtoend: write unit/integration tests, automate CI/CD, and monitor production metrics to ensure reliability and performance. Containerize and deploy services using Docker and Kubernetes, following infrastructureascode principles (Terraform/CDK). Continuously learn new evaluation techniques, model architectures, and security practices; share knowledge through code reviews and technical talks. Experiences youll bring 3+ years of professional software engineering experience, ideally with dataintensive or MLadjacent systems. Demonstrated success shipping production code that supports highavailability services or platforms. Experience working in an agile, collaborative environment, delivering incremental value in short cycles. Technical skills you'll need Languages & ML frameworks: Strong Python plus handson experience with PyTorch or TensorFlow (bonus points for JAX and LLM finetuning). Cloud & DevOps: Comfortable deploying containerized services (Docker, Kubernetes) on AWS, GCP, or Azure; infrastructureascode with Terraform or CDK. MLOps & experimentation: Familiar with tools such as MLflow, Weights & Biases, or SageMaker Experiments for tracking runs and managing models. Data & APIs: Solid SQL, exposure to at least one NoSQL store, and experience designing or consuming RESTful APIs. Security mindset: Awareness of secure coding and compliance practices (e.g., SOC 2, ISO 27001). Nice to have: LangChain/LangGraph, distributed processing (Spark/Flink), or prior contributions to opensource ML safety projects. Why youll love this role Enjoy the freedom to work from anywhere your productivity, your environment Purpose-driven work: Contribute meaningfully to a safer AI future while enabling groundbreaking innovation. Strategic visibility: Directly impact high-profile AI safety initiatives with industry-leading customers. Growth opportunities: Collaborate with top-tier AI talent and help shape an emerging industry. Supportive culture: Enjoy competitive compensation, flexible work arrangements, and significant investment in your professional and personal growth. Location - Chennai,Gurugram,Hyderabad,Indore,Mumbai,Noida
Posted 5 days ago
6.0 - 11.0 years
25 - 37 Lacs
bengaluru
Work from Office
Skills Required: Familiarity with data processing engines such as Apache Spark, Flink, or other big data tools. Design, develop, and implement robust data lake architectures on cloud platforms (AWS/Azure). Implement streaming and batch data pipelines using Apache Hudi, Apache Hive, and cloud-native services like AWS Glue, Azure Data Lake, etc. Architect and optimize ingestion, compaction, partitioning, and indexing strategies in Apache Hudi. Develop scalable data transformation and ETL frameworks using Python, Spark, and Flink. Work closely with DataOps/DevOps to build CI/CD pipelines and monitoring tools for data lake platforms. Ensure data governance, schema evolution handling, lineage tracking, and compliance. Sound knowledge of Hive, Parquet/ORC formats, and DeltaLake vs Hudi vs Iceberg Strong understanding of schema evolution, data versioning, and ACID guarantees in data lakes Collaborate with analytics and BI teams to deliver clean, reliable, and timely datasets. Troubleshoot performance bottlenecks in big data processing workloads and pipelines. Experience with data governance tools and practices, including data cataloging, data lineage, and metadata management. Strong understanding of data integration and movement between different storage systems (databases, data lakes, data warehouses). Strong understanding of API integration for data ingestion, including RESTful services and streaming data. Experience in data migration strategies, tools, and frameworks for moving data from legacy systems (on-premises) to cloud-based solutions. Proficiency with data warehousing solutions (e.g., Google BigQuery, Snowflake). Expertise in data modeling tools and techniques (e.g., SAP Datasphere, EA Sparx). Strong knowledge of SQL and NoSQL databases (e.g., MongoDB, Cassandra). Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud). Nice To Have Experience with Apache Iceberg, Delta Lake Familiarity with Kinesis, Kafka, or any streaming platform Exposure to dbt, Airflow, or Dagster Experience in data cataloging, data governance tools, and column-level lineage tracking
Posted 6 days ago
3.0 - 8.0 years
13 - 17 Lacs
hyderabad, chennai, gurugram
Work from Office
The impact youll make Enable rapid, responsible releases by coding evaluation pipelines that surface safety issues before models reach production. Drive operational excellence through reliable, secure, and observable systems that safety analysts and customers trust. Advance industry standards by contributing to bestpractice libraries, opensource projects, and internal frameworks for testing alignment, robustness, and interpretability. What youll do Design & implement safety toolingfrom automated adversarial test harnesses to driftmonitoring dashboardsusing Python, PyTorch/TensorFlow, and modern cloud services. Collaborate across disciplines with fellow engineers, data scientists, and product managers to translate customer requirements into clear, iterative technical solutions. Own quality endtoend: write unit/integration tests, automate CI/CD, and monitor production metrics to ensure reliability and performance. Containerize and deploy services using Docker and Kubernetes, following infrastructureascode principles (Terraform/CDK). Continuously learn new evaluation techniques, model architectures, and security practices; share knowledge through code reviews and technical talks. Experiences youll bring 3+ years of professional software engineering experience, ideally with dataintensive or MLadjacent systems. Demonstrated success shipping production code that supports highavailability services or platforms. Experience working in an agile, collaborative environment, delivering incremental value in short cycles. Technical skills you'll need Languages & ML frameworks: Strong Python plus handson experience with PyTorch or TensorFlow (bonus points for JAX and LLM finetuning). Cloud & DevOps: Comfortable deploying containerized services (Docker, Kubernetes) on AWS, GCP, or Azure; infrastructureascode with Terraform or CDK. MLOps & experimentation: Familiar with tools such as MLflow, Weights & Biases, or SageMaker Experiments for tracking runs and managing models. Data & APIs: Solid SQL, exposure to at least one NoSQL store, and experience designing or consuming RESTful APIs. Security mindset: Awareness of secure coding and compliance practices (e.g., SOC 2, ISO 27001). Nice to have: LangChain/LangGraph, distributed processing (Spark/Flink), or prior contributions to opensource ML safety projects. Why youll love this role Enjoy the freedom to work from anywhere your productivity, your environment Purpose-driven work: Contribute meaningfully to a safer AI future while enabling groundbreaking innovation. Strategic visibility: Directly impact high-profile AI safety initiatives with industry-leading customers. Growth opportunities: Collaborate with top-tier AI talent and help shape an emerging industry. Supportive culture: Enjoy competitive compensation, flexible work arrangements, and significant investment in your professional and personal growth. Location - Chennai,Gurugram,Hyderabad,Indore,Mohali,Mumbai,Noida
Posted 6 days ago
8.0 - 11.0 years
13 - 18 Lacs
bengaluru
Work from Office
Expert-level SQL skills, with deep understanding of performance tuning and optimization. Strong background in distributed processing frameworks (e.g., Apache Flink, Kafka, Spark). Hands-on experience with RisingWave in a production environment. Proven experience in leading technical teams and delivering complex data projects. Experience in a dba or real-time analytics role within an e-commerce environment. Strong communication skills and a proactive, collaborative mindset. Good to have hands on experience on DBT.
Posted 6 days ago
8.0 - 13.0 years
15 - 19 Lacs
bengaluru
Work from Office
Meet the Team Cisco Cloud Security Group is at the forefront of developing cloud-delivered security needs and challenges of our customers. The Cloud Security group focuses on developing cloud delivered security solutions (SaaS based) in a platform centric approach. This group was formed a couple of years ago by combining some of existing cloud assets Cisco had with two hugely successful acquisitions - OpenDNS and CloudLock. Our vision is to build the most complex security solutions in a cloud delivered way with utmost simplicity - disrupt industry's thinking around how deep and how broad a security solution can be while keeping it easy to deploy and simple to manage. We are at an exciting stage of this journey and looking for a passionate, innovative and action-oriented engineering leader to build next-gen cloud security solutions like Cloud Firewall, IPS, IDS etc. We are seeking a skilled developer to join our control plane team. The ideal candidate will design, develop, and maintain scalable software solutions that detect anomalies in large-scale data environments. This role requires strong programming skills, experience with machine learning or statistical anomaly detection techniques, and the ability to work collaboratively in a fast-paced environment. Your Impact : Develop, test, and deploy software components for anomaly detection systems. Collaborate with data scientists and engineers to implement machine learning models for anomaly detection. Optimize algorithms for real-time anomaly detection and alerting. Analyze large datasets to identify patterns and improve detection accuracy. Maintain and enhance existing anomaly detection infrastructure. Participate in code reviews, design discussions, and agile development processes. Troubleshoot and resolve issues related to anomaly detection applications. Document development processes, system designs, and operational procedures. Minimum Qualifications: Bachelor's degree in Computer Science, Software Engineering, or a related field. 8+ years of relevant industry experience on development 5+ Strong proficiency in programming languages such as Python, Java, Go. 5+ Experience with big data technologies (e.g., Hadoop, Spark, Flink) and data processing pipelines. Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn). Knowledge of anomaly detection techniques and statistical analysis. Understanding of cloud platforms and Distributed Databases (Snowflake, Clickhouse) with experience in containerization. Hands on and sound knowledge in technologies such as Kafka, Kubernetes, NoSQL. Excellent problem-solving skills and attention to detail. Strong communication and teamwork abilities. Preferred Qualifications: Experience working on anomaly detection or cybersecurity systems. Background in data science, statistics, or related fields. Familiarity with monitoring and alerting tools. Experience with CI/CD pipelines and automated testing.
Posted 6 days ago
5.0 - 10.0 years
3 - 7 Lacs
bengaluru
Work from Office
Your Role and Responsibilities As a Technical Support Professional, you should have experience in a customer-facing leadership capacity. This role necessitates exceptional customer relationship management skills along with a solid technical grasp of the product/s they will support. The Technical Support Professional is expected to adeptly manage conflicting priorities, thrive under pressure, and autonomously navigate tasks with minimal active guidance. The successful applicant should possess a comprehensive understanding of IBM support, development, and service processes and deliveries. Knowledge of other IBM business procedures and professional training in mediation or conflict resolution would be advantageous. Your primary responsibilities include: Direct Problem-Solving Experience:Previous experience in addressing client issues is valuable, along with a demonstrated ability to effectively resolve problems. Strong Communication Skills: Ability to communicate clearly with both internal and external clients through spoken and written channels. Business Networking ExperienceIn-depth experience and understanding of the IBM and/or OEM support organizations, facilitating effective networking and collaboration. Excellent Coordination, Leadership & Organizational Skills: Exceptional coordination and organizational abilities, capable of leading diverse teams and multitasking within a team-based business network environment. Proficiency in project management is beneficial. Excellence in Client Service & Client Satisfaction:Personal commitment to pursuing client satisfaction and continuous improvement in the delivery of client problem resolution. Language Skills: Proficiency in English is required, with fluency in multiple languages considered advantageous. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Bachelor's Degree Experience5+ years Basic knowledge in Operating system administration (Windows, Linux) Basic knowledge in database administration (DB2, Oracle, MS SQL) EnglishFluent in speaking and writing Analytical thinking, structured problem-solving techniques Strong positive customer service attitude with sensitivity to client satisfaction. Must be a self-starter, quick learner, and enjoy working in a challenging, fast paced environment. Strong analytical and troubleshooting skills, including problem recreation, analyzing logs and traces, debugging complex issues to determine a course of action and recommend solutions. Preferred technical and professional experience Master's Degree in Information Technology Knowledge with OpenShift Knowledge with Apache Flink and Kafka Knowledge with Kibana Knowledge with Containerization and Kubernetes Knowledge with scripting (including Python, JavaScript) Knowledge with products of IBM's Digital Business Automation Product Family Knowledge with Process/Data Mining Knowledge with Containerization Basic knowledge of process/data mining Basic knowledge of LDAP Basic knowledge of AI technologies Fluent in speaking and writing in English Experience in Technical Support is a plus
Posted 6 days ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
Accenture
73564 Jobs | Dublin
Wipro
27625 Jobs | Bengaluru
Accenture in India
22690 Jobs | Dublin 2
EY
20638 Jobs | London
Uplers
15021 Jobs | Ahmedabad
Bajaj Finserv
14304 Jobs |
IBM
14148 Jobs | Armonk
Accenture services Pvt Ltd
13138 Jobs |
Capgemini
12942 Jobs | Paris,France
Amazon.com
12683 Jobs |