2.0 - 7.0 years
6 - 10 Lacs
Bengaluru
Work from Office
About the Position
This is an opportunity for Engineering Managers to join our Data Platform organization, which is passionate about scaling high-volume, low-latency, distributed data-platform services and data products. In this role, you will work with engineers throughout the organization to build foundational infrastructure that allows Okta to scale for years to come. As the manager of the Data Foundations team in the Data Platform Group, your team will be responsible for designing, building, and deploying the foundational systems that power our data analytics and ML. Our analytics infrastructure stack sits on top of many modern technologies, including Kinesis, Flink, ElasticSearch, and Snowflake, and we are now looking to adopt GCP. We are seeking an Engineering Manager with a strong technical background and excellent communication skills to join us and partner with senior leadership as a thought leader in our strategic Data & ML projects. Our platform projects have a directive from engineering leadership to make Okta a leader in the use of data and machine learning to improve end-user security, and to expand that core competency across the rest of engineering. You will have a sizable impact on the direction, design, and implementation of the data solutions to these problems.

What you will be doing:
- Recruit and mentor a globally distributed and talented group of diverse employees
- Collaborate with Product, Design, QA, Documentation, Customer Support, Program Management, TechOps, and other scrum teams
- Engage in technical design discussions and help drive technical architecture
- Ensure the happiness and productivity of the team's software engineers
- Communicate the vision of our product to external entities
- Help mitigate risk (technical, product, personnel)
- Use professional acumen to improve Okta's technology, product, and engineering
- Participate in relevant engineering workgroups and on-call rotations
- Foster, enable, and promote innovation
- Define team metrics and meet the productivity goals of the organization
- Track and manage cloud infrastructure costs in partnership with Okta's FinOps team

What you will bring to the role:
- A track record of leading or managing high-performing platform teams (2 years minimum)
- Experience with end-to-end project delivery, from building roadmaps through operational sustainability
- Strong facilitation skills (design, requirements gathering, progress and status sessions)
- Production experience with distributed systems running in AWS; GCP a bonus
- Passion for automation and for leveraging agile software development methodologies
- Prior experience with data platforms
- Prior hands-on software development experience as an IC using cloud-based distributed computing technologies, including messaging systems such as Kinesis or Kafka; data processing systems like Flink, Spark, or Beam; storage and compute systems such as Snowflake or Hadoop; and coordinators and schedulers like the ones in Kubernetes, Hadoop, or Mesos
- Experience developing and tuning highly scalable distributed systems
- Experience with reliability engineering, specifically in areas such as data quality, data observability, and incident management

Extra credit if you have experience in any of the following!
- Deep Data & ML experience
- Multi-cloud experience
- Federal cloud environments / FedRAMP
- Contributed to the development of distributed systems, or used one or more at high volume or criticality, such as Kafka or Hadoop
Posted 4 days ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Hybrid
About the Team
The Data Platform team is responsible for the foundational data services, systems, and data products at Okta that benefit our users. Today, the Data Platform team solves challenges and enables:
- Streaming analytics
- Interactive end-user reporting
- A data and ML platform for Okta to scale
- Telemetry of our products and data

Our elite team is fast, creative, and flexible. We encourage ownership. We expect great things from our engineers and reward them with stimulating new projects, new technologies, and the chance to have significant equity in a company. Okta is about to change the cloud computing landscape forever.

About the Position
This is an opportunity for experienced Software Engineers to join our fast-growing Data Platform organization, which is passionate about scaling high-volume, low-latency, distributed data-platform services and data products. In this role, you will work with engineers throughout the organization to build foundational infrastructure that allows Okta to scale for years to come. As a member of the Data Platform team, you will be responsible for designing, building, and deploying the systems that power our data analytics and ML. Our analytics infrastructure stack sits on top of many modern technologies, including Kinesis, Flink, ElasticSearch, and Snowflake. We are looking for experienced Software Engineers who can help design and own the building, deploying, and optimizing of the streaming infrastructure. This project has a directive from engineering leadership to make Okta a leader in the use of data and machine learning to improve end-user security, and to expand that core competency across the rest of engineering. You will have a sizable impact on the direction, design, and implementation of the solutions to these problems.

Job Duties and Responsibilities:
- Design, implement, and own data-intensive, high-performance, scalable platform components
- Work with engineering teams, architects, and cross-functional partners on the development, design, and implementation of projects
- Conduct and participate in design reviews, code reviews, analysis, and performance tuning
- Coach and mentor engineers to help scale up the engineering organization
- Debug production issues across services and multiple levels of the stack

Required Knowledge, Skills, and Abilities:
- 5+ years of experience in an object-oriented language, preferably Java
- Hands-on experience using cloud-based distributed computing technologies, including messaging systems such as Kinesis or Kafka; data processing systems like Flink, Spark, or Beam; storage and compute systems such as Snowflake or Hadoop; and coordinators and schedulers like the ones in Kubernetes, Hadoop, or Mesos
- Experience developing and tuning highly scalable distributed systems
- Excellent grasp of software engineering principles
- Solid understanding of multithreading, garbage collection, and memory management
- Experience with reliability engineering, specifically in areas such as data quality, data observability, and incident management

Nice to have:
- Maintained security, encryption, identity management, or authentication infrastructure
- Leveraged major public cloud providers to build mission-critical, high-volume services
- Hands-on experience developing data integration applications for large-scale (petabyte-scale) environments, covering both batch and online systems
- Contributed to the development of distributed systems, or used one or more at high volume or criticality, such as Kafka or Hadoop
- Experience developing Kubernetes-based services on the AWS stack
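As a hedged illustration of the kind of streaming work this posting describes - not Okta's actual codebase - the sketch below shows a minimal PyFlink job that filters and counts events per tenant from an in-memory source; the event fields and job name are hypothetical, and a production job would read from Kinesis or Kafka instead.

```python
# Minimal PyFlink sketch: running count of high-severity events per tenant.
# Hypothetical fields (tenant, severity); a real job would use a Kinesis/Kafka source.
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)

events = env.from_collection([
    ("tenant-a", "high"), ("tenant-b", "low"), ("tenant-a", "high"),
])

(events
    .filter(lambda e: e[1] == "high")           # keep only high-severity events
    .map(lambda e: (e[0], 1))                   # (tenant, 1)
    .key_by(lambda e: e[0])                     # partition by tenant
    .reduce(lambda a, b: (a[0], a[1] + b[1]))   # running count per tenant
    .print())

env.execute("high-severity-event-counts")
```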
Posted 4 days ago
8.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes, and technologies that drive modern organizations. Since 2011, our mission hasn't changed - we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day, and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe, and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation, and a fanatical commitment to our customers, our community, and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About The Role:
As an engineer on this team, you will play an integral role as we build out our ML Platform & GenAI Studio from the ground up. Since the launch of ChatGPT, the number of phishing attacks has increased, making the ML platform a critical capability for CrowdStrike in its fight against bad actors. For this mission we are building a team in Bangalore. You will collaborate closely with Data Platform Software Engineers, Data Scientists, and Threat Analysts to design, implement, and maintain scalable ML pipelines used for data preparation, cataloguing, feature engineering, model training, and model serving that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modelling attack paths for IT assets.

Location: Bangalore. Candidates must be comfortable visiting the office once a week.

What You'll Do:
- Help design, build, and facilitate adoption of a modern ML platform, including support for use cases like GenAI
- Understand current ML workflows, anticipate future needs, identify common patterns, and exploit opportunities to templatize them into repeatable components for model development, deployment, and monitoring
- Build a platform that scales to thousands of users and offers self-service capability for building ML experimentation, training, and inference pipelines
- Leverage workflow orchestration tools to deploy efficient, scalable execution of complex data and ML pipelines
- Champion software development best practices for building distributed systems
- Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment

What You'll Need:
- B.S. in Computer Science or a related field with 10+ years of related experience, or M.S. with 8+ years of experience
- 3+ years of experience developing and deploying machine learning solutions to production
- Familiarity with typical machine learning workflows from an engineering perspective (how they are built and used, not necessarily the theory), including supervised and unsupervised approaches: how, why, and when labelled data is created and used
- 3+ years of experience with ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, etc.
- Experience building data platform products or features with Apache Spark, Flink, or comparable tools
- Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.)
- Production experience with infrastructure-as-code tools such as Terraform or FluxCD
- Expert-level experience with Python; Java/Scala exposure is recommended
- Ability to write Python interfaces that provide standardized, simplified access for data scientists to internal CrowdStrike tools
- Expert-level experience with containerization frameworks
- Strong analytical and problem-solving skills, capable of working in a dynamic environment
- Exceptional interpersonal and communication skills; able to work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes

Bonus Points:
Critical skills needed for the role:
- Distributed systems knowledge
- Data/ML platform experience

Experience with the following is desirable:
- Go
- Iceberg (highly desirable)
- Pinot or another time-series/OLAP-style database
- Jenkins
- Parquet
- Protocol Buffers/gRPC

Benefits of Working at CrowdStrike:
- Remote-friendly and flexible work culture
- Market leader in compensation and equity awards
- Comprehensive physical and mental wellness programs
- Competitive vacation and holidays for recharge
- Paid parental and adoption leaves
- Professional development opportunities for all employees regardless of level or role
- Employee Networks, geographic neighborhood groups, and volunteer opportunities to build connections
- Vibrant office culture with world-class amenities
- Great Place to Work Certified across the globe

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions (including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations, and social/recreational programs) on valid job requirements. If you need assistance accessing or reviewing the information on this website, or need help submitting an application for employment or requesting an accommodation, please contact us for further assistance.
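To make the ML-platform duties concrete, here is a minimal, hypothetical sketch of the experiment-to-artifact flow such a platform typically templatizes, using MLflow (which the posting names); the experiment name, parameters, and model are illustrative, not CrowdStrike's.

```python
# Hypothetical sketch of the experiment-tracking flow an ML platform templatizes.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

mlflow.set_experiment("phishing-detector-demo")  # illustrative experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    mlflow.log_param("n_estimators", 50)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # store the artifact for serving
```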
Posted 4 days ago
3.0 - 5.0 years
0 Lacs
India
On-site
DESCRIPTION
Have you ever wondered how Amazon shipped your order so fast? Wondered where it came from, or how much it cost us? To help describe some of our challenges, we created a short video about Supply Chain Optimization at Amazon: http://bit.ly/amazon-scot

We are seeking a Data Engineer to join our team. Amazon has a culture of data-driven decision-making and demands business intelligence that is timely, accurate, and actionable. Your work will have an immediate influence on day-to-day decision-making at Amazon.com. As an Amazon Data Engineer you will be working in one of the world's largest and most complex data warehouse environments. We maintain one of the largest data marts in Amazon, and we work on business intelligence reporting and dashboarding solutions that are used by thousands of users worldwide. Our team is responsible for the timely delivery of mission-critical analytical reports and metrics that are viewed at the highest levels of the organization.

You should have deep expertise in the design, creation, management, and business use of extremely large datasets. You should have excellent business and communication skills, working with business owners to develop and define key business questions and to build data sets that answer those questions. You should be expert at designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data warehouse and into end-user-facing applications, and able to work with business customers in a fast-paced environment to understand business requirements and implement reporting solutions. Above all, you should be passionate about working with huge data sets, bringing datasets together to answer business questions, and driving change.

Key job responsibilities
This role requires an engineer with 4+ years of experience building data solutions, combined with both consulting and hands-on expertise. The position involves building new and maintaining existing data warehouse implementations, developing tools to facilitate data integration, identifying and architecting appropriate storage technologies, executing projects to deliver high-quality data pipelines on time, defining continuous improvement processes, driving technology direction, and effectively leading the data engineering team. You will work with multiple internal teams who need support in managing backend data solutions. Using your deep technical expertise, strong relationship-building skills, and documentation abilities, you will create technical content, provide consultation to customers, and gather feedback to drive the AWS analytics support offering. As the voice of the customer, you will work closely with data product managers and engineering teams to help design and deliver new features and product improvements that address critical customer challenges.

A day in the life
A typical day on our team involves collaborating with other engineers to deploy new data solutions through our large automated systems while providing operational support for your newly deployed software. You will seek out innovative approaches to automate fixes for operational issues, leverage AWS services to solve design problems, and engage with other internal teams to integrate your applications with theirs. You'll be part of a world-class team in an inclusive environment that maintains the entrepreneurial feel of a startup. This is an opportunity to operate and engineer systems at massive scale and to gain top-notch experience in database storage technologies.

BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience programming in at least one modern language such as C++, C#, Java, Python, Golang, PowerShell, or Ruby
- Knowledge of batch and streaming data architectures like Kafka, Kinesis, Flink, Storm, or Beam
- Knowledge of distributed systems as they pertain to data storage and computing

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases and data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
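As one hedged illustration of the warehouse-loading work this posting describes - the file paths, schema fields, and bucket names below are hypothetical - a small PySpark batch job might flow raw order events into partitioned Parquet like this:

```python
# Hypothetical PySpark batch ETL: raw JSON order events -> curated Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/orders/2024-01-01/")  # hypothetical path

daily = (raw
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "marketplace")
    .agg(F.count("*").alias("orders"),
         F.sum("amount").alias("revenue")))

# Partitioned writes keep downstream scans cheap.
daily.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://example-bucket/curated/orders_daily/")
```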
Posted 4 days ago
4.0 - 6.0 years
0 Lacs
India
On-site
The Oracle Cloud Infrastructure (OCI) team can provide you the opportunity to build and operate a suite of massive-scale, integrated cloud services in a broadly distributed, multi-tenant cloud environment. OCI is committed to providing the best in cloud products that meet the needs of our customers, who are tackling some of the world's biggest challenges. We offer unique opportunities for smart, hands-on engineers with the expertise and passion to solve difficult problems in distributed, highly available services and virtualised infrastructure. At every level, our engineers have a significant technical and business impact, designing and building innovative new systems to power our customers' business-critical applications.

The OCI Security Platform & Compliance products team helps customers protect their business-critical cloud infrastructure and data. We build cloud-native security and compliance solutions that give customers visibility into the security posture of their cloud assets and help automate remediation where possible. This role provides a fantastic opportunity to build an analytics solution and a data lake by sourcing and curating data from various internal and external providers. We leverage Kafka, Spark, and machine learning technologies running on OCI. You'll work with product managers, designers, and engineers to build data-driven features. You must enjoy the excitement of agile development and interacting with other exceptional engineers.

Desired Skills and Experience:
- 4+ years of hands-on, large-scale cloud application software development
- 1+ years of experience in cloud infrastructure security and risk assessment
- 1+ years of hands-on experience with three of the following technologies: Kafka, Spark, AWS/OCI, Kubernetes, REST APIs, Linux
- 1+ years of experience using and building highly available streaming data solutions like Flink or Spark Streaming
- 1+ years of experience building applications on OCI, AWS, Azure, or GCP
- Experience with a development methodology with short release cycles
- Excellent problem-solving and communication skills with both technical and non-technical audiences

Optional Skills:
- Working knowledge of SSL, authentication, encryption, audit logging, and access policies

Oracle is an Affirmative Action-Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans status, age, or any other characteristic protected by law.

As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures. You will be responsible for defining and developing software for tasks associated with developing, designing, and debugging software applications or operating systems.

Your day-to-day responsibilities will include:
- Developing a highly available and scalable platform that aggregates and analyzes large volumes of data
- Designing, deploying, and managing large-scale distributed systems and services built on OCI
- Developing test beds and tools to help avoid regressions
- Introducing observability and issue-detection capabilities in the code
- Tracking down complex data and engineering issues, and analyzing logs and data to solve problems

Career Level - IC3
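A minimal, hypothetical sketch of the streaming-analytics pattern this posting describes, using Spark Structured Streaming from Kafka; the broker address, topic, and windowing choice are placeholders, and the posting's OCI-specific services are not assumed:

```python
# Hypothetical Spark Structured Streaming job: security findings from Kafka,
# counted per 1-minute window using the Kafka record timestamp.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("security-findings-stream").getOrCreate()

findings = (spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder brokers
    .option("subscribe", "security-findings")          # placeholder topic
    .load()
    .selectExpr("CAST(value AS STRING) AS json", "timestamp"))

counts = (findings
    .groupBy(F.window("timestamp", "1 minute"))        # a real job would parse the JSON
    .count())

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```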
Posted 4 days ago
6.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Title: Senior DevOps Engineer
Reporting to: Senior Director, Product Development
Location: Bengaluru (Bangalore)

Responsibilities:

Infrastructure Development & Integration
- Design, implement, and manage cloud-native infrastructure (AWS, Azure, GCP) to support healthcare platforms, AI agents, and clinical applications.
- Build and maintain scalable CI/CD pipelines to enable rapid and reliable delivery of software, data pipelines, and AI/ML models.
- Design and manage Kubernetes (K8s) clusters for container orchestration, workload scaling, and high availability, with integrated monitoring to ensure cluster health and performance.
- Implement Kubernetes-native tools (Helm, Kustomize, ArgoCD) for deployment automation and environment management, ensuring observability through monitoring dashboards and alerts.
- Collaborate with Staff Engineers/Architects to align infrastructure with enterprise goals for scalability, reliability, and performance, leveraging monitoring insights to inform architectural decisions.

System Optimization & Reliability
- Implement and maintain comprehensive monitoring, logging, and alerting mechanisms (Prometheus, Grafana, ELK, Datadog, AWS CloudWatch, AWS CloudTrail) to ensure real-time visibility into system performance, resource utilization, and potential incidents, and to enable proactive incident response.
- Ensure data pipeline workflows (ETL/ELT, real-time streaming, batch processing) are observable, reliable, and auditable.
- Support observability and monitoring of GenAI pipelines, embeddings, vector databases, and agentic AI workflows.
- Proactively analyze monitoring data to identify bottlenecks, predict failures, and drive continuous improvement in system reliability.

Compliance & Security
- Support audit trails and compliance reporting through automated DevSecOps practices.
- Implement security controls for LLM-based applications, AI agents, and healthcare data pipelines, including prompt-injection prevention, API rate limiting, and data governance.

Collaboration & Agile Practices
- Partner closely with software engineers, data engineers, AI/ML engineers, and product managers to deliver integrated, secure, and scalable solutions.
- Contribute to agile development processes including sprint planning, stand-ups, and retrospectives.
- Mentor junior engineers and share best practices in cloud-native infrastructure, CI/CD, Kubernetes, and automation.

Innovation & Technical Expertise
- Stay informed about emerging DevOps practices, cloud-native architectures, MLOps/LLMOps, and data engineering tools.
- Prototype and evaluate new frameworks and tools to enhance infrastructure for data pipelines, GenAI, and agentic AI applications.
- Advocate for best practices in infrastructure design, focusing on modularity, maintainability, and scalability.

Requirements

Education & Experience
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical discipline.
- 6+ years of experience in DevOps, Site Reliability Engineering, or related roles, with at least 5+ years building cloud-native infrastructure.
- Proven track record of managing production-grade Kubernetes clusters and cloud infrastructure in regulated environments.
- Experience supporting GenAI/LLM applications (e.g., OpenAI, Hugging Face, LangChain) and vector databases (e.g., Pinecone, Weaviate, FAISS).
- Hands-on experience supporting data pipeline products using ETL/ELT frameworks (Apache Airflow, dbt, Prefect) and streaming systems (Kafka, Spark, Flink); an illustrative orchestration sketch follows this posting.
- Experience deploying AI agents and orchestrating agent workflows in production environments.

Technical Proficiency
- Expertise in Kubernetes (K8s) for orchestration, scaling, and managing containerized applications.
- Strong proficiency in containerization (Docker) and Kubernetes ecosystem tools (Helm, ArgoCD, Istio/Linkerd for service mesh).
- Hands-on experience with Infrastructure as Code (Terraform, CloudFormation, or Pulumi).
- Proficiency with CI/CD tools (Jenkins, GitHub Actions, GitLab CI, ArgoCD, Spinnaker).
- Familiarity with monitoring and observability tools (Prometheus, Grafana, ELK, Datadog, AWS CloudWatch, AWS CloudTrail), including setting up dashboards, alerts, and custom metrics for cloud-native and AI systems.
- Good to have: knowledge of healthcare data standards (FHIR, HL7) and secure deployment practices for AI/ML and data pipelines.

Professional Skills
- Strong problem-solving skills with a focus on reliability, scalability, and security.
- Excellent collaboration and communication skills across cross-functional teams.
- Proactive, detail-oriented, and committed to technical excellence in a fast-paced healthcare environment.

About Get Well:
Now part of the SAI Group family, Get Well is redefining digital patient engagement by putting patients in control of their personalized healthcare journeys, both inside and outside the hospital. Get Well combines high-tech AI navigation with high-touch care experiences, driving patient activation, loyalty, and outcomes while reducing the cost of care. For almost 25 years, Get Well has served more than 10 million patients per year across over 1,000 hospitals and clinical partner sites, working to use longitudinal data analytics to better serve patients and clinicians. AI innovator SAI Group, led by Chairman Romesh Wadhwani, is the lead growth investor in Get Well. Get Well's award-winning solutions were recognized again in 2024 by KLAS Research and AVIA Marketplace. Learn more at Get Well and follow us on LinkedIn and Twitter.

Get Well is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

About SAI Group:
SAIGroup commits $1 billion of capital, an advanced AI platform that currently processes 300M+ patients, and a 4,000+ global employee base to solving enterprise AI and high-priority healthcare problems. SAIGroup - Growing companies with advanced AI: https://www.cnbc.com/2023/12/08/75-year-old-tech-mogul-betting-1-billion-of-his-fortune-on-ai-future.html
Bio of our Chairman Dr. Romesh Wadhwani: Team - SAIGroup (informal at Romesh Wadhwani - Wikipedia). TIME Magazine recently recognized Chairman Romesh Wadhwani as one of the top 100 AI leaders in the world: Romesh and Sunil Wadhwani: The 100 Most Influential People in AI 2023 | TIME
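As a hedged sketch of the ETL-orchestration work this role supports - the DAG id, schedule, and task bodies are hypothetical - an Airflow pipeline with a simple observability hook might look like:

```python
# Hypothetical Airflow DAG: extract -> transform -> load with a failure callback.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_failure(context):
    # Placeholder: a real deployment would page via Datadog/PagerDuty here.
    print(f"Task {context['task_instance'].task_id} failed")

def extract():   print("pull raw records")      # placeholder task bodies
def transform(): print("clean and enrich")
def load():      print("write to warehouse")

with DAG(
    dag_id="patient_events_etl",                # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"on_failure_callback": notify_failure},
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```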
Posted 4 days ago
5.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Title: DevOps Engineer
Reporting to: Senior Director, Product Development
Location: Bengaluru (Bangalore)

Responsibilities:

Infrastructure Development & Integration
- Design, implement, and manage cloud-native infrastructure (AWS, Azure, GCP) to support healthcare platforms, AI agents, and clinical applications.
- Build and maintain scalable CI/CD pipelines to enable rapid and reliable delivery of software, data pipelines, and AI/ML models.
- Design and manage Kubernetes (K8s) clusters for container orchestration, workload scaling, and high availability, with integrated monitoring to ensure cluster health and performance.
- Implement Kubernetes-native tools (Helm, Kustomize, ArgoCD) for deployment automation and environment management, ensuring observability through monitoring dashboards and alerts.
- Collaborate with Staff Engineers/Architects to align infrastructure with enterprise goals for scalability, reliability, and performance, leveraging monitoring insights to inform architectural decisions.

System Optimization & Reliability
- Implement and maintain comprehensive monitoring, logging, and alerting mechanisms (Prometheus, Grafana, ELK, Datadog, AWS CloudWatch, AWS CloudTrail) to ensure real-time visibility into system performance, resource utilization, and potential incidents, and to enable proactive incident response.
- Ensure data pipeline workflows (ETL/ELT, real-time streaming, batch processing) are observable, reliable, and auditable.
- Support observability and monitoring of GenAI pipelines, embeddings, vector databases, and agentic AI workflows.
- Proactively analyze monitoring data to identify bottlenecks, predict failures, and drive continuous improvement in system reliability.

Compliance & Security
- Support audit trails and compliance reporting through automated DevSecOps practices.
- Implement security controls for LLM-based applications, AI agents, and healthcare data pipelines, including prompt-injection prevention, API rate limiting, and data governance.

Collaboration & Agile Practices
- Partner closely with software engineers, data engineers, AI/ML engineers, and product managers to deliver integrated, secure, and scalable solutions.
- Contribute to agile development processes including sprint planning, stand-ups, and retrospectives.
- Mentor junior engineers and share best practices in cloud-native infrastructure, CI/CD, Kubernetes, and automation.

Innovation & Technical Expertise
- Stay informed about emerging DevOps practices, cloud-native architectures, MLOps/LLMOps, and data engineering tools.
- Prototype and evaluate new frameworks and tools to enhance infrastructure for data pipelines, GenAI, and agentic AI applications.
- Advocate for best practices in infrastructure design, focusing on modularity, maintainability, and scalability.

Requirements

Education & Experience
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical discipline.
- 5+ years of experience in DevOps, Site Reliability Engineering, or related roles, with at least 3+ years building cloud-native infrastructure.
- Proven track record of managing production-grade Kubernetes clusters and cloud infrastructure in regulated environments.
- Experience supporting GenAI/LLM applications (e.g., OpenAI, Hugging Face, LangChain) and vector databases (e.g., Pinecone, Weaviate, FAISS).
- Hands-on experience supporting data pipeline products using ETL/ELT frameworks (Apache Airflow, dbt, Prefect) and streaming systems (Kafka, Spark, Flink).
- Experience deploying AI agents and orchestrating agent workflows in production environments.

Technical Proficiency
- Expertise in Kubernetes (K8s) for orchestration, scaling, and managing containerized applications.
- Strong proficiency in containerization (Docker) and Kubernetes ecosystem tools (Helm, ArgoCD, Istio/Linkerd for service mesh).
- Hands-on experience with Infrastructure as Code (Terraform, CloudFormation, or Pulumi).
- Proficiency with CI/CD tools (Jenkins, GitHub Actions, GitLab CI, ArgoCD, Spinnaker).
- Familiarity with monitoring and observability tools (Prometheus, Grafana, ELK, Datadog, AWS CloudWatch, AWS CloudTrail), including setting up dashboards, alerts, and custom metrics for cloud-native and AI systems; an illustrative instrumentation sketch follows this posting.
- Good to have: knowledge of healthcare data standards (FHIR, HL7) and secure deployment practices for AI/ML and data pipelines.

Professional Skills
- Strong problem-solving skills with a focus on reliability, scalability, and security.
- Excellent collaboration and communication skills across cross-functional teams.
- Proactive, detail-oriented, and committed to technical excellence in a fast-paced healthcare environment.

About Get Well:
Now part of the SAI Group family, Get Well is redefining digital patient engagement by putting patients in control of their personalized healthcare journeys, both inside and outside the hospital. Get Well combines high-tech AI navigation with high-touch care experiences, driving patient activation, loyalty, and outcomes while reducing the cost of care. For almost 25 years, Get Well has served more than 10 million patients per year across over 1,000 hospitals and clinical partner sites, working to use longitudinal data analytics to better serve patients and clinicians. AI innovator SAI Group, led by Chairman Romesh Wadhwani, is the lead growth investor in Get Well. Get Well's award-winning solutions were recognized again in 2024 by KLAS Research and AVIA Marketplace. Learn more at Get Well and follow us on LinkedIn and Twitter.

Get Well is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

About SAI Group:
SAIGroup commits $1 billion of capital, an advanced AI platform that currently processes 300M+ patients, and a 4,000+ global employee base to solving enterprise AI and high-priority healthcare problems. SAIGroup - Growing companies with advanced AI: https://www.cnbc.com/2023/12/08/75-year-old-tech-mogul-betting-1-billion-of-his-fortune-on-ai-future.html
Bio of our Chairman Dr. Romesh Wadhwani: Team - SAIGroup (informal at Romesh Wadhwani - Wikipedia). TIME Magazine recently recognized Chairman Romesh Wadhwani as one of the top 100 AI leaders in the world: Romesh and Sunil Wadhwani: The 100 Most Influential People in AI 2023 | TIME
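Complementing the monitoring duties above, here is a minimal Python sketch of custom-metric instrumentation with the prometheus_client library; the metric names and the request handler are hypothetical stand-ins.

```python
# Hypothetical service instrumentation: expose custom metrics for Prometheus to scrape.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("inference_requests_total", "Total inference requests")
LATENCY = Histogram("inference_latency_seconds", "Inference latency in seconds")

@LATENCY.time()            # records each call's duration into the histogram
def handle_request():
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real model inference

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics
    while True:
        handle_request()
```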
Posted 4 days ago
4.0 - 6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
What's the role?
You will work on all phases of the software development lifecycle, from design and coding through testing, deployment, and post-production support. Your responsibilities span the entire software development process:
- Develop full-stack applications, services, and tools to process very large data sets in Java/Scala with Spark/Flink/Akka, in a cloud computing environment, mainly on AWS.
- Participate in architecture design and do application development at various levels, following best practices.
- Be responsible for technical and in-depth solutions in an innovative and fast-paced environment.
- Drive and contribute to high-quality development with availability, scalability, reliability, high performance, and cost efficiency.
- Build out systems to monitor deployed workflows and handle failures.
- Identify, investigate, analyze, and troubleshoot software defects.
- Engage with internal and external customers on requirements, change requests, and incidents to help define clear application-level specifications.
- Be part of an agile team: mentor junior engineers, share knowledge, perform design/code reviews, and communicate proactively.

Who are you?
- BS/MS in Computer Science or a related field.
- 4+ years of programming experience in Java and AWS.
- Deep understanding of data structures, algorithms, OOP design concepts, etc.
- Good understanding of functional programming and modern design patterns.
- Considerable understanding of, and experience with, SOA, MDA, and other enterprise architecture paradigms.
- Working knowledge of Scala, Spark/Flink/Akka, big data processing, and batch vs. real-time processing.
- Experience with Maven/SBT, Git/Gerrit, GitLab, Jenkins, and Linux environments.
- Strong analytical, creative problem-solving, and communication skills.
- Full-stack architecture design and application development experience.

HERE is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, age, gender identity, sexual orientation, marital status, parental status, religion, sex, national origin, disability, veteran status, and other legally protected characteristics.

Who are we?
HERE Technologies is a location data and technology platform company. We empower our customers to achieve better outcomes - from helping a city manage its infrastructure or a business optimize its assets to guiding drivers to their destination safely. At HERE we take it upon ourselves to be the change we wish to see. We create solutions that fuel innovation, provide opportunity, and foster inclusion to improve people's lives. If you are inspired by an open world and driven to create positive change, join us. Learn more about us on our YouTube channel.

In this position you will be joining HERE's traffic business unit. The HERE traffic team is responsible for developing first-class traffic-related products and real-time services that are part of HERE's product and services portfolio. You will be part of energetic and talented traffic teams that work on challenging tasks and build parallel, distributed processing systems in the big data world. In addition to the technical challenges of this position, you will have many opportunities to expand your career both technically and personally.
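As a hedged sketch of the real-time traffic processing this unit builds - the topic name, message shape, and window size are hypothetical, and the team's actual stack is JVM-based - a minimal consumer computing a rolling average speed per road segment might look like:

```python
# Hypothetical consumer: rolling average speed per road segment from a Kafka topic.
import json
from collections import defaultdict, deque

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "traffic-probes",                       # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

windows = defaultdict(lambda: deque(maxlen=100))  # last 100 probes per segment

for msg in consumer:
    probe = msg.value                        # e.g. {"segment": "A7", "speed_kmh": 92}
    w = windows[probe["segment"]]
    w.append(probe["speed_kmh"])
    print(probe["segment"], round(sum(w) / len(w), 1))
```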
Posted 5 days ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About Chargebee:
Chargebee is a subscription billing and revenue management platform powering some of the fastest-growing brands around the world today, including Calendly, Hopin, Pret-a-Manger, Freshworks, Okta, Study.com, and others. Thousands of SaaS and subscription-first businesses process billions of dollars in revenue every year through the Chargebee platform. Headquartered in San Francisco, USA, our 500+ team members work remotely throughout the world, including India, the Netherlands, Paris, Spain, Australia, and the USA. Chargebee has raised over $480 million in capital and is funded by Accel, Tiger Global, Insight Partners, Steadview Capital, and Sapphire Ventures. And we're on a mission to push the boundaries of subscription revenue operations - not just ours, but those of every customer and prospective business on a recurring revenue model. Our team builds high-quality, innovative software that enables our customers to grow their revenues, powered by a state-of-the-art subscription management platform.

Key Roles & Responsibilities
- Productionise ML workflows: build and maintain data pipelines for feature generation, ML model training, batch scoring, and real-time inference using modern orchestration and container frameworks.
- Own model-serving infrastructure: implement fast, reliable APIs and batch jobs; manage autoscaling, versioning, and rollback strategies (a serving sketch follows this posting).
- Feature-store development: design and operate feature stores and the corresponding data pipelines to guarantee training/serving consistency.
- CI/CD & developer experience: automate testing, deployment, and monitoring of data and model artefacts; provide templated repos and documentation that let data scientists move from notebook to prod quickly.
- Observability & quality: instrument data-drift, concept-drift, and performance metrics; set up alerting dashboards to ensure model health.
- Collaboration & review: work closely with data scientists on model experimentation, production-harden their code, review PRs, and evangelise MLOps best practices across the organisation.

Required Skills & Experience
- 3+ years as an ML/Data Engineer working on large-scale, data-intensive systems in cloud environments (AWS, GCP, or Azure), with proven experience partnering closely with ML teams to deploy models at scale.
- Proficient in Python plus one of Go/Java/Scala; strong software-engineering fundamentals (testing, design patterns, code review).
- Hands-on experience with Spark and familiarity with streaming frameworks (Kafka, Flink, Spark Structured Streaming).
- Hands-on experience with workflow orchestrators (Airflow, Dagster, Kubeflow Pipelines, etc.) and container platforms (Docker plus Kubernetes/EKS/ECS).
- Practical knowledge of ML algorithms such as XGBoost, LightGBM, and transformers, and of deep learning frameworks like PyTorch, is preferred.
- Experience with experiment-tracking and ML model-management tools (MLflow, SageMaker, Vertex AI, Weights & Biases) is a plus.

Benefits:
Want to know what it means to work for a company that genuinely cares about you? Check out just a few of the benefits we give our employees:
- We are globally local: with a diverse team across four continents and customers in over 60 countries, you get to work closely with a global perspective right from your own neighborhood.
- We value curiosity: we believe the next great idea might just be around the corner - perhaps it's that random thought you had ten minutes ago. We believe in creating an ecosystem that fosters a desire to seek out hard questions, and then figure out answers to them.
- Customer! Customer! Customer! Everything we do is driven towards enabling our customers' growth. This means no matter what you do, you will always be adding real value to a real business problem. It's a lot of responsibility, but also a lot of fun.

If you resonate with Chargebee, have a monstrous appetite for curiosity, and an insatiable urge to learn and build new things, we're waiting for you! We value people from all backgrounds and are dedicated to building a diverse and inclusive workplace. Come be a part of the Chargebee tribe!
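As a hedged sketch of the model-serving responsibility above - the endpoint, feature names, and model file are hypothetical, not Chargebee's actual service:

```python
# Hypothetical model-serving API: load a versioned artefact, expose /predict.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

class Features(BaseModel):
    mrr: float
    months_active: int

app = FastAPI()
model = joblib.load("models/churn-v3.joblib")  # hypothetical versioned artefact

@app.post("/predict")
def predict(f: Features):
    score = model.predict_proba([[f.mrr, f.months_active]])[0][1]
    return {"model_version": "v3", "churn_probability": float(score)}

# Run with: uvicorn serve:app --host 0.0.0.0 --port 8080
```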
Posted 6 days ago
8.0 - 18.0 years
3 - 14 Lacs
Bengaluru, Karnataka, India
On-site
Must-have skills: Scala, Java, Golang, Python, Hive, Apache Spark, Flink, Apache Kafka
Good-to-have skills: Kubernetes, AWS, Graph Database/Neo4j

Netskope (one of Uplers' clients) is looking for a Senior Staff Software Engineer, Data Platform who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Role Overview
The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization, and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products.

We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing, and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time, and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help lead the development, validation, publishing, and maintenance of logical and physical data models that support various OLTP and analytics environments.

What's in it for you
- You will be part of a growing team of renowned industry experts in the exciting space of data and cloud analytics
- Your contributions will have a major impact on our global customer base and across the industry through our market-leading products
- You will solve complex, interesting challenges and improve the depth and breadth of your technical and business skills

What you will be doing
- Designing and implementing planet-scale distributed data platforms, services, and frameworks, including solutions to address high-volume and complex data collection, processing, transformation, and analytical reporting
- Working with the application development team to implement data strategies, build data flows, and develop conceptual data models
- Understanding and translating business requirements into data models supporting long-term solutions
- Analyzing data-system integration challenges and proposing optimized solutions
- Researching to identify effective data designs, new tools, and methodologies for data analysis
- Providing guidance and expertise to the development community on effective implementation of data models and building high-throughput data access services
- Providing technical leadership in all phases of a project, from discovery and planning through implementation and delivery

Required skills and experience
- 15+ years of hands-on experience in architecture, design, or development of enterprise data solutions, applications, and integrations
- Ability to conceptualize and articulate ideas clearly and concisely
- Excellent algorithms, data structure, and coding skills, with programming experience in Java, Python, or Scala
- Proficiency in SQL
- Experience building products using one from each of the following families of distributed technologies (an ingestion sketch follows this posting):
  - Relational stores (e.g., Postgres, MySQL, or Oracle)
  - Columnar or NoSQL stores (e.g., BigQuery, ClickHouse, or Redis)
  - Distributed processing engines (e.g., Apache Spark, Apache Flink, or Celery)
  - Distributed queues (e.g., Apache Kafka, AWS Kinesis, or GCP Pub/Sub)
- Experience with standard software engineering methodologies (e.g., unit testing, code reviews, design documents)
- Experience working with GCP, Azure, AWS, or similar cloud platform technologies is a plus
- Excellent written and verbal communication skills
- Bonus points for contributions to the open source community

Education
BE/B.Tech or equivalent required; ME/M.Tech or equivalent strongly preferred
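One hedged sketch of the distributed-queue ingestion pattern named above, keyed so that records for the same tenant land in the same partition; the broker address, topic, and event shape are placeholders, not Netskope's systems:

```python
# Hypothetical ingestion sketch: publish keyed JSON events to a Kafka topic.
import json

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",           # wait for full acknowledgement for durability
    linger_ms=20,         # small batching window for throughput
)

for i in range(100):
    event = {"tenant": f"t{i % 4}", "bytes_scanned": i * 1024}  # placeholder event
    # Same key -> same partition, so per-tenant ordering is preserved.
    producer.send("ingest-events", key=event["tenant"], value=event)

producer.flush()
```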
Posted 6 days ago
8.0 - 10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us?
To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

Introduction to team
Our Corporate Functions are made up of teams that support Expedia Group, including Employee Communications, Finance, Traveler and Partner Service Platform, Legal, People Team, Inclusion and Diversity, and Global Social Impact and Sustainability. The Global Financial Technology organization is a strategic partner for all finance functions and a service delivery organization that ensures finance initiatives and dependencies are completed on time and within budget. As part of the Financial Planning and Reporting team, you will get to work on engineering the financial data that forms the backbone of our core FP&A processes for Expedia Group.

In This Role, You Will
- Lead a team of Data Engineers to design and build robust, scalable data solutions, datasets, and data platforms.
- Translate business requirements into efficient, well-supported data solutions that seamlessly integrate with the overall system architecture, enabling customers and stakeholders to effectively address customer needs through data-driven insights.
- Participate in the full development cycle, end-to-end, from design, implementation, and testing to documentation, delivery, and maintenance.
- Produce comprehensive, user-friendly architectures for Data Warehouse solutions that seamlessly integrate with the organization's broader data ecosystem.
- Design, create, manage, and utilize large-scale data sets.
- Lead the design and delivery of scalable ETL (Extract, Transform, Load) processes within the data lake platform.
- Own roadmap development: partner with product managers on the roadmap, capacity planning, and feature rollout.
- Drive business alignment: tie technical work to financial outcomes (cost savings, efficiency, accuracy).
- Technical expertise: be strong in multiple programming languages and data technologies; take responsibility for architectural decisions, system design, and domain ownership.
- Operational excellence: advocate for and implement best practices in testing, monitoring, alerting, data validation, and performance optimization (a validation sketch follows this posting).
- Strategic and business alignment: partner with product and business teams to align technology initiatives with business outcomes, including cost optimization and roadmap planning.
- Cross-team collaboration and influence: work across teams and senior leadership to drive process improvements, share domain knowledge, and influence technical direction.
- Mentorship and talent development: develop team culture, support professional growth, and build a diverse talent pipeline through coaching and structured feedback.
- Domain knowledge and innovation: apply deep industry knowledge to improve systems, recommend frameworks, and stay ahead of trends in data engineering and cloud technologies.
- Define team goals and align them with business outcomes.
- Act as a bridge between technical and non-technical stakeholders, ensuring clarity.
- Drive continuous improvement by anticipating bottlenecks and removing blockers.

Experience And Qualifications
- 8+ years (Bachelor's) or 6+ years (Master's) in data engineering.
- Strong in multiple technologies, data platforms, and cloud services.
- Proven experience in mentoring, project leadership, and team enablement.
- Skilled in data modeling, streaming, validation, and performance tuning.
- Excellent communication, documentation, and stakeholder management.
- Experience managing distributed teams and large-scale projects.
- Passionate about building diverse, high-performing teams and culture.
- Experience with resource allocation, capacity planning, and balancing FTE vs. contingent staff.
- Skilled at evaluating and addressing team skill gaps.
- Leadership and people management: 3+ years managing teams of 6-10+ data engineers, including hiring, performance management, and mentoring across multiple global locations.
- Project and program delivery: led 3+ multi-quarter data engineering projects, overseeing execution, cross-functional collaboration, and alignment with product roadmaps.
- Technology: must have experience with Apache Spark, Java/Scala, Hive/RDBMS, code deployments on AWS/Kubernetes, Git version control, and SQL. Good to have: Airflow, CI/CD, Apache Iceberg, Kubernetes/Docker, Apache Kafka/Flink.

Accommodation requests
If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others.

Expedia Group's family of brands includes: Brand Expedia, Hotels.com, Expedia Partner Solutions, Vrbo, trivago, Orbitz, Travelocity, Hotwire, Wotif, ebookers, CheapTickets, Expedia Group Media Solutions, Expedia Local Expert, CarRentals.com, and Expedia Cruises. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50

Employment opportunities and job offers at Expedia Group will always come from Expedia Group's Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you're confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs.

Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability, or age.
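One hedged sketch of the data-validation practice called out above - the table name, column, and thresholds are hypothetical, and real thresholds would come from historical baselines:

```python
# Hypothetical data-quality gate: fail the pipeline if a load looks wrong.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("fpna-dq-checks").getOrCreate()

df = spark.table("finance.daily_gross_bookings")  # hypothetical warehouse table

row_count = df.count()
null_ratio = df.filter(F.col("booking_amount").isNull()).count() / max(row_count, 1)

# Illustrative thresholds; breaching either aborts the downstream load.
assert row_count > 0, "empty load"
assert null_ratio < 0.01, f"booking_amount null ratio too high: {null_ratio:.2%}"
```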
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About Sibros Technologies

Who We Are
Sibros is accelerating the future of SDV excellence with its Deep Connected Platform, which orchestrates full vehicle software update management, vehicle analytics, and remote commands in one integrated system. Adaptable to any vehicle architecture, Sibros' platform meets stringent safety, security, and compliance standards, propelling OEMs to innovate new connected vehicle use cases across fleet management, predictive maintenance, data monetization, and beyond. Learn more at www.sibros.tech.

Our Mission
Our mission is to help our customers get the most value out of their connected devices. Follow us on LinkedIn | Youtube | Instagram

About The Role
Job Title: Software Engineer II
Experience: 3 - 6 years

At Sibros, we are building the foundational data infrastructure that powers the software-defined future of mobility. One of our most impactful products, Deep Logger, enables rich, scalable, and intelligent data collection from connected vehicles, unlocking insights that were previously inaccessible. Our platform ingests high-frequency telemetry, diagnostic signals, user behavior, and system health data from vehicles across the globe. We transform this into actionable intelligence through real-time analytics, geofence-driven alerting, and predictive modeling for use cases like trip intelligence, fault detection, battery health, and driver safety.

We're looking for a Software Engineer II to help scale the backend systems that support Deep Logger's data pipeline, from ingestion and streaming analytics to long-term storage and ML model integration. You'll play a key role in designing high-throughput, low-latency systems that operate reliably in production, even as data volumes scale to billions of events per day.

What You'll Do
Understand the limitations of various cloud-native/open-source streaming and data solutions, and leverage them to build data-driven applications such as trip processing, geofence alerting, and vehicle component health prediction.
Develop and optimize streaming data pipelines using Apache Beam, Flink, and Google Cloud Dataflow (a sketch of such a pipeline follows this posting).
Collaborate closely with firmware engineers, frontend engineers, and product owners to build highly scalable solutions that provide fleet insights.
Wear multiple hats in a fast-paced startup environment, adapting to new challenges and responsibilities.
Understand customer requirements and convert them into engineering ideas to build innovative real-time data applications.

What You Should Know
3+ years of experience in software engineering.
Excellent understanding of computer science fundamentals, data structures, and algorithms.
Strong track record in designing and implementing large-scale distributed systems.
Willingness to wear multiple hats and adapt to a fast-paced startup environment.
Proficiency in writing production-grade code in Go or Java.
Hands-on experience with Kubernetes, Lambda, and cloud-native services, preferably in Google Cloud or AWS environments.
Experience with Apache Beam/Flink for building and deploying large-scale data processing applications.
Passionate about the vision and mission of the company, and interested in solving challenging problems in the automotive IoT domain.

Preferred Qualifications
Experience designing and building systems for large-scale IoT deployments, including data collection, processing, and analysis.
Experience with streaming and batch processing models using open-source tools such as Apache Kafka, Flink, and Beam.
Expertise in building cloud-native solutions on Google Cloud, AWS, or Azure.
Experience working with large-scale time-series databases such as Apache Druid or ClickHouse.

What We Offer
Competitive compensation package with performance incentives.
A dynamic work environment with a flat hierarchy and the opportunity for rapid career advancement.
Collaborate with a dynamic team that's passionate about solving complex problems in the automotive IoT space.
Access to continuous learning and development opportunities.
Flexible working hours to accommodate different time zones.
Comprehensive benefits package including health insurance and wellness programs.
A culture that values innovation and promotes work-life balance.

Equal Opportunity Employer
We are an equal opportunity employer and value diversity at our company. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, or disability status.
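As referenced above, geofence alerting is one of the streaming applications this role would build. Purely as a minimal sketch, assuming hypothetical Pub/Sub topics, a JSON position message, and a single circular fence (none of this is Sibros code), the shape of such a pipeline in the Apache Beam Python SDK could be:

```python
# Minimal sketch of a streaming geofence check with Apache Beam (Python SDK).
# Topic names, the fence definition, and the message schema are hypothetical.
import json
import math

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

FENCE_LAT, FENCE_LON, FENCE_RADIUS_M = 12.9716, 77.5946, 500.0

def inside_fence(msg: bytes) -> bool:
    """Haversine distance from the fence centre, compared to the radius."""
    point = json.loads(msg)
    lat, lon = math.radians(point["lat"]), math.radians(point["lon"])
    clat, clon = math.radians(FENCE_LAT), math.radians(FENCE_LON)
    a = (math.sin((lat - clat) / 2) ** 2
         + math.cos(clat) * math.cos(lat) * math.sin((lon - clon) / 2) ** 2)
    return 6_371_000 * 2 * math.asin(math.sqrt(a)) <= FENCE_RADIUS_M

def run():
    opts = PipelineOptions(streaming=True)
    with beam.Pipeline(options=opts) as p:
        (p
         | "ReadPositions" >> beam.io.ReadFromPubSub(
               topic="projects/demo/topics/vehicle-positions")
         | "KeepInsideFence" >> beam.Filter(inside_fence)
         | "PublishAlerts" >> beam.io.WriteToPubSub(
               topic="projects/demo/topics/geofence-alerts"))

if __name__ == "__main__":
    run()
```

The same pipeline runs unchanged on Google Cloud Dataflow by submitting it with the DataflowRunner option; only the runner configuration changes, not the fence logic.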
Posted 1 week ago
6.0 - 8.0 years
0 Lacs
india
On-site
Job Description
As a Senior Software Engineer on our team, you will work with large-scale manufacturing data coming from our globally distributed plants. You will focus on building efficient, scalable, data-driven applications that, among other use cases, connect IoT devices, pre-process, standardize, or enrich data, feed ML models, or generate alerts for shopfloor operators (a streaming sketch follows this posting). The data sets produced by these applications, whether data streams or data at rest, need to be highly available, reliable, consistent, and quality-assured so that they can serve as input to a wide range of other use cases and downstream applications.

We run these applications on a hybrid data platform: Azure Databricks, plus a Kubernetes-based edge data platform in our plants. The platform is currently in its ramp-up phase, so apart from building applications, you will also contribute to scaling the platform, including topics such as automation and observability. Finally, you are expected to interact with customers and other technical teams, e.g. for requirements clarification and definition of data models.

Qualifications
Bachelor's degree in computer science, computer engineering, or a relevant technical field, or equivalent; Master's degree preferred.

Additional Information
Skills
6+ years of experience in professional software engineering, with a significant portion focused on building backend and/or data-intensive applications
Proficiency in Scala or another JVM-based language (and the willingness to pick up Scala quickly)
Deep understanding of distributed systems for data storage and processing (e.g. Kafka ecosystem, Spark, Flink, HDFS, S3); experience with Azure Databricks is a plus
Prior experience with stream processing libraries such as Kafka Streams, fs2, zio-streams, or Akka/Pekko Streams is a plus
Hands-on experience with Docker and Kubernetes for application deployment, scaling, and management
Excellent software engineering skills (i.e., data structures & algorithms, software design) and robust knowledge of object-oriented and functional programming principles
Experience with CI/CD tools such as Jenkins or GitHub Actions
Experience with RDBMS (e.g. Postgres)
Excellent problem-solving skills and a pragmatic approach to engineering
Strong communication and collaboration skills, with the ability to articulate complex technical concepts to diverse audiences
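Since the posting centers on standardizing and enriching plant telemetry, here is a minimal sketch of that pattern in PySpark Structured Streaming (the role's own stack is JVM/Scala on Databricks; the topic, schema, threshold, and paths below are invented for illustration):

```python
# Minimal sketch of a standardize-and-enrich streaming job (PySpark Structured
# Streaming). Broker, topic, schema, and paths are placeholders, not plant systems.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("shopfloor-enrich").getOrCreate()

schema = (StructType()
          .add("machine_id", StringType())
          .add("temperature_c", DoubleType())
          .add("ts", TimestampType()))

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "machine-telemetry")
       .load())

enriched = (raw
            .select(F.from_json(F.col("value").cast("string"), schema).alias("m"))
            .select("m.*")
            # Standardize units and flag readings that should alert an operator.
            .withColumn("temperature_f", F.col("temperature_c") * 9 / 5 + 32)
            .withColumn("overheat", F.col("temperature_c") > 90.0))

query = (enriched.writeStream.format("delta")
         .option("checkpointLocation", "/chk/machine-telemetry")
         .outputMode("append")
         .start("/tables/machine_telemetry_enriched"))
query.awaitTermination()
```

The checkpoint location plus Delta's transaction log is what gives the sink its exactly-once behavior, which is one way the "quality-assured" guarantee the posting mentions gets enforced in practice.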
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
chennai, tamil nadu, india
On-site
Job Position: Java Developer with Kafka
Job Type: Full-Time
Location: Pune/Chennai/Kolkata, IN
Experience: 8-12 Years

Key Responsibilities
Design and develop backend services using Java, Spring Boot, and microservices architecture.
Implement real-time event streaming using Apache Kafka.
Build, optimize, and maintain Kafka producers, consumers, and streaming pipelines.
Ensure system scalability, reliability, and performance.
Collaborate with frontend, DevOps, and data engineering teams for end-to-end solutions.
Develop and consume RESTful APIs for system integrations.
Perform unit testing, code reviews, and CI/CD pipeline integration.
Monitor, troubleshoot, and fine-tune Kafka clusters and backend services.

Required Skills
Strong knowledge of Java 8+ and Spring Boot for backend development.
Hands-on experience with Apache Kafka (topics, partitions, consumer groups, schema registry, Kafka Streams/Connect); see the sketch after this posting.
Expertise in building and scaling microservices.
Good understanding of multithreading, concurrency, and performance tuning.
Experience working with REST APIs and messaging systems.
Proficiency with Git and CI/CD pipelines (Jenkins/GitLab CI).
Experience with SQL and NoSQL databases such as MySQL, PostgreSQL, Cassandra, and MongoDB.
Exposure to cloud platforms (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
Knowledge of monitoring and logging tools (Prometheus, Grafana, ELK stack).
Familiarity with Big Data frameworks (Spark, Flink).
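The consumer-group mechanics called out above (topics, partitions, consumer groups) are easiest to see in a few lines of code. The role itself is Java/Spring Boot; purely for brevity this sketch uses Python with confluent-kafka, and the broker address, topic, and group id are placeholders:

```python
# Minimal sketch of a Kafka consumer in a consumer group (confluent-kafka).
# Broker address, topic, and group id are placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "orders-service",     # consumers sharing this id split partitions
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,      # commit only after successful processing
})
consumer.subscribe(["orders"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # Process the record, then commit the offset so a crash never drops data.
        print(f"partition={msg.partition()} offset={msg.offset()} value={msg.value()!r}")
        consumer.commit(message=msg, asynchronous=False)
finally:
    consumer.close()
```

Every consumer started with the same group.id joins the same group, and Kafka rebalances the topic's partitions across them; that rebalancing is what lets these backend services scale horizontally.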
Posted 1 week ago
5.0 - 8.0 years
2 - 7 Lacs
hyderabad, chennai, bengaluru
Work from Office
Job Title: Developer
Experience: 4-6 Years
Work Location: Chennai, TN || Bangalore, KA || Hyderabad, TS
Skill Required: Digital: Big Data and Hadoop Ecosystems; Digital: PySpark

Job Description:
Work as a developer in Big Data, Hadoop, or Data Warehousing tools and cloud computing.
Work on Hadoop, Hive SQL, Spark, and Big Data ecosystem tools.
Experience working with teams in a complex organization involving multiple reporting lines.
Strong functional and technical knowledge to deliver what is required, and well acquainted with banking terminologies.
Strong DevOps and Agile Development Framework knowledge.
Create Scala/Spark jobs for data transformation and aggregation (see the sketch after this posting).
Experience with stream-processing systems like Storm, Spark Streaming, and Flink.

Essential Skills:
Working experience of Hadoop, Hive SQL, Spark, and Big Data ecosystem tools.
Able to tweak queries and work on performance enhancement.
Responsible for delivering code, setting up the environment and connectivity, and deploying the code in production after testing.
Occasionally responsible as a primary contact and/or driver for small to medium-sized projects.
Preferably good technical knowledge of cloud computing and AWS or Azure cloud services.
Strong conceptual and creative problem-solving skills, ability to work with considerable ambiguity, and ability to learn new and complex concepts quickly.
Solid understanding of object-oriented programming and HDFS concepts.
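The posting lists PySpark as a required skill, so a minimal sketch of the transformation-and-aggregation work it describes might look like the following (the Hive database, table, and column names are hypothetical):

```python
# Minimal sketch of a Spark transformation-and-aggregation job of the kind the
# posting describes; the Hive table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("txn-daily-agg")
         .enableHiveSupport()
         .getOrCreate())

txns = spark.table("bank.transactions")

daily = (txns
         .filter(F.col("status") == "SETTLED")
         .groupBy("account_id", F.to_date("txn_ts").alias("txn_date"))
         .agg(F.sum("amount").alias("total_amount"),
              F.count("*").alias("txn_count")))

# Partitioned write keeps downstream Hive queries pruned to the dates they need.
(daily.write.mode("overwrite")
      .partitionBy("txn_date")
      .saveAsTable("bank.daily_account_summary"))
```

Partitioning on the aggregation date is the usual lever for the "tweak queries and work on performance enhancement" duty: date-filtered Hive queries then scan only the matching partitions.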
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Zeta is a Next-Gen Banking Tech company that empowers banks and fintechs to launch banking products for the future. It was co-founded by Ramki Gaddipati in 2015. Our flagship processing platform, Zeta Tachyon, is the industry's first modern, cloud-native, and fully API-enabled stack that brings together issuance, processing, lending, core banking, fraud & risk, and many more capabilities as a single-vendor stack. 20M+ cards have been issued on our platform globally. Zeta is actively working with the largest banks and fintechs in multiple global markets, transforming customer experience for multi-million card portfolios. Zeta has over 1,700 employees, with over 70% of roles in R&D, across locations in the US, EMEA, and Asia. We raised $280 million at a $1.5 billion valuation from Softbank, Mastercard, and other investors in 2021.

About the Role
As a Data Engineer II, you will play a crucial role in developing, optimizing, and managing several large data lakes and data warehouses, comprising data from multiple disparate sources.

Responsibilities:
Data Pipeline Operations: Design, build, and maintain robust and scalable data pipelines to ingest, transform, and deliver structured and unstructured data from multiple sources. Ensure high-quality data by implementing monitoring, validation, and error-handling processes (a minimal orchestration sketch follows this posting).
Platform Engineering & Optimization: Create and update data models to represent the structure of the data. Design, implement, and maintain database systems. Optimize database performance and ensure data integrity. Troubleshoot and resolve database issues. Build and manage data warehouses for storage and analysis of large datasets. Collaborate on data modeling, schema design, and performance optimization for large-scale datasets.
Data Quality and Governance: Implement and enforce data quality standards. Contribute to data governance processes and policies.
Scripting and Programming: Develop and automate data processes through programming languages (e.g., Python, Java, SQL). Implement data validation scripts and error-handling mechanisms.
Version Control: Use version control systems (e.g., Git) to manage codebase changes for data pipelines.
Monitoring and Optimization: Implement monitoring solutions to track the performance and health of data systems. Optimize data processes for efficiency and scalability.
Cloud Platforms: Work with cloud platforms (e.g., AWS, Azure, GCP) to deploy and manage data infrastructure. Utilize cloud-based services for data storage, processing, and analytics.
Security: Implement and adhere to data security best practices. Ensure compliance with data protection regulations.
Troubleshooting and Support: Provide support for data-related issues and participate in root cause analysis.

Skills:
Expertise in data modeling, database design, and data warehousing.
Proficiency in SQL and programming languages such as Python, Java, or Scala.
Experience with big data technologies such as Hadoop, Spark, and Kafka.
Cloud-native architecture expertise (AWS, GCP, or Azure), including containerization (Docker, Kubernetes) and infrastructure-as-code (Terraform, CloudFormation).

Experience and Qualifications:
Bachelor's/Master's degree in engineering (computer science, information systems) with 3-5 years of experience in data engineering, BI engineering, and data warehouse development.
Excellent command of SQL and one or more programming languages, preferably Python or Java.
Knowledge of Flink, Airflow, Apache Spark, DBT, and Athena/Presto.
Experience working with Kubernetes.
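Since the role combines pipeline building with monitoring and validation, a minimal orchestration sketch, assuming Airflow 2.x and entirely placeholder task logic, could wire an ingestion step to a data-quality gate like this:

```python
# Minimal sketch of an Airflow DAG wiring ingestion to a data-quality gate.
# All task logic here is a placeholder, not a real pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest(**context):
    print("pull source files into the lake")   # stand-in for real ingestion

def validate(**context):
    row_count = 1_000                          # stand-in for a measured metric
    if row_count == 0:
        raise ValueError("no rows ingested - failing the run for alerting")

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)
    ingest_task >> validate_task   # validation gates anything downstream
```

A failed validate task fails the run, which lets standard Airflow failure alerting catch a bad load before any downstream consumers read it.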
Posted 2 weeks ago
3.0 - 4.0 years
10 - 15 Lacs
hyderabad, chennai, bengaluru
Work from Office
Role Summary
We are seeking a proactive and detail-oriented Production Operations Engineer to join our Data Platforms team as an Alternative Workforce (AWF) member based in the India Development Center (IDC). This role plays a critical part in maintaining the stability and reliability of our large-scale "Experimentation & Feature Flag Management" platform by providing hands-on operational support, system monitoring, and first-line incident triage. The ideal candidate is comfortable working in containerized environments, has solid Linux fundamentals, and demonstrates strong discipline in executing standard operating procedures and documenting outcomes. This position requires close collaboration with Shanghai- and San Jose-based developers and product teams in a cross-timezone environment.

Key Responsibilities
Monitor the production web portal and core services; respond promptly to alerts and anomalies.
Troubleshoot operational issues in collaboration with the development team.
Investigate incidents using logs, metrics, and observability tools (e.g., Grafana, Kibana).
Perform recovery actions such as restarting pods, rerunning jobs, or applying known mitigations from the SOP handbook (a scripted example follows this posting).
Operate in Kubernetes environments to inspect, debug, and manage components.
Support deployment activities through post-release validations and basic checks.
Validate data quality and flag anomalies to the relevant engineering teams.
Maintain clear documentation of incidents, actions taken, and resolution outcomes.
Communicate effectively with remote teams for operational handoffs and follow-ups.

Required Qualifications
3+ years of experience in production operations, system support, or DevOps roles.
Solid Linux skills (e.g., file system navigation, log analysis, process/network troubleshooting).
Experience in Java (core and Spring Boot), React, and Node.js.
Hands-on experience with Kubernetes and Docker in production environments.
Familiarity with observability tools (e.g., Grafana, Kibana, Prometheus).
English proficiency for reading, writing, and asynchronous communication.
Strong execution discipline and ability to follow structured operational procedures.

Preferred Qualifications
Scripting ability (Python or Shell) for log parsing and automation.
Basic SQL skills for data verification or debugging.
Experience with large-scale backend services serving high QPS.
Experience with the OpenFeature SDK or A/B testing.
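The "restart pods" mitigation above is routine enough to script. As a minimal sketch with the official Kubernetes Python client (the namespace and label selector are invented runbook values, not this platform's real ones):

```python
# Minimal sketch of the "restart a pod" mitigation using the official Kubernetes
# Python client; namespace and label selector are hypothetical runbook values.
from kubernetes import client, config

config.load_kube_config()          # inside a cluster: config.load_incluster_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod("experimentation",
                              label_selector="app=feature-flag-api")
for pod in pods.items:
    # Deleting the pod lets its Deployment/ReplicaSet schedule a fresh replacement.
    print(f"restarting {pod.metadata.name}")
    v1.delete_namespaced_pod(name=pod.metadata.name, namespace="experimentation")
```

This works because the pods are owned by a controller; deleting them is safe only when an SOP confirms the workload tolerates a rolling restart, which is exactly the judgment this role exercises.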
Posted 2 weeks ago
10.0 - 12.0 years
0 Lacs
bengaluru, karnataka, india
On-site
At EVERSANA, we are proud to be certified as a Great Place to Work across the globe. We're fueled by our vision to create a healthier world. Our global team of more than 7,000 employees is committed to creating and delivering next-generation commercialization services to the life sciences industry. We are grounded in our cultural beliefs and serve more than 650 clients ranging from innovative biotech start-ups to established pharmaceutical companies. Our products, services and solutions help bring innovative therapies to market and support the patients who depend on them. Our jobs, skills and talents are unique, but together we make an impact every day. Join us!

Across our growing organization, we embrace diversity in backgrounds and experiences. Improving patient lives around the world is a priority, and we need people from all backgrounds and swaths of life to help build the future of the healthcare and life sciences industry. We believe our people make all the difference in cultivating an inclusive culture that embraces our cultural beliefs. We are deliberate and self-reflective about the kind of team and culture we are building. We look for team members that are not only strong in their own aptitudes but also care deeply about EVERSANA, our people, clients and, most importantly, the patients we serve. We are EVERSANA.

Position Job Description
Reporting to the Senior Project Manager of Product & Engineering, the Solution Architect will join a team that uses cutting-edge technology to solve challenges across both front-end and back-end architecture and deliver exceptional experiences for clients and users. We're seeking an experienced solution architect ready to work with new technologies and architecture in a forward-thinking organization constantly pushing boundaries. The ideal candidate has experience building products across the stack and a firm understanding of web frameworks, APIs, databases, and multiple back-end languages. The person will identify, design, and deploy advanced cloud-based solutions with optimal architectural choices across broad use cases involving performance, compute, security, and privacy, as well as advanced scenarios involving serverless, analytics, and AI/ML. He/she will also lead initiatives in platform modernization, orchestration, data warehouse optimization, regulatory compliance, and real-time streaming to meet evolving business needs.

Responsibilities
Accountable for architectural design and documentation of all releases or enhancements to the ACTICS product and platform.
Co-create the Product Strategy and Roadmap with the Product Manager, consistent with the overall strategic needs of the business division.
Responsible for technical accuracy and deployment of the software application.
Ensure architectural compliance with regulatory frameworks (HIPAA, GDPR, HITECH, DPDP Act, etc.), internal policies, and security standards.
Provide insights into an enterprise-level architecture strategy that meets both current needs and offers the flexibility to scale and adapt to future business requirements.
Architect and design scalable, secure, cost-effective cloud solutions to address business needs.
Assess the current state of solutions/platforms, define future-state needs, identify gaps, and recommend new technology solutions and strategic business execution improvements.
Facilitate stakeholder sessions with personnel across all disciplines to identify and define requirements, assess business processes, and evaluate technology solutions.
Conduct solution architecture research, PoCs, PoVs, and feasibility assessments, and facilitate solution application selections.
Formulate business and logical data and solution architectures comprised of software, network, hardware, and integration designs.
Conduct impact assessments and mitigation plans.
Create and own the Solution Architecture documentation.
Demonstrate a deep understanding of the customer's business model, industry trends, and technical landscape.
Work with diverse departments, extracting key insights to pinpoint areas for innovation.
Work with business stakeholders, data science, development teams, and product managers to enable and guide design and development, and to ideate software solutions that promote platform thinking, reusability, and interoperability.
Champion the deployment of smart, creative, low-code solutions, leading the charge toward digital excellence.
Design new features and infrastructure to support rapidly emerging business, product, and project requirements.
Design client-side and server-side architecture.
Ensure application performance, uptime, and scale, and maintain high standards for code quality and application design.
Participate in all aspects of agile software development, including design, implementation, and deployment.

Requirements
7+ years of experience architecting and building large-scale software applications using big data technologies, scientific computing, and AI/ML solutions.
5+ years of design and development of SaaS platforms.
5+ years of experience as a software engineer and/or full-stack developer.
Experience in solutions architecture, with a portfolio of technological solutions that have transformed businesses.
Adept at sculpting robust applications on low-code platforms.
A communicator of the highest order, fluent in the language of technology and teamwork.
Healthcare (pharma, hospitals, payers) experience.
Extensive experience in MS .NET, Snowflake, the JavaScript framework Vue.js, Azure, AWS, Google Cloud, Python, R, and PostgreSQL.
Deep experience with FHIR, OMOP, and HL7; clinical and operational healthcare data standards.
Strong background in data warehouse design and optimization: Snowflake, Redshift, BigQuery.
Expertise in orchestration frameworks: Airflow, Azure Data Factory, Databricks Workflows.
Real-time streaming experience: Kafka, Kinesis, Spark Streaming, Flink.
Experience working with a version control system, such as Git.
Experience with CI/CD systems, such as Azure DevOps and GitLab CI.
Strong understanding of web development/performance issues and mitigation approaches.
Attention to detail and ability to work simultaneously on multiple priorities.
Ability to work independently and as a team player.
Ability to understand others and clearly express thoughts.
Ability to manage multiple concurrent objectives, projects, or activities.
Strong communication, presentation, and interpersonal skills.

Qualifications
Bachelor's degree in Engineering, Technology, Computer Science, or Science; Master's desired.
10-12 years of relevant industry experience (healthcare, pharmaceutical consulting, management consulting, hospital systems, payers, enterprise-level data-analytical solutions).

Additional Information
OUR CULTURAL BELIEFS:
Patient Minded: I act with the patient's best interest in mind.
Client Delight: I own every client experience and its impact on results.
Take Action: I am empowered and empower others to act now.
Grow Talent: I own my development and invest in the development of others.
Win Together: I passionately connect with anyone, anywhere, anytime to achieve results.
Communication Matters: I speak up to create transparent, thoughtful and timely dialogue.
Embrace Diversity: I create an environment of awareness and respect.
Always Innovate: I am bold and creative in everything I do.

Our team is aware of recent fraudulent job offers in the market, misrepresenting EVERSANA. Recruitment fraud is a sophisticated scam commonly perpetrated through online services using fake websites, unsolicited e-mails, or even text messages claiming to be a legitimate company. Some of these scams request personal information and even payment for training or job application fees. Please know EVERSANA would never require personal information nor payment of any kind during the employment process. We respect the personal rights of all candidates looking to explore careers at EVERSANA.

From EVERSANA's inception, Diversity, Equity & Inclusion have always been key to our success. We are an Equal Opportunity Employer, and our employees are people with different strengths, experiences, and backgrounds who share a passion for improving the lives of patients and leading innovation within the healthcare industry. Diversity not only includes race and gender identity, but also age, disability status, veteran status, sexual orientation, religion, and many other parts of one's identity. All of our employees' points of view are key to our success, and inclusion is everyone's responsibility.

Follow us on LinkedIn | Twitter
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
bengaluru, karnataka, india
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed: we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day, and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About the Role:
The charter of the ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML engineering, and insights activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets including threat events collected via telemetry data, associated metadata, IT asset information, contextual information about threat exposure based on additional processing, and more. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyper-scale Data Lakehouse, built and owned by the Data Platform team. The ingestion mechanisms include both batch and near-real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more.

As an engineer on this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines that will be used for data preparation, cataloging, feature engineering, model training, and model serving that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets. Candidates must be comfortable visiting the office once a week.

What You'll Do:
Help design, build, and facilitate adoption of a modern ML platform.
Modularize complex ML code into standardized and repeatable components.
Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring (a minimal tracking sketch follows this posting).
Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines.
Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines.
Review code changes from data scientists and champion software development best practices.
Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment.

What You'll Need:
B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 10+ years of related experience; or M.S. with 8+ years of experience; or Ph.D. with 6+ years of experience.
3+ years of experience developing and deploying machine learning solutions to production.
Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised/unsupervised approaches: how, why, and when labeled data is created and used.
3+ years of experience with ML platform tools like Jupyter Notebooks, NVidia Workbench, MLFlow, Ray, Vertex AI, etc.
Experience building data platform products or features with (one of) Apache Spark, Flink, or comparable tools in GCP. Experience with Iceberg is highly desirable.
Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.).
Production experience with infrastructure-as-code tools such as Terraform and FluxCD.
Expert-level experience with Python; Java/Scala exposure is recommended.
Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal CrowdStrike tools.
Expert-level experience with CI/CD frameworks such as GitHub Actions.
Expert-level experience with containerization frameworks.
Strong analytical and problem-solving skills, capable of working in a dynamic environment.
Exceptional interpersonal and communication skills; work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes.

Critical Skills Needed for Role:
Distributed Systems Knowledge
Data Platform Experience
Machine Learning concepts

Experience with the Following is Desirable:
Go
Iceberg
Pinot or other time-series/OLAP-style databases
Jenkins
Parquet
Protocol Buffers/gRPC

#LI-DP1 #LI-VJ1

Benefits of Working at CrowdStrike:
Remote-friendly and flexible work culture
Market leader in compensation and equity awards
Comprehensive physical and mental wellness programs
Competitive vacation and holidays for recharge
Paid parental and adoption leaves
Professional development opportunities for all employees regardless of level or role
Employee Networks, geographic neighborhood groups, and volunteer opportunities to build connections
Vibrant office culture with world-class amenities
Great Place to Work Certified across the globe

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs--on valid job requirements.
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at for further assistance.
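As referenced in "What You'll Do", a repeatable experiment-tracking pattern is the kind of thing an ML platform standardizes. A minimal sketch with MLflow (one of the tools the posting names), using toy data and a hypothetical experiment name:

```python
# Minimal sketch of a repeatable experiment-tracking pattern with MLflow.
# Experiment name, parameters, and the toy dataset are all illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

mlflow.set_experiment("threat-triage-poc")
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    mlflow.log_params(params)
    model = RandomForestClassifier(**params, random_state=7).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_metric("accuracy", acc)          # comparable across every run
    mlflow.sklearn.log_model(model, "model")    # logged artifact for later serving
```

Standardizing this run-log-register loop is what makes thousands of experiments comparable and lets a serving layer pick up any logged model without bespoke glue code.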
Posted 2 weeks ago
0.0 years
0 Lacs
noida, uttar pradesh, india
On-site
Why Join Us
Are you inspired to grow your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It's happening right here at Iris Software.

About Iris Software
At Iris Software, our vision is to be our client's most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential. With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.

Working at Iris
Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about Being Your Best - as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We're a place where everyone can discover and be their best version.

Job Description
Job Summary: We are seeking a highly skilled and experienced Lead Data Engineer to join our data engineering team. The ideal candidate will have a strong background in designing and deploying scalable data pipelines using Azure technologies, Spark, Flink, and modern data lakehouse architectures. This role demands hands-on technical expertise, leadership in managing offshore teams, and a strategic mindset to drive data-driven decision-making across financial and regulatory domains.

Key Responsibilities:
Design, develop, and deploy scalable batch and streaming data pipelines using PySpark, Flink, Scala, SQL, and Redis.
Lead migration of complex on-premise workflows to the Azure cloud ecosystem (Databricks, ADLS, Azure Data Factory), optimizing infrastructure and deployment processes.
Implement performance-tuning strategies to reduce job runtimes and enhance data reliability, including optimization of Unity Catalog tables (a minimal sketch follows this posting).
Collaborate with product stakeholders to deliver high-priority data features and ensure alignment with business goals.
Manage and mentor an 8-member offshore team, fostering best practices in data engineering and agile development.
Conduct internal training sessions on modern data architecture, cloud-native deployments, and data engineering best practices.

Required Skills & Technologies:
Big Data Tools: PySpark, Spark, Flink, Hive, Hadoop, Delta Lake, Streaming, ETL
Cloud Platforms: Azure (ADF, Databricks, ADLS, Event Hub), AWS (S3)
Orchestration & DevOps: Airflow, Docker, Kubernetes, GitHub Actions, Jenkins
Programming Languages: Python, Scala, SQL, Shell
Other Tools: Redis, Solace, MQ, Kafka, Grafana, Postman
Soft Skills: Team Leadership, Agile Methodologies, Stakeholder Management, Technical Training

Certifications (Good to have):
Databricks Certified: Data Engineer Associate, Lakehouse Fundamentals
Microsoft Certified: Azure Fundamentals (AZ-900), Azure Data Fundamentals (DP-900)

Preferred Qualifications:
Bachelor's degree in Engineering (E.C.E.) with strong academic performance.
Proven experience in financial data pipelines, regulatory reporting, and risk analytics.

Mandatory Competencies
Big Data - Big Data - Pyspark
Big Data - Big Data - SPARK
Beh - Communication
Big Data - Big Data - HIVE
Programming Language - Scala - Scala
Big Data - Big Data - Hadoop
DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - Containerization (Docker, Kubernetes)
DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - Jenkins
Cloud - AWS - AWS S3, S3 Glacier, AWS EBS
DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - GitLab, Github, Bitbucket

Perks and Benefits for Irisians
At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
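The Unity Catalog optimization mentioned in the responsibilities above typically comes down to routine Delta maintenance commands. A minimal sketch, assuming a Databricks session where `spark` is predefined and an entirely hypothetical catalog.schema.table:

```python
# Minimal sketch of Unity Catalog Delta table tuning on Databricks.
# Assumes a Databricks runtime where `spark` already exists; the table name
# and clustering column are hypothetical.
spark.sql("""
    OPTIMIZE finance.reg_reports.trades
    ZORDER BY (trade_date)
""")

# Drop files older than the retention window so storage and file listing stay lean.
spark.sql("VACUUM finance.reg_reports.trades RETAIN 168 HOURS")
```

OPTIMIZE compacts small files and ZORDER co-locates rows by the chosen column, so queries filtering on trade_date read far fewer files; the VACUUM retention window must stay long enough for any readers still using older table snapshots.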
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
thane, maharashtra, india
On-site
Who is Forcepoint?
Forcepoint simplifies security for global businesses and governments. Forcepoint's all-in-one, truly cloud-native platform makes it easy to adopt Zero Trust and prevent the theft or loss of sensitive data and intellectual property no matter where people are working. 20+ years in business. 2.7k employees. 150 countries. 11k+ customers. 300+ patents. If our mission excites you, you're in the right place; we want you to bring your own energy to help us create a safer world. All we're missing is you!

Senior Software Engineer - Dashboarding, Reporting & Data Analytics
Location: Mumbai (Preferred)
Experience: 8-10 years

Job Summary:
We are looking for a Senior Software Engineer with expertise in dashboarding, reporting applications, and data analytics. The ideal candidate should have strong programming skills in Golang and Java, experience working with AWS services like Kinesis, Redshift, and Elasticsearch, and the ability to build scalable, high-performance data pipelines and visualization tools. This role is critical in delivering data-driven insights that help businesses make informed decisions.

Key Responsibilities:
Design, develop, and maintain dashboards and reporting applications for real-time and batch data visualization.
Build data pipelines and analytics solutions leveraging services like Kafka, Redshift, Elasticsearch, Glue, and S3.
Work with data engineering teams to integrate structured and unstructured data for meaningful insights.
Optimize data processing workflows for efficiency and scalability.
Develop APIs and backend services using Golang and Java to support reporting and analytics applications.
Collaborate with business stakeholders to gather requirements and deliver customized reports and visualizations.
Implement data security, governance, and compliance best practices.
Conduct code reviews, troubleshooting, and performance tuning.
Stay updated with emerging data analytics and cloud technologies to drive innovation.

Required Qualifications:
8-10 years of experience in software development, data analytics, and dashboarding/reporting applications.
Proficiency in Golang and Java for backend development.
Strong expertise in AWS data services (Kinesis, Redshift, Elasticsearch, S3, Glue).
Experience with data visualization tools (Grafana, Tableau, Looker, or equivalent).
Proficiency in SQL and NoSQL databases, with a solid understanding of data modeling and performance optimization.
Experience in building and managing scalable data pipelines.
Experience in big data processing frameworks (Spark, Flink).
Strong problem-solving skills with a focus on efficiency and performance.
Excellent communication and collaboration skills.

Preferred Qualifications:
Experience with real-time data streaming and event-driven architectures.
Experience in CI/CD pipelines and DevOps practices.
Posted 2 weeks ago
6.0 - 12.0 years
0 Lacs
bengaluru, karnataka, india
On-site
About the Role
We are looking for a passionate and experienced Software Engineer (E5/E6 level) to join our Enterprise Search team, which is at the core of redefining how users discover and interact with information across Whatfix's digital adoption platform. This is a unique opportunity to solve deep information retrieval and search relevance challenges using scalable infrastructure, cutting-edge NLP, and Generative AI. As an engineer at this level, you'll be expected to operate with strong ownership, lead cross-team technical initiatives, and influence design choices that directly impact user experience and business outcomes.

What You'll Do
As a senior engineer, you will:
Build a 0-to-1 Enterprise Search product with a strong focus on scalability, performance, and relevance.
Lead proof-of-concept efforts to validate ideas quickly and align with business goals.
Architect and implement robust, maintainable, and scalable systems for indexing, querying, and ranking.
Develop data pipelines, implement automation for reliability, and ensure strong observability and monitoring.
Work closely with Product Managers and Designers to translate user needs into data-driven, intuitive search experiences.
Guide and support junior engineers through code reviews, technical direction, and best practices.
Collaborate with cross-functional teams (data, platform, infra) to deliver cohesive and high-impact solutions.

What We're Looking For
Must-Have Skills:
Familiarity with LLMs, RAG pipelines, or knowledge graph integrations.
Deep expertise in information retrieval and search engines (Lucene, Elasticsearch, Solr).
Experience with vector search, embeddings, and/or neural ranking models (e.g., BERT, Sentence Transformers); a minimal retrieval sketch follows this posting.
Strong programming skills in Java, Python, or Go.
Familiarity with scalable data processing frameworks (e.g., Spark, Kafka, Flink).
Good understanding of system design, APIs, caching, and performance tuning.

Nice-to-Have:
Experience with enterprise content connectors (SharePoint, Confluence, Jira, etc.).
Experience working in a SaaS, B2B, or product-first environment.

Qualifications
6-10+ years of experience building backend systems, infrastructure, or AI platforms at scale.
Proven ability to own and deliver complex features independently, collaborate across teams, and mentor peers in a fast-paced environment.
Demonstrated experience leading initiatives with significant technical and organizational impact - from setting direction to aligning stakeholders and driving execution.
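The vector-search requirement above is concrete enough for a tiny worked example. A minimal sketch with Sentence Transformers (named in the posting); the model choice and three-document corpus are toy assumptions, not Whatfix's index:

```python
# Minimal sketch of embedding-based retrieval with Sentence Transformers.
# Model name and the tiny corpus are illustrative assumptions only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "How to reset your password in the admin console",
    "Quarterly expense report submission guide",
    "Configuring SSO with your identity provider",
]
doc_vecs = model.encode(docs, convert_to_tensor=True, normalize_embeddings=True)

query_vec = model.encode("enable single sign-on", convert_to_tensor=True,
                         normalize_embeddings=True)
# Cosine similarity ranks documents by semantic relevance, not keyword overlap:
# the SSO doc wins here even though it shares no words with the query.
hits = util.semantic_search(query_vec, doc_vecs, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {docs[hit['corpus_id']]}")
```

At enterprise scale the brute-force similarity scan above is replaced by an approximate nearest-neighbor index (e.g., HNSW in Elasticsearch or Lucene), but the embed-then-rank shape stays the same.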
Posted 2 weeks ago
7.0 - 9.0 years
0 Lacs
india
On-site
About Fam (previously FamPay)
Fam is India's first payments app for everyone above 11. FamApp helps make online and offline payments through UPI and FamCard. We are on a mission to raise a new, financially aware generation, and drive 250 million+ of India's youngest users to kickstart their financial journey super early in their life.

About this Role:
We are looking for a high-impact Technical Lead Manager to drive the development of scalable, high-performance systems for our fintech platform. You will play a crucial role in architecting, building, and optimizing distributed systems, data pipelines, and query performance. If you love solving complex engineering challenges and want to shape the future of financial technology, this role is for you.

On the Job
Lead and mentor a team of engineers, ensuring best practices in software development, high-level system design, and low-level implementation design.
Design and implement scalable, resilient, and high-performance distributed systems.
Own technical decisions related to high-level system design, infrastructure, and performance optimizations.
Build and optimize large-scale data pipelines and query performance for efficient data processing.
Work closely with product managers and stakeholders to translate business requirements into robust technical solutions.
Ensure best practices in code quality, testing, CI/CD, and deployment strategies.
Continuously improve system reliability, security, and scalability to handle fintech-grade workloads.
Own the functional reliability and uptime of 24x7 live services, ensuring minimal downtime and quick incident resolution.
Champion both functional and non-functional quality attributes, such as performance, availability, scalability, and maintainability.

Must-haves (Min. qualifications)
7+ years of hands-on software development experience with a track record of building scalable and distributed systems.
Minimum 1 year of experience managing/mentoring a team.
Should have worked on consumer-facing systems in a B2C environment.
Expertise in high-level system design and low-level implementation design, with experience handling high-scale, low-latency applications.
Strong coding skills in languages like Java, Go, Python, or Kotlin.
Experience with databases and query optimization, including PostgreSQL, MySQL, or NoSQL (DynamoDB, Cassandra, etc.).
Deep understanding of data pipelines, event-driven systems, and distributed messaging systems like Kafka, RabbitMQ, etc.
Proficiency in cloud platforms like AWS and infrastructure-as-code (Terraform, Kubernetes).
Strong problem-solving and debugging skills with a passion for performance optimization.

Good to have
Prior experience in fintech, payments, lending, or banking domains.
Exposure to real-time analytics and big data technologies (Spark, Flink, Presto, ClickHouse, etc.).
Experience with containerization and microservices architecture.
Open-source contributions or active participation in tech communities.

Why join us
Be part of an early-stage fintech startup solving real-world financial challenges.
Work on cutting-edge technology that handles millions of transactions daily.
Opportunity to lead and grow in a high-impact leadership role.
Collaborate with a world-class team of engineers, product leaders, and fintech experts.
In-person role in Bengaluru - an exciting, fast-paced work environment!
Collaborate directly with the Co-founder and the founding Head of Engineering to lead the engineering team, develop scalable solutions, and ensure seamless delivery of key projects.

Perks That Go Beyond the Paycheck
Relocation assistance to make your move seamless.
Free office meals (lunch & dinner).
Generous leave policy, including birthday leave, period leave, paternity and maternity support, and more.
Salary advance and loan policies for any financial help.
Quarterly rewards and recognition programs, and a referral program with great incentives.
Access to the latest gadgets and tools.
Comprehensive health insurance for you and your family, plus mental health support.
Tax benefits with options like food coupons, phone allowances, and car/device leasing.
Retirement perks like PF contribution, leave encashment and gratuity.

Here's all the tea on FamApp
FamApp focuses on financial inclusion of the next generation by providing UPI & card payments to everyone above 11 years old. Our flagship Spending Account, FamX, seamlessly integrates UPI and card payments, enabling users to manage, save, and learn about their finances effortlessly.

Revolutionizing Payments and FinTech
FamApp has enabled 10 million+ users to make UPI and card payments across India, removing the inconvenience of carrying cash everywhere. Users get to customise their FamX card with doodles, which lets them add a personal touch to their payments.

Trusted by leading investors
We're proud to be supported by renowned investors like Elevation Capital, Y-Combinator, Peak XV (formerly Sequoia Capital India), Venture Highway, Global Founder's Capital, and esteemed angels Kunal Shah and Amrish Rao.

Join Our Dynamic Team
At Fam, our people-first approach is reflected in our generous leave policies, flexible work schedules, comprehensive health benefits, and free mental health sessions. We don't mean to brag, but we promise you'll be surrounded by some of the most fun, talented and passionate people in the startup space. Want to see what makes life at Fam so awesome? Check out our shenanigans!
Posted 2 weeks ago