0.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
You deserve to do what you love, and love what you do – a career that works as hard for you as you do. At Fiserv, we are more than 40,000 #FiservProud innovators delivering superior value for our clients through leading technology, targeted innovation and excellence in everything we do. You have choices – if you strive to be a part of a team driven to create with purpose, now is your chance to Find your Forward with Fiserv.

Requisition ID: R-10356381
Date posted: 06/12/2025
End Date: 06/27/2025
City: Pune
State/Region: Maharashtra
Country: India
Location Type: Onsite

Calling all innovators – find your future at Fiserv. We're Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Tech Lead, Data Architecture

What does a successful Snowflake Advisor do?
We are seeking a highly skilled and experienced Snowflake Advisor to take ownership of our data warehousing strategy, implementation, maintenance and support. In this role, you will design, develop, and lead the adoption of Snowflake-based solutions to ensure scalable, efficient, and secure data systems that empower our business analytics and decision-making processes. As a Snowflake Advisor, you will collaborate with cross-functional teams, lead data initiatives, and act as the subject matter expert for Snowflake across the organization.

What you will do:
- Define and implement best practices for data modelling, schema design, and query optimization in Snowflake
- Develop and manage ETL/ELT workflows to ingest, transform and load data into Snowflake from various sources (see the Snowpipe sketch at the end of this listing)
- Integrate data from diverse systems such as databases, APIs, flat files and cloud storage into Snowflake, using tools like StreamSets, Informatica or dbt to streamline data transformation processes
- Monitor and tune Snowflake performance, including warehouse sizing, query optimization and storage management
- Manage Snowflake caching, clustering and partitioning to improve efficiency
- Analyze and resolve query performance bottlenecks
- Monitor and resolve data quality issues within the warehouse
- Collaborate with data analysts, data engineers and business users to understand reporting and analytics needs
- Work closely with the DevOps team on automation, deployment and monitoring
- Plan and execute strategies for scaling Snowflake environments as data volume grows
- Monitor system health and proactively identify and resolve issues
- Implement automation for regular tasks
- Enable seamless integration of Snowflake with BI tools like Power BI and create dashboards
- Support ad hoc query requests while maintaining system performance
- Create and maintain documentation related to data warehouse architecture, data flow, and processes
- Provide technical support, troubleshooting, and guidance to users accessing the data warehouse
- Optimize Snowflake queries and manage performance
- Keep up to date with emerging trends and technologies in data warehousing and data management
- Apply good working knowledge of the Linux operating system
- Use Git and other repository management solutions
- Apply good knowledge of monitoring tools like Dynatrace and Splunk
- Serve as a technical leader for Snowflake-based projects, ensuring alignment with business goals and timelines
- Provide mentorship and guidance to team members in Snowflake implementation, performance tuning and data management
- Collaborate with stakeholders to define and prioritize data warehousing initiatives and roadmaps
- Act as the point of contact for Snowflake-related queries, issues and initiatives

What you will need to have:
- 8 to 10 years of experience with data management tools like Snowflake, StreamSets and Informatica
- Experience with monitoring tools like Dynatrace and Splunk
- Experience with Kubernetes cluster management, CloudWatch for monitoring and logging, and the Linux OS
- Ability to track progress against assigned tasks, report status, and proactively identify issues
- Ability to present information effectively in communications with peers and the project management team
- Highly organized; works well in a fast-paced, fluid and dynamic environment

What would be great to have:
- Experience with EKS for managing Kubernetes clusters
- Containerization technologies such as Docker and Podman
- AWS CLI for command-line interactions
- CI/CD pipelines using Harness
- S3 for storage solutions and IAM for access management
- Banking and financial services experience
- Knowledge of software development life cycle best practices

Thank you for considering employment with Fiserv. Please:
- Apply using your legal name
- Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable)

Our commitment to Diversity and Inclusion: Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv.
Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
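As a hedged illustration of the Snowpipe-based ingestion referenced in the responsibilities above, here is a minimal sketch using the snowflake-connector-python library. All identifiers (account, stage, table, pipe names) are hypothetical placeholders, not anything from the posting itself.

```python
# Minimal sketch: create an auto-ingest Snowpipe over an external stage.
# All identifiers (account, database, stage, table) are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # hypothetical account locator
    user="etl_user",
    password="...",              # use key-pair auth or a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

ddl = """
CREATE PIPE IF NOT EXISTS raw.orders_pipe
  AUTO_INGEST = TRUE              -- fires on cloud-storage event notifications
AS
COPY INTO raw.orders
  FROM @raw.orders_stage          -- external stage pointing at S3/ADLS/GCS
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
"""

with conn.cursor() as cur:
    cur.execute(ddl)
    # Check pipe state, in the spirit of the monitoring duties above.
    cur.execute("SELECT SYSTEM$PIPE_STATUS('raw.orders_pipe')")
    print(cur.fetchone()[0])
conn.close()
```

In a real environment the stage, file format, and notification integration would already be defined, and credentials would come from a secrets store rather than the script.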
Posted 6 days ago
5.0 years
0 Lacs
Pune, Maharashtra
On-site
Job details
Employment Type: Full-Time
Location: Pune, Maharashtra, India
Job Category: Innovation & Technology
Job Number: WD30240361

Job Title: ML Platform Engineer – AI & Data Platforms

ML Platform Engineering & MLOps (Azure-Focused)
- Build and manage end-to-end ML/LLM pipelines on Azure ML using Azure DevOps for CI/CD, testing, and release automation (see the sketch at the end of this listing)
- Operationalize LLMs and generative AI solutions (e.g., GPT, LLaMA, Claude) with a focus on automation, security, and scalability
- Develop and manage infrastructure as code using Terraform, including provisioning compute clusters (e.g., Azure Kubernetes Service, Azure Machine Learning compute), storage, and networking
- Implement robust model lifecycle management (versioning, monitoring, drift detection) with Azure-native MLOps components

Infrastructure & Cloud Architecture
- Design highly available and performant serving environments for LLM inference using Azure Kubernetes Service (AKS) and Azure Functions or App Services
- Build and manage RAG pipelines using vector databases (e.g., Azure Cognitive Search, Redis, FAISS) and orchestrate with tools like LangChain or Semantic Kernel
- Ensure security, logging, role-based access control (RBAC), and audit trails are implemented consistently across environments

Automation & CI/CD Pipelines
- Build reusable Azure DevOps pipelines for deploying ML assets (data pre-processing, model training, evaluation, and inference services)
- Use Terraform to automate provisioning of Azure resources, ensuring consistent and compliant environments for data science and engineering teams
- Integrate automated testing, linting, monitoring, and rollback mechanisms into the ML deployment pipeline

Collaboration & Enablement
- Work closely with Data Scientists, Cloud Engineers, and Product Teams to deliver production-ready AI features
- Contribute to solution architecture for real-time and batch AI use cases, including conversational AI, enterprise search, and summarization tools powered by LLMs
- Provide technical guidance on cost optimization, scalability patterns, and high-availability ML deployments

Qualifications & Skills

Required Experience
- Bachelor's or Master's in Computer Science, Engineering, or a related field
- 5+ years of experience in ML engineering, MLOps, or platform engineering roles
- Strong experience deploying machine learning models on Azure using Azure ML and Azure DevOps
- Proven experience managing infrastructure as code with Terraform in production environments

Technical Proficiency
- Proficiency in Python (PyTorch, Transformers, LangChain) and Terraform, with scripting experience in Bash or PowerShell
- Experience with Docker and Kubernetes, especially within Azure (AKS)
- Familiarity with CI/CD principles, model registry, and ML artifact management using Azure ML and Azure DevOps Pipelines
- Working knowledge of vector databases, caching strategies, and scalable inference architectures

Soft Skills & Mindset
- Systems thinker who can design, implement, and improve robust, automated ML systems
- Excellent communication and documentation skills—capable of bridging platform and data science teams
- Strong problem-solving mindset with a focus on delivery, reliability, and business impact

Preferred Qualifications
- Experience with LLMOps, prompt orchestration frameworks (LangChain, Semantic Kernel), and open-weight model deployment
- Exposure to smart buildings, IoT, or edge-AI deployments
- Understanding of governance, privacy, and compliance concerns in enterprise GenAI use cases
- Certifications such as Azure Solutions Architect, Azure AI Engineer, or HashiCorp Terraform Associate are a plus
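As a hedged illustration of the Azure ML pipeline work above, here is a minimal sketch that submits a training job with the Azure ML v2 Python SDK (azure-ai-ml). The subscription, resource group, workspace, compute, and environment names are all hypothetical placeholders.

```python
# Minimal sketch: submit a training job to Azure ML with the v2 SDK.
# Subscription/workspace/compute/environment names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, command

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="rg-ml-platform",
    workspace_name="ws-ml-platform",
)

job = command(
    code="./src",                                  # folder containing train.py
    command="python train.py --epochs 3",
    environment="azureml:my-training-env@latest",  # hypothetical registered env
    compute="cpu-cluster",                         # hypothetical AML compute target
    display_name="llm-finetune-smoke-test",
)

# In a real setup this submission would be triggered from an
# Azure DevOps pipeline stage rather than run by hand.
returned_job = ml_client.jobs.create_or_update(job)
print(returned_job.studio_url)
```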
Posted 6 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Information
Company: Accumn
Date Opened: 06/12/2025
Job Type: Full time
Industry: IT Services
City: Bangalore
State/Province: Karnataka
Country: India
Zip/Postal Code: 560001

About Us
Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India's debt market to marching towards global corporate markets – from one product to one holistic product suite with seven products – Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles.

Job Description
Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize them. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side, and helps corporates discover investors and access debt capital efficiently on the other. Switching between platforms is easy, which means investors can lend, invest and trade bonds – all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance.

- Yubi Credit Marketplace – With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any and all capital requirements
- Yubi Invest – Fixed income securities platform for wealth managers and financial advisors to channel client investments in fixed income
- Financial Services Platform – Designed for financial institutions to manage co-lending partnerships and asset-based securitization
- Spocto – Debt recovery and risk mitigation platform
- Corpository – Dedicated SaaS solutions platform powered by decision-grade data, analytics, pattern identification, early warning signals and predictions for lenders, investors and business enterprises

So far, we have onboarded 17,000+ enterprises and 6,200+ investors and lenders, and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are the only platform of its kind globally, revolutionizing the segment. At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1,000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story.

Role and Responsibilities
- Develop a revolutionary finance marketplace product, including design, user experience, and business logic, to ensure the product is easy to use, appealing, and effective
- Ensure that the implementation adheres to defined specs and processes in the PRD
- Own end-to-end quality of deliverables during all phases of the software development lifecycle
- Work with managers, leads and peers to come up with implementation options
- Function effectively in a fast-paced environment and manage continuously changing business needs
- Mentor junior engineers and foster innovation within the team
- Design and develop the pod's software components and systems
- Evaluate and recommend tools, technologies, and processes, driving adoption to ensure high-quality products

Requirements
- Minimum 5+ years of experience in backend development, delivering enterprise-class web applications and services
- Expertise in Java technologies including Spring, Hibernate, and Kafka (a consumer sketch follows this listing)
- Strong knowledge of NoSQL and RDBMS, with expertise in schema design
- Familiarity with Kubernetes deployment and managing CI/CD pipelines
- Experience with microservices architecture and RESTful APIs
- Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK stack)
- Competent in software engineering tools (e.g., Java build tools) and best practices (e.g., unit testing, test automation, continuous integration)
- Experience with the cloud technologies of AWS and GCP and developing secure applications
- Strong understanding of the software development lifecycle and agile methodologies
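This posting's stack is Java (Spring, Hibernate, Kafka). Purely for illustration, and keeping one sketch language across this digest, here is a minimal event consumer using the confluent-kafka Python client; the broker address, topic, and group id are hypothetical, and a Spring Kafka listener would be the natural equivalent in this role's stack.

```python
# Minimal sketch of an event consumer for a microservices setup.
# Broker, topic, and group id are hypothetical placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",
    "group.id": "loan-events-processor",
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,       # commit only after successful handling
})
consumer.subscribe(["loan.disbursed"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print("consumer error:", msg.error())
            continue
        # In a real service this would update a datastore idempotently,
        # so redelivered events do not double-apply.
        print(msg.key(), msg.value())
        consumer.commit(msg)           # at-least-once processing
finally:
    consumer.close()
```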
Posted 6 days ago
5.0 years
0 Lacs
Hadapsar, Pune, Maharashtra
On-site
Job Title: IT Engineer / System Engineer
Experience Required: 5+ Years
Location: Pune
Employment Type: Full-Time

Required Skills & Qualifications:
- 5–7+ years of experience in IT infrastructure and security roles
- Strong expertise in Windows Server, Active Directory, FortiGate, and M365
- Hands-on experience with firewalls, VPN, NAS, and endpoint protection
- Familiar with ISP tools, ticket logging, and vendor coordination
- Strong problem-solving and troubleshooting skills

Preferred Certifications:
- Microsoft Certified (Server / M365)
- Fortinet NSE (any level)
- Sophos Administrator or Engineer
- CCNA

Soft Skills:
- Excellent communication and coordination skills
- Self-motivated, analytical, and team-oriented
- High attention to detail and structured work style

Job Type: Full-time
Pay: Up to ₹650,000.00 per year
Schedule: Monday to Friday
Education: Bachelor's (Preferred)
Location: Hadapsar, Pune, Maharashtra (Preferred)
Willingness to travel: 25% (Preferred)
Work Location: In person
Speak with the employer: +91 9270001751
Expected Start Date: 01/07/2025
Posted 6 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
Remote
IMEA (India, Middle East, Africa) – India
LIXIL INDIA PVT LTD
Employee Assignment | Fully remote possible | Full Time | 30 June 2025

Role Description
We are seeking a skilled SAP ABAP Technical Developer to join our team. The ideal candidate will possess strong technical development expertise coupled with a deep understanding of SAP functional processes. This role requires the ability to handle complex enhancements, integrations, and development of end-to-end applications from SAP backend to Fiori frontend, with a focus on performance optimization and adherence to best practices.

Key Accountabilities

End-to-End Application Development
- Design, develop, and maintain custom ABAP WRICEF objects according to business requirements
- Design and develop full-stack applications from SAP backend (ABAP) to frontend (Fiori/UI5)
- Apply performance optimization techniques for both backend and frontend components
- Implement proper error handling and logging mechanisms across the application stack

Clean Core Implementation
- Apply the Clean Core strategy to maintain a system architecture free from unnecessary customizations
- Develop extensions that are strictly separate from the SAP standard application
- Implement cloud-compliant extensions and integrations that support future upgradability
- Understand and apply side-by-side extensibility concepts using SAP BTP
- Ensure modifications follow Clean Core principles to reduce technical debt
- Apply appropriate extension methods based on business requirements (in-app, side-by-side, etc.)

Requirements

Education
- A Bachelor's degree in Computer Science or a related field; experience may be substituted for education

Work experience
- 7+ years of experience in ABAP development, including:
  - Core ABAP programming (WRICEF, BAPI, ALE/IDocs, RFC, etc.)
  - Object-Oriented ABAP
  - ABAP CDS Views and OData services
  - SAP Fiori/UI5 development
  - ABAP on HANA optimization techniques
  - WebDynpro ABAP (desirable)
- Experience with StreamServe
- Experience working in a global organization
- SCRUM knowledge and experience

Skills
- Fluent business English, both verbal and written
Posted 6 days ago
6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Title: Networking Security Engineer
Experience: 6 to 10 years
Location: Trivandrum

About Role
We are looking for a Network Security Engineer to help us maintain the team's high level of service and keep our network security aligned with business demands. This role will find and resolve issues with our current network and improve these systems where appropriate. Strong interpersonal skills will be crucial, enabling communication and collaboration with colleagues from across the company.

Roles And Responsibilities
- Develop and maintain device hardening standards
- Deliver network compliance reports
- Monitor the company's intrusion detection systems
- Prepare detailed reports regarding intrusions and events
- Assess and analyze anomalous network and system activity
- Deploy and administer network access control lists, firewall rule sets, VPNs, and NACs
- Maintain in-house compliance monitoring systems using scripting and an SQL interface (a sketch follows this listing)
- Identify and implement improvements to our network assessment processes
- Develop a logging pipeline for new security appliances
- Review connectivity requests to ensure compliance with in-house and external policies
- Consult with internal IT groups regarding security when implementing new and existing network technologies
- Troubleshoot client issues related to security

Key Skills
- 6+ years of experience in information technology
- 5+ years of experience with information security
- 2+ years of experience with scripting languages, such as Shell, Perl, or Expect
- Experience with Cisco LAN/WAN network engineering
- Experience with the Unix command line
- Experience deploying host-based mitigation tools
- Experience troubleshooting network issues
- Experience configuring Access Control Lists, firewalls, and routers
- Experience operating a DMZ network
- Experience with routing protocols, such as BGP or OSPF
- Thorough knowledge of information security principles
- Knowledge of network hardware devices
- Knowledge of network services, exploits, vulnerabilities, and attacks
- Knowledge of ACS external database authentication methods
- Strong communication skills and the ability to convey technical information to clients
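A hedged sketch of the scripting-plus-SQL compliance monitoring this role mentions: parse firewall deny events out of a syslog file and load them into SQLite for reporting. The log format and file path are hypothetical; real appliances emit vendor-specific formats.

```python
# Minimal sketch: extract firewall deny events from syslog into SQLite
# for compliance reporting. Log format and path are hypothetical.
import re
import sqlite3

DENY_RE = re.compile(
    r"DENY .*src=(?P<src>\d+\.\d+\.\d+\.\d+) .*dst=(?P<dst>\d+\.\d+\.\d+\.\d+) "
    r".*dport=(?P<dport>\d+)"
)

db = sqlite3.connect("compliance.db")
db.execute("CREATE TABLE IF NOT EXISTS denies (src TEXT, dst TEXT, dport INTEGER)")

with open("/var/log/firewall.log") as log:          # hypothetical path
    for line in log:
        m = DENY_RE.search(line)
        if m:
            db.execute(
                "INSERT INTO denies VALUES (?, ?, ?)",
                (m["src"], m["dst"], int(m["dport"])),
            )
db.commit()

# Simple compliance query: top denied destination ports this period.
for dport, hits in db.execute(
    "SELECT dport, COUNT(*) FROM denies GROUP BY dport ORDER BY 2 DESC LIMIT 5"
):
    print(f"port {dport}: {hits} denies")
db.close()
```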
Posted 6 days ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Key Skills: ASP.NET / .NET Core MVC, Entity Framework, Web API, LINQ, SQL Server 2012 & Azure, Analytical Thinking

Job Description
- 5+ years of experience in web development using ASP.NET / .NET Core MVC
- SQL Server 2012 and above
- Design patterns and practices, object-oriented programming, databases, SQL, web programming, SOLID principles, cloud technologies (AWS, Azure, etc.)
- Experience with cloud technologies and frameworks such as serverless programming, preferably on Azure
- 3+ years with REST services, SOA, and Web APIs backed by SQL Server
- 3+ years' experience working in agile methodologies (Scrum, Kanban)
- DevOps mindset: 3+ years' experience in a mature CI/CD SDLC environment, having implemented exception handling, logging, monitoring, and performance measurement, with knowledge of operational metrics
- Adheres to development deadlines and schedules
- Understanding of performance enhancement best practices
- Ability to work individually and follow industry standards
Posted 6 days ago
10.0 years
0 Lacs
Greater Ahmedabad Area
Remote
Job Title: Engineering Manager
Experience: 10+ Years
Location: Ahmedabad
Department: Engineering Management

About Simform
Simform is a premier digital engineering company specializing in Cloud, Data, AI/ML, and Experience Engineering to create seamless digital experiences and scalable products. Simform is a strong partner for Microsoft, AWS, Google Cloud, and Databricks. With a presence in 5+ countries, Simform primarily serves North America, the UK, and the Northern European market. Simform takes pride in being one of the most reputed employers in the region, having created a thriving work culture with a high work-life balance that gives a sense of freedom and opportunity to grow.

Role Overview
We are seeking an experienced Engineering Manager to lead and execute complex technical projects for large-scale client accounts. This role requires a blend of strong technical leadership, hands-on engineering capabilities, and strategic project oversight. You will work closely with cross-functional teams including development, QA, DevOps, and architecture leads to design and deliver robust, scalable, and secure software solutions. The ideal candidate has deep technical expertise in backend and cloud technologies, strong stakeholder management skills, and a track record of driving engineering excellence across distributed teams in fast-paced environments. This role also involves contributing to pre-sales efforts, internal capability building, and enforcing best practices across project lifecycles.

Key Responsibilities
- Lead the delivery of large, technically complex projects by designing, validating, and optimizing technical architectures across diverse tech stacks
- Translate functional requirements into technical solutions for development teams, assisting with implementation and troubleshooting while acting as the project owner
- Identify delivery risks, technical bottlenecks, or resource constraints early and implement mitigation strategies in collaboration with relevant stakeholders
- Track and report on engineering KPIs such as sprint velocity, defect leakage, and deployment frequency to ensure quality and timely delivery
- Work with Project Managers focusing on PoC, prototyping and technical solutions, or solely manage the overall project, as needed
- Maintain a hands-on approach to technology, with the ability to perform code analysis, reviews, audits, and troubleshooting
- Ensure adherence to engineering best practices and enforce secure coding standards across the project SDLC
- Collaborate with the QA team to define test cases and review/validate test scripts and test results, ensuring comprehensive functional and non-functional testing
- Advocate for process improvements, technical proofs of concept (PoCs), and the reduction of technical debt
- Nurture and grow client accounts by ensuring optimized and robust solution delivery with the highest quality standards
- Serve as a liaison between technical and business stakeholders, facilitating clear communication and alignment
- Provide technical support for pre-sales initiatives and client interactions
- Help define and implement architectural standards, guidelines, principles, guardrails, and governance practices, working with different Tech Stack Leads to drive consistency and quality across projects
- Contribute to internal initiatives such as technical training, building accelerators, managing technical audits, and creating reusable components
Required Skills And Qualifications
- 10+ years of technical experience in web/cloud/mobile application development with a broad range of backend technologies and in-depth expertise in at least one backend language (e.g., Node.js, Python, .NET, PHP) and cloud platforms (AWS, Azure or GCP)
- 2+ years of experience in engineering team management, technical project management, or large multi-team customer account management
- Strong knowledge of system design principles including security, scalability, caching, availability, fault tolerance, performance optimization, observability (logging, alerting and monitoring) and maintainability
- Hands-on expertise in at least one backend tech stack, with the ability to conduct code reviews, audits, and deep troubleshooting
- Proven experience in designing and delivering robust, secure, and highly optimized production-grade software systems at scale
- In-depth, hands-on understanding of cloud services (compute, storage, networking, security) and cloud-native solution design on AWS, Azure, or GCP
- Familiarity with DevOps practices and CI/CD pipelines, including tools such as Jenkins, GitLab CI, GitHub Actions, or similar
- Strong interpersonal skills and stakeholder management capabilities
- Excellent verbal and written communication skills; capable of mentoring, stakeholder presentation, and influencing technical teams and other stakeholders
- Demonstrated ability to collaborate cross-functionally with technical and non-technical, internal and external teams to ensure end-to-end delivery
- Solution-oriented mindset with the ability to drive incremental technical execution in the face of ambiguity and constraints
- Strong understanding of Agile/Scrum methodologies with experience leading Agile teams, ceremonies, and sprint planning
- Understanding of architectural documentation and artifacts such as HLD, LLD, architecture diagrams, entity relationship diagrams (ERDs), process flows, and sequence diagrams
- Awareness of compliance, data privacy, and regulatory frameworks such as GDPR, HIPAA, and SOC 2
- Working knowledge of frontend technologies (e.g., React, Angular) and how they integrate with backend and cloud components
- Strong adaptability and a continuous learning mindset in fast-paced, high-growth environments

Preferred Skills
- Certifications in cloud architecture (e.g., AWS Certified Solutions Architect, Azure Solutions Architect Expert, or equivalent) are a plus
- Exposure to a diverse range of projects including cutting-edge technologies such as Data Engineering, AI or ML
- Knowledge of various testing tools and frameworks, e.g., JMeter, LoadRunner or equivalent
- Familiarity with mobile testing frameworks, e.g., Appium, Calabash or equivalent
- Experience with SaaS platforms or multi-tenant architecture is a strong plus

Skills: Technical Project Management, Engineering Management, Application Development, Team Building, Training and Development, System Design, Solution Architecture, Azure, AWS, Python/Node.js/.NET/PHP/MEAN, DevOps, CI/CD, Cloud-Native Design, Microservices, Event-Driven and Serverless Architecture

Why Join Us
- Young team, thriving culture
- Flat-hierarchical, friendly, engineering-oriented, and growth-focused culture
- Well-balanced learning and growth opportunities
- Free health insurance
- Office facilities with a game zone, in-office kitchen with affordable lunch service, and free snacks
- Sponsorship for certifications/events and library service
- Flexible work timing, leaves for life events, WFH and hybrid options
Posted 6 days ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Selected Intern's Day-to-day Responsibilities Include
- CI/CD Pipeline Support: Assist in designing, implementing, and maintaining continuous integration and deployment pipelines to streamline the software delivery process
- Infrastructure Automation: Learn and support the development of Infrastructure as Code (IaC) using tools like Terraform, CloudFormation, or Ansible
- Cloud Infrastructure: Support the deployment and management of cloud-based resources on platforms like AWS, Azure, or Google Cloud under guidance
- Monitoring & Logging: Assist in setting up and maintaining monitoring, logging, and alerting systems using tools like Prometheus, Grafana, ELK Stack, or Splunk (a small Prometheus sketch follows this listing)
- Configuration Management: Gain exposure to tools like Ansible, Chef, or Puppet to manage system configurations and ensure environment consistency
- Containerization & Orchestration: Learn to build and manage Docker containers and understand container orchestration using Kubernetes or similar platforms
- Collaboration & Troubleshooting: Work closely with cross-functional teams to understand system requirements, resolve issues, and ensure high system availability
- Version Control: Use Git for source code management and learn standard Git workflows as part of the development lifecycle

Required Skills And Qualifications
- Bachelor's degree (or pursuing final year) in Computer Science, Information Technology, or a related discipline
- Basic understanding of DevOps principles and cloud technologies
- Exposure to at least one scripting or programming language (e.g., Python, Bash)
- Familiarity with Linux/Unix environments
- Understanding of version control systems like Git
- Eagerness to learn tools like Jenkins, Docker, Kubernetes, and Terraform
- Strong problem-solving skills and willingness to work in a collaborative environment

About Company: Monkhub is a digital innovation company. We are passionate about developing and delivering great services. We use design thinking, creativity, innovation, and an analytical approach to solve complex problems and create a timeless experience that helps our partners positively impact their businesses, customers, and community. Our team is dedicated like monks, as our ethics are hard work and integrity.
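Purely as a hedged illustration of the monitoring and alerting work described above, here is a minimal Prometheus instrumentation sketch using the official prometheus_client library; the metric names and port are hypothetical.

```python
# Minimal sketch: expose application metrics for Prometheus to scrape.
# Metric names and the port are hypothetical.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

def handle_request() -> None:
    """Stand-in for real work; records count and latency."""
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)   # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```

A Prometheus server would scrape the /metrics endpoint, and alerting rules in Grafana or Alertmanager would sit on top of these series.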
Posted 6 days ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Manage and maintain infrastructure for development, testing, and deployment of applications
- Implement and manage CI/CD pipelines to automate build, test, and deployment processes
- Set up monitoring and logging systems to track performance and health of applications and infrastructure
- Collaborate with development teams to ensure infrastructure and deployment processes meet their needs
- Ensure infrastructure and deployment processes comply with security and regulatory requirements (an illustrative compliance-check sketch follows this listing)
- Automate repetitive tasks to improve efficiency and reduce risk of human error
- Optimize performance of applications and infrastructure to ensure efficiency and scalability
- Create and maintain documentation for infrastructure, deployment processes, and other relevant areas
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications – Technical Skills
- CI/CD tools: Jenkins, GitHub Actions, Azure DevOps
- Configuration management: Ansible, Puppet, Chef
- Containerization: Docker, Kubernetes
- Cloud platforms: AWS, Azure, Google Cloud
- Monitoring and logging: Prometheus, Grafana, ELK stack (Elasticsearch, Logstash, Kibana), Splunk
- Scripting languages: Python, Bash, PowerShell
- Infrastructure as Code (IaC): Terraform, CloudFormation
- Version control: Git
- Security best practices: knowledge of security best practices and tools for securing the DevOps pipeline
- Collaboration tools: Microsoft Teams, Slack

Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or related field
- Certifications in DevOps or related fields
- 7+ years of experience in DevOps roles with enterprise-scale impact
- Experience managing DevOps projects end-to-end
- Solid problem-solving and analytical skills
- Excellent communication skills for technical and non-technical audiences
- Knowledge of security and compliance standards

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes.
We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
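As a hedged example of the compliance automation described in the responsibilities above, here is a minimal boto3 sketch that flags EC2 instances missing a required tag; the tag key and region are hypothetical, and a real policy check would be broader than this.

```python
# Minimal sketch: flag EC2 instances missing a required compliance tag.
# Tag key and region are hypothetical placeholders.
import boto3

REQUIRED_TAG = "CostCenter"

ec2 = boto3.client("ec2", region_name="us-east-1")

non_compliant = []
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                non_compliant.append(instance["InstanceId"])

# In practice this result would feed a report or a remediation pipeline.
for instance_id in non_compliant:
    print(f"{instance_id} is missing required tag {REQUIRED_TAG!r}")
```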
Posted 6 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Gruve
Gruve is an innovative software services startup dedicated to transforming enterprises into AI powerhouses. We specialize in cybersecurity, customer experience, cloud infrastructure, and advanced technologies such as Large Language Models (LLMs). Our mission is to assist our customers in their business strategies by utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

Position Summary
We are looking for a Level 2 Engineer for our Device Management Team, with 4+ years of experience in device management. The team member will be responsible for the administration of security devices such as next-generation firewalls, should be well versed in the change and incident management process, and should be ready to work in 24x7 shifts.

Key Roles & Responsibilities

Firewall Operations & Troubleshooting
- Monitor, manage, and optimize firewall performance, health, and security
- Perform advanced troubleshooting of firewall-related issues, working closely with L1 and L3 teams
- Investigate security incidents, network anomalies, and connectivity issues
- Fine-tune firewall rules, policies, and configurations for performance and security improvements
- Work on packet captures and traffic analysis to diagnose network security issues

Rule & Policy Management
- Review, implement, and modify firewall rules while ensuring security and compliance standards
- Conduct firewall rule audits and remove redundant or obsolete rules (a rule-audit sketch follows this listing)
- Manage access control policies, VPN configurations, and segmentation policies
- Ensure compliance with organizational and regulatory security guidelines

Incident & Change Management
- Act as an escalation point for L1 engineers and assist in resolving complex firewall issues
- Participate in Root Cause Analysis (RCA) for recurring firewall incidents and recommend fixes
- Handle firewall upgrades, patches, and firmware updates as per change management procedures
- Support IT security and compliance teams in conducting risk assessments and vulnerability remediation

BAU Operations & Automation
- Implement automation scripts for firewall monitoring and reporting
- Optimize firewall logging, alerting, and monitoring for proactive security management
- Coordinate with vendors and OEM support teams for hardware/software troubleshooting

Documentation & Reporting
- Maintain up-to-date documentation on firewall configurations, rule changes, and security incidents
- Prepare reports on firewall performance, threat activities, and policy violations
- Provide training and mentorship to L1 engineers on firewall best practices

Basic Qualifications
- Bachelor's degree in IT, Computer Science, or a related field
- Relevant security/network certifications such as CCNP Security, Fortinet NSE 3+, Palo Alto PCNSA/PCNSE, Check Point CCSA/CCSE, Juniper JNCIS-SEC, or equivalent
- Strong verbal and written communication and interpersonal skills
- Strong analytical and problem-solving abilities
- Ability to work with minimal supervision or oversight
- Ability to work in a 24x7 environment (if required) and manage escalations effectively

Preferred Qualifications
- ITIL Foundation certification
- Experience working in shifts and hands-on experience with the ITIL framework
- Experience with multiple clients' environments and deployments

Why Gruve
At Gruve, we foster a culture of innovation, collaboration, and continuous learning.
We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you're passionate about technology and eager to make an impact, we'd love to hear from you. Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
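A hedged sketch of the firewall rule-audit automation this posting describes: pull a rule list from a management REST API and flag disabled, any-any, or unused rules. The endpoint, token, and response shape are entirely hypothetical; real firewall vendors each expose their own management APIs.

```python
# Minimal sketch: pull firewall rules from a management API and flag
# candidates for cleanup. Endpoint, token, and response schema are
# hypothetical; real vendor APIs (FortiGate, Palo Alto, ...) differ.
import requests

API = "https://fw-mgmt.example.internal/api/v1/rules"   # hypothetical
TOKEN = "..."                                           # from a secrets store

resp = requests.get(API, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
resp.raise_for_status()

findings = []
for rule in resp.json()["rules"]:                        # hypothetical schema
    if not rule.get("enabled", True):
        findings.append((rule["name"], "disabled rule, consider removal"))
    if rule.get("source") == "any" and rule.get("destination") == "any":
        findings.append((rule["name"], "overly permissive any/any rule"))
    if rule.get("hit_count", 0) == 0:
        findings.append((rule["name"], "zero hits, possibly obsolete"))

for name, reason in findings:
    print(f"{name}: {reason}")
```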
Posted 6 days ago
0.0 - 3.0 years
0 Lacs
India
On-site
Description
GroundTruth is an advertising platform that turns real-world behavior into marketing that drives in-store visits and other real business results. We use observed real-world consumer behavior, including location and purchase data, to create targeted advertising campaigns across all screens, measure how consumers respond, and uncover unique insights to help optimize ongoing and future marketing efforts. With this focus on media, measurement, and insights, we provide marketers with tools to deliver media campaigns that drive measurable impact, such as in-store visits, sales, and more. Learn more at groundtruth.com. We believe that innovative technology starts with the best talent and have been ranked one of Ad Age's Best Places to Work in 2021, 2022, 2023 & 2025!

About Team
GroundTruth seeks an Associate Software Engineer to join our Reporting team. The Reporting Team at GroundTruth is responsible for designing, building, and maintaining data pipelines and dashboards that deliver actionable insights. We ensure accurate and timely reporting to drive data-driven decisions for advertisers and publishers. We take pride in building an Engineering Team composed of strong communicators who collaborate with multiple business and engineering stakeholders to find compromises and solutions. Our engineers are organised and detail-oriented team players who are problem solvers with a maker mindset. As an Associate Software Engineer (ASE) on our Integration Team, you will build solutions that add new capabilities to our platform.

You Will
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies (an Airflow sketch follows this listing)
- Lead engineering efforts across multiple software components
- Write excellent production code and tests, and help others improve in code reviews
- Analyse high-level requirements to design, document, estimate, and build systems
- Continuously improve the team's practices in code quality, reliability, performance, testing, automation, logging, monitoring, alerting, and build processes

You Have
- B.Tech./B.E./M.Tech./MCA or equivalent in computer science
- 0-3 years of experience in data engineering
- Experience with the AWS stack used for data engineering: EC2, S3, Athena, Redshift, EMR, ECS, Lambda, and Step Functions
- Experience in MapReduce, Spark, and Glue
- Hands-on experience with Java/Python for the orchestration of data pipelines and data engineering tasks
- Experience writing analytical queries using SQL
- Experience with Airflow
- Experience with Docker
- Proficiency with Git

How can you impress us?
- Knowledge of REST APIs
- The following skills/certifications: Python, SQL/MySQL, AWS, Git
- Additional nice-to-have skills/certifications: Flask, FastAPI
- Knowledge of shell scripting
- Experience with BI tools like Looker
- Experience with DB maintenance
- Experience with Amazon Web Services and Docker
- Configuration management and QA practices

Benefits
At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Parental leave: maternity and paternity
- Flexible time off (earned leaves, sick leaves, birthday leave, bereavement leave & company holidays)
- In-office daily catered breakfast, lunch, snacks and beverages
- Health cover for any hospitalization,
  covering both the nuclear family and parents
- Tele-med for free doctor consultation, discounts on health checkups and medicines
- Wellness/gym reimbursement
- Pet expense reimbursement
- Childcare expenses and reimbursements
- Employee referral program
- Education reimbursement program
- Skill development program
- Cell phone reimbursement (mobile subsidy program)
- Internet reimbursement / postpaid cell phone bill / or both
- Birthday treat reimbursement
- Employee Provident Fund Scheme offering different tax-saving options such as Voluntary Provident Fund and employee and employer contribution up to 12% of basic
- Creche reimbursement
- Co-working space reimbursement
- National Pension System employer match
- Meal card for tax benefit
- Special benefits on salary account
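To illustrate the pipeline-orchestration work in this listing, here is a hedged, minimal Airflow DAG sketch; the DAG id, task names, and the extract/load bodies are hypothetical placeholders for real AWS extract/load logic.

```python
# Minimal sketch of a daily ETL DAG in Airflow. Task contents and
# names are hypothetical placeholders for real extract/load logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # e.g., pull yesterday's events from S3/Athena
    print("extracting for", context["ds"])

def load(**context):
    # e.g., COPY transformed data into Redshift
    print("loading for", context["ds"])

with DAG(
    dag_id="daily_reporting_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task   # load runs only after extract succeeds
```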
Posted 6 days ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description
We are looking for a highly skilled Microservices Engineer (.NET) to design, develop, and maintain scalable microservices-based architectures. You will play a crucial role in building high-performance, distributed, and cloud-native applications using .NET Core, Docker, Kubernetes, and modern DevOps practices.

Required Skills & Experience
- 2-4 years of .NET Core/.NET 6+ development experience
- Experience with microservices architecture and event-driven systems
- Strong expertise in SQL/NoSQL databases (SQL Server, MongoDB, Redis, etc.)
- Hands-on experience with Docker and Kubernetes in cloud environments
- Proficiency in RESTful APIs, gRPC, and API gateways
- Experience with message brokers (Kafka, RabbitMQ, Azure Service Bus)
- Strong understanding of authentication and authorization (OAuth2, JWT, Identity Server)
- Familiarity with DevOps, CI/CD tools, and Infrastructure as Code (Terraform, Helm)

Preferred Skills
- Experience with GraphQL
- Familiarity with Domain-Driven Design (DDD) and Clean Architecture
- Knowledge of serverless computing (AWS Lambda / Azure Functions)
- Exposure to monitoring and logging tools (Prometheus, ELK, Grafana)

Key Responsibilities
- Design & Development: Develop scalable, secure, and high-performance microservices using .NET Core/.NET 6+. Build RESTful APIs and integrate with various third-party services. Implement event-driven architectures using Kafka, RabbitMQ, or Azure Service Bus.
- Cloud & DevOps: Deploy microservices on Azure/AWS/GCP using Docker and Kubernetes. Implement CI/CD pipelines with GitHub Actions, Azure DevOps, or Jenkins.
- Performance & Security: Ensure high availability and fault tolerance of microservices. Apply best security practices (OAuth, JWT, API gateway security).
- Testing & Maintenance: Write unit, integration, and performance tests (xUnit, NUnit, Postman). Optimize services for latency, performance, and scalability.
- Collaboration & Documentation: Work closely with frontend, DevOps, and data engineers. Document microservice design and APIs using Swagger/OpenAPI.

What we offer
Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter.
Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest and most forward-thinking companies. Since 2000, we've been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 6 days ago
8.0 - 11.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company Description
About Sopra Steria
Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion.

Job Description
The world is how we shape it.

Position: Snowflake - Senior Technical Lead
Experience: 8-11 years
Location: Noida / Bangalore
Education: B.E. / B.Tech. / MCA
Primary Skills: Snowflake, Snowpipe, SQL, Data Modelling, DV 2.0, Data Quality, AWS, Snowflake Security
Good-to-have Skills: Snowpark, Data Build Tool (dbt), Finance Domain

Required Experience
- Experience with Snowflake-specific features: Snowpipe, Streams & Tasks, Secure Data Sharing (a Streams & Tasks sketch follows this listing)
- Experience in data warehousing, with at least 2 years focused on Snowflake
- Hands-on expertise in SQL, Snowflake scripting (JavaScript UDFs), and Snowflake administration
- Proven experience with ETL/ELT tools (e.g., dbt, Informatica, Talend, Matillion) and orchestration frameworks
- Deep knowledge of data modeling techniques (star schema, data vault) and performance tuning
- Familiarity with data security, compliance requirements, and governance best practices
- Experience in Python, Scala, or Java for Snowpark development is good to have
- Strong understanding of cloud platforms (AWS, Azure, or GCP) and related services (S3, ADLS, IAM)

Key Responsibilities
- Define data partitioning, clustering, and micro-partition strategies to optimize performance and cost
- Lead the implementation of ETL/ELT processes using Snowflake features (Streams, Tasks, Snowpipe)
- Automate schema migrations, deployments, and pipeline orchestration (e.g., with dbt, Airflow, or Matillion)
- Monitor query performance and resource utilization; tune warehouses, caching, and clustering
- Implement workload isolation (multi-cluster warehouses, resource monitors) for concurrent workloads
- Define and enforce role-based access control (RBAC), masking policies, and object tagging
- Ensure data encryption, compliance (e.g., GDPR, HIPAA), and audit logging are correctly configured
- Establish best practices for dimensional modeling, data vault architecture, and data quality
- Create and maintain data dictionaries, lineage documentation, and governance standards
- Partner with business analysts and data scientists to understand requirements and deliver analytics-ready datasets
- Stay current with Snowflake feature releases (e.g., Snowpark, Native Apps) and propose adoption strategies
- Contribute to the long-term data platform roadmap and cloud cost-optimization initiatives

Qualifications
B.Tech / MCA

Additional Information
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
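A hedged sketch of the Streams & Tasks pattern named above: a stream captures changes on a raw table, and a scheduled task drains it into a curated table. All object names are hypothetical, and the SQL is submitted via snowflake-connector-python to stay in this digest's single sketch language.

```python
# Minimal sketch: incremental processing with a Snowflake Stream + Task.
# All object names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="dw_admin", password="...",
    warehouse="XFORM_WH", database="ANALYTICS", schema="RAW",
)

statements = [
    # Capture inserts/updates on the raw table.
    "CREATE STREAM IF NOT EXISTS raw.orders_stream ON TABLE raw.orders",
    # Scheduled task drains the stream into the curated layer.
    """
    CREATE TASK IF NOT EXISTS raw.orders_merge_task
      WAREHOUSE = XFORM_WH
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('raw.orders_stream')
    AS
      INSERT INTO curated.orders
      SELECT order_id, amount, updated_at
      FROM raw.orders_stream
    """,
    "ALTER TASK raw.orders_merge_task RESUME",  # tasks are created suspended
]

with conn.cursor() as cur:
    for stmt in statements:
        cur.execute(stmt)
conn.close()
```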
Posted 6 days ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: AI/ML Developer
Duration: 6 months (expected to be longer)
Work Location & Requirement: Chennai, onsite at least 3-4 days a week

Position Summary
We are seeking a highly skilled and motivated Development Lead with deep expertise in ReactJS, Python, and AI/ML DevOps, along with working familiarity with AWS cloud services. This is a hands-on individual contributor role focused on developing and deploying a full-stack AI/ML-powered web application. The ideal candidate should be passionate about building intelligent, user-centric applications and capable of owning the end-to-end development process.

Position Description
- Design and develop intuitive and responsive web interfaces using ReactJS
- Build scalable backend services and RESTful APIs using Python frameworks (e.g., Flask, FastAPI, or Django)
- Integrate AI/ML models into the application pipeline and support inferencing, monitoring, and retraining flows (a FastAPI inference sketch follows this listing)
- Automate development workflows and model deployments using DevOps best practices and tools (Docker, CI/CD, etc.)
- Deploy applications and ML services on AWS infrastructure, leveraging services such as EC2, S3, Lambda, SageMaker, and EKS
- Ensure performance, security, and reliability of the application through testing, logging, and monitoring
- Collaborate with data scientists, designers, and product stakeholders to refine and implement AI-powered features
- Take ownership of application architecture, the development lifecycle, and release management

Minimum Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 8+ years of hands-on experience in software development
- Strong expertise in ReactJS- and NodeJS-based web application development
- Proficiency in Python for backend development and AI/ML model integration
- Experience with at least one AI/ML framework, including LLMs
- Solid understanding of DevOps concepts for ML workflows: containerization, CI/CD, testing, and monitoring
- Experience deploying and operating applications in AWS cloud environments
- Self-driven, with excellent problem-solving skills and attention to detail
- Strong communication skills and the ability to work independently in an agile, fast-paced environment
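Purely illustrative of the model-serving responsibilities above, here is a minimal FastAPI inference endpoint sketch; the model object and request schema are hypothetical stand-ins for a real trained artifact loaded from S3 or a model registry.

```python
# Minimal sketch: serve model predictions over a REST endpoint.
# The "model" here is a hypothetical stand-in for a loaded artifact.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="inference-service")

class PredictRequest(BaseModel):
    features: list[float]

class PredictResponse(BaseModel):
    score: float

def load_model():
    # In a real service: load from S3/SageMaker/a model registry at startup.
    return lambda xs: sum(xs) / max(len(xs), 1)   # placeholder "model"

model = load_model()

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # Logging/metrics hooks would go here to support monitoring
    # and retraining flows.
    return PredictResponse(score=model(req.features))

# Run with: uvicorn app:app --host 0.0.0.0 --port 8080
```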
Posted 6 days ago
10.0 years
0 Lacs
India
Remote
Empowering enterprises to keep the planet habitable for all, Terrascope aspires to be the easiest carbon measurement and decarbonization platform for companies in the land, nature, and net-zero economy sectors. Terrascope is a leading decarbonisation software platform designed specifically for the Land, Nature (LAN), and Net-Zero Economy (NZE) sectors. As the easiest-to-use platform for these sectors, our comprehensive solution blends deep industry expertise with advanced climate science, data science, and machine learning. Terrascope enables companies to effectively manage emissions across their supply chains. Our integrated platform offers solutions for Product and Corporate Carbon Footprinting, addressing Scope 3 and land-based emissions, SBTi FLAG & GHG Protocol LSR reporting, and supporting enterprise decarbonisation goals.

Publicly launched in June 2022, Terrascope works with customers across sectors, from agriculture, food & beverages, manufacturing, retail and luxury, to transportation, real estate, and TMT. Terrascope is globally headquartered in Singapore and operates in major markets across APAC, North America, and EMEA. Terrascope is a partner of the Monetary Authority of Singapore's ESG Impact Hub, a CDP Gold Accredited software provider, has been independently assured by Ernst & Young, and is a signatory of The Climate Pledge to achieve Net Zero by 2040.

We are looking for an Engineering Manager who combines deep technical expertise with strong leadership and management skills. You will lead the engineering team to deliver high-quality, scalable, and innovative software solutions. You will set the technical vision, guide the development of our SaaS product, and ensure that the engineering team collaborates effectively across product, design, and other functions. This role reports to the Senior Director of Product, Tech and Data.

In this role, you will drive:
- Technical vision and strategy: Make technical decisions and advise the team on the software development approach to take, considering trade-offs between approaches and mitigating known drawbacks
- Team leadership and development: Lead, mentor, and grow a diverse team of engineers across frontend, backend, DevSecOps, data, and analytics. Foster a culture of continuous learning, experimentation, and improvement. Identify skill gaps and oversee team training and recruitment efforts
- Cross-functional collaboration: Work closely with Product, Customer Success, Sustainability, and Solution Engineering to align on feature prioritization and roadmap execution. Collaborate with Design to ensure a seamless user experience with intuitive user flows and consistent design throughout the product
- Software development and project management: Plan and manage the engineering scope, schedules, and quality for each feature and release. Ensure timely and efficient execution across the product lifecycle
- Technical excellence and delivery: Build and maintain a scalable, high-quality codebase, enforcing coding standards, testing practices, and quality assurance processes, and managing technical debt. Conduct peer code reviews and provide hands-on technical guidance. Streamline workflows by integrating software engineering, quality assurance, DevSecOps, and MLOps frameworks, guidelines and processes into software development activities
- Security and compliance: Ensure secure coding and existing software development lifecycle guidelines and standards are met.
  Work with Legal and IT support to ensure the platform is secure and complies with security standards such as ISO 27001 and SOC
- Monitoring and incident management: Set up advanced logging and alerting systems to detect and resolve bugs proactively. Develop an incident response plan and lead technical resolution when incidents occur to minimise downtime and impact to customers
- Infrastructure and performance: Design and maintain an architecture that supports scalable growth and high availability. Effectively manage cloud infrastructure, and develop strategies to minimize cloud spending, optimizing resource allocation and infrastructure costs. Track and optimise SaaS metrics, such as uptime, response time, latency, and system performance, ensuring a seamless and efficient experience for customers
- Process improvement and innovation: Evaluate and select third-party tools, platforms, and services (e.g., databases, monitoring tools, security services) that can help accelerate development while maintaining cost-effectiveness. Stay up to date with industry trends, new technologies, and best practices to improve software development processes. Identify opportunities to optimize processes and implement best practices for industrialising AI and machine learning

To succeed in this role, you need to have / be:
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field
- Minimum 10+ years of experience in software development, with at least 3 years managing high-performing distributed engineering teams
- Prior experience in startups and working with remote teams
- Experience developing a B2B / enterprise-grade SaaS platform
- Hands-on technical skills and good competency in the following: (1) JavaScript, React and NodeJS to build scalable and modular web applications using the module federation technique; (2) server-side programming / building scalable web apps in Python; (3) database technologies such as Microsoft SQL, Oracle, PostgreSQL, and MongoDB; and (4) container orchestration technologies such as Docker and Kubernetes
- Familiarity with cloud services, especially AWS
- Experience with Agile methodologies
- Excellent communication and stakeholder engagement skills
- Data-driven, with strong analytical and prioritization skills
- A self-starter with a growth mindset and the proactiveness to work independently and drive toward results

Good to have 🎉:
- An entrepreneurial problem solver comfortable with managing risk and ambiguity
- Experience in AI-related technologies, AI/ML, Data Science, and scaling data science components in production
- Proven track record of people management, developing individuals' careers, and motivating teams to achieve outcomes

Your Privacy and Fairness in Our Recruitment Process
We are committed to protecting your data, ensuring fairness, and adhering to workplace fairness principles in our recruitment process. To enhance hiring efficiency and minimize bias, we use AI-powered tools to assist with tasks such as resume screening and candidate matching. These tools are designed and deployed in compliance with internationally recognized AI governance frameworks. We continuously validate our systems to uphold fairness, transparency, and accountability, ensuring they do not result in unfair or discriminatory outcomes. Your personal data is handled securely and transparently, and final hiring decisions are made by our recruitment team to ensure a human-centered approach.
If you have questions about how your data is processed or wish to report concerns about fairness, please contact us at careers@terrascope.com. We're committed to creating an inclusive environment for our strong and diverse team. We value diversity and foster a community where everyone can be their authentic self.
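To make the "track and optimise SaaS metrics" responsibility above concrete, here is a minimal, hypothetical Python sketch of an uptime/latency probe with threshold-based alerts. The endpoint URL and latency budget are invented for illustration and are not anything specified by the employer.

```python
# Illustrative only: a minimal uptime/latency probe with alerting thresholds.
import logging
import time

import requests

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("slo-probe")

ENDPOINT = "https://api.example.com/healthz"  # hypothetical health endpoint
LATENCY_BUDGET_MS = 500                       # hypothetical SLO threshold

def probe() -> None:
    start = time.monotonic()
    try:
        resp = requests.get(ENDPOINT, timeout=5)
        latency_ms = (time.monotonic() - start) * 1000
        if resp.status_code != 200:
            log.error("uptime check failed: HTTP %s", resp.status_code)
        elif latency_ms > LATENCY_BUDGET_MS:
            log.warning("latency %.0f ms exceeds %d ms budget", latency_ms, LATENCY_BUDGET_MS)
        else:
            log.info("ok: %.0f ms", latency_ms)
    except requests.RequestException as exc:
        log.error("uptime check failed: %s", exc)

if __name__ == "__main__":
    probe()
```

In practice a script like this would run on a schedule and feed a dashboard or pager rather than print to a log, but the shape of the check is the same.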
Posted 6 days ago
2.0 - 4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Key Responsibilities:
Design, implement, and manage GCP infrastructure using Terraform, Deployment Manager, or similar IaC tools.
Develop and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, GitHub Actions, or Cloud Build.
Automate provisioning, configuration management, and deployments with Terraform, Ansible, or Chef/Puppet.
Monitor system performance, reliability, and security using Prometheus, Grafana, Stackdriver, or Datadog.
Manage and optimize Kubernetes (GKE) clusters, ensuring efficient container orchestration.
Implement best practices for cloud security, IAM policies, VPC configurations, and networking.
Troubleshoot and resolve system and network issues to ensure high availability and performance.
Collaborate with development teams to implement infrastructure and deployment processes.
Work with logging and monitoring tools like Google Cloud Logging, Cloud Monitoring, and the ELK Stack.
Implement disaster recovery and backup strategies for cloud infrastructure.

Skills Needed:
Expertise in Google Cloud Platform (GCP) services such as Compute Engine, GKE, Cloud Storage, Cloud Functions, Cloud SQL, BigQuery, and more.
Hands-on experience with Infrastructure as Code (IaC) using Terraform, Cloud Deployment Manager, or Pulumi.
Experience with containerization and orchestration using Docker and Kubernetes (GKE preferred).
Proficiency in scripting languages such as Bash.
Experience with CI/CD tools like Jenkins, GitLab CI, ArgoCD, or Cloud Build.
Strong understanding of networking, VPC, load balancing, firewalls, and DNS in cloud environments.
Knowledge of logging, monitoring, and alerting tools (Prometheus, Stackdriver, ELK, Datadog).
Experience with security best practices, including IAM, OAuth, and compliance frameworks.
Excellent problem-solving and troubleshooting skills.
Excellent communication and presentation skills.

Add-ons:
Knowledge of other cloud platforms (AWS/Azure)

Experience:
2-4 years of experience in DevOps
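As a rough illustration of the monitoring work this posting describes, below is a minimal Python sketch that reads a metric from Prometheus's HTTP query API (GET /api/v1/query). The Prometheus URL and the PromQL expression are placeholder assumptions, not anything specified by the employer.

```python
# Sketch: query a Prometheus server for current cluster CPU utilisation.
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint
QUERY = 'avg(rate(node_cpu_seconds_total{mode!="idle"}[5m]))'

def current_cpu_utilisation() -> float:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    # Prometheus returns [timestamp, value-as-string] pairs per series.
    return float(result[0]["value"][1]) if result else 0.0

if __name__ == "__main__":
    print(f"cluster CPU utilisation: {current_cpu_utilisation():.1%}")
```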
Posted 6 days ago
3.0 years
0 Lacs
Kochi, Kerala, India
On-site
Key Responsibilities:
Ensure infrastructure scalability, reliability, and security by implementing infrastructure as code (IaC) principles.
Collaborate with software developers, quality assurance engineers, and IT professionals to guarantee smooth deployment, automation, and software infrastructure management.
Design and implement CI/CD pipelines for multiple software applications and environments.
Automate and streamline deployment processes, minimizing manual intervention and improving system efficiency.
Stay up-to-date with industry trends and emerging technologies, assessing their potential impact and recommending adoption where appropriate.
Troubleshoot software infrastructure issues and collaborate with the team to resolve them.
Implement monitoring, alerting, and logging systems to ensure proactive management of cloud environments.
Ensure transparent communication with the customer.
Ensure compliance with organizational security policies and industry standards.
Document incident details, analyse root causes, and implement preventive measures.

Skill Set:
Acquaintance with software development processes and methodologies
Experience with cloud infrastructure platforms such as AWS, Azure, and GCP
Excellent scripting skills in Bash, Python, or Ruby
Strong problem-solving and troubleshooting skills, with the ability to identify root causes and implement effective solutions
Proficiency in configuration management tools such as Ansible, Chef, or Puppet
Knowledge of security best practices and the ability to implement security controls at the infrastructure level
Experience with monitoring tools such as Zabbix, Nagios, etc.
Hands-on knowledge of Linux fundamentals, system administration, performance tuning, etc.
Good knowledge of networking, routing, and switching
Communication and documentation skills
Knowledge of containerization tools such as Docker, Kubernetes, etc.

Technologies:
Zabbix, Nagios
Containerization
AWS, Azure, GCP
Jenkins
SonarQube
Ansible, Chef, Puppet, etc.

Experience:
2-3 years
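The posting above asks for scripting skills and root-cause documentation of incidents. As a hedged sketch of that kind of quick triage scripting, here is a small Python script that counts error lines per minute in a log file; the log path and line format are assumptions for illustration.

```python
# Sketch: find the minutes with the most ERROR lines in a log file.
import re
from collections import Counter

LOG_PATH = "/var/log/app/app.log"  # hypothetical log location
# Assumed line format: "2024-01-01T12:00:03Z ERROR something broke"
LINE_RE = re.compile(r"^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}):\d{2}\S*\s+ERROR\b")

def errors_per_minute(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LINE_RE.match(line)
            if m:
                counts[m.group(1)] += 1  # bucket by minute
    return counts

if __name__ == "__main__":
    for minute, n in errors_per_minute(LOG_PATH).most_common(5):
        print(f"{minute}  {n} errors")
```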
Posted 6 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Team: We're a fast-growing, culture-focused, venture-backed startup dedicated to building products that re-imagine an organization's Business Intelligence environment. As a team, we practice what we preach: we work hard to cultivate an environment in which people feel comfortable bringing their full selves to work every day.

We are seeking a Backend Engineer to join our engineering team and help us develop highly available, reliable, and maintainable APIs and containerized BI DevOps tasks for our customers. The ideal candidate will be responsible for deploying solutions that address common issues in BI environments such as downtime, stale data, unused reports and datasets, duplicate datasets, unendorsed datasets, sensitive data, overconsumption of resources, and unused premium features.

Responsibilities:
Develop and maintain APIs and systems for performing CI/CD tasks related to BI metadata at scale.
Address common issues such as downtime, stale data, unused reports and datasets, duplicate datasets, unendorsed datasets, sensitive data, overconsumption of resources, and unused premium features.
Collaborate with product and internal stakeholders to define project requirements and ensure timelines are met.
Lead projects aimed at improving the BI environment, including migrations, cost savings, performance optimizations, license management, consumption optimization, workflow usage, and warehouse management.
Write clean, high-quality, high-performance, and maintainable code.
Utilize modern DevOps tools for continuous integration, configuration management, container orchestration, monitoring, and logging.
Become an expert on metadata tracking across multiple BI platforms.

Requirements:
Bachelor's degree in Computer Science or a related field
5+ years of relevant work experience
Strong proficiency in Python
Git for version control
AWS experience preferred for cloud infrastructure
Ability to create and iterate upon products based on customer learnings and metrics around user/customer behavior
Excellent communication skills with both technical and non-technical audiences
Extremely organized with excellent attention to detail
Collaborative attitude and desire to contribute outside primary areas of responsibility

Nice To Have:
Previous experience at a B2B SaaS venture-backed start-up
Experience working with large BI environments
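To illustrate the "stale data / unused datasets" checks the posting describes, here is a minimal Python sketch over an invented metadata record shape. Real BI platform APIs differ per vendor; the record fields and thresholds below are assumptions for illustration only.

```python
# Sketch: flag stale and unused datasets from (hypothetical) BI metadata.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class DatasetMeta:  # invented shape; real BI metadata APIs differ per platform
    name: str
    last_refreshed: datetime
    last_viewed: datetime

STALE_AFTER = timedelta(days=7)    # hypothetical freshness policy
UNUSED_AFTER = timedelta(days=90)  # hypothetical usage policy

def audit(datasets: list[DatasetMeta]) -> dict[str, list[str]]:
    now = datetime.now(timezone.utc)
    return {
        "stale": [d.name for d in datasets if now - d.last_refreshed > STALE_AFTER],
        "unused": [d.name for d in datasets if now - d.last_viewed > UNUSED_AFTER],
    }

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    sample = [
        DatasetMeta("sales_daily", now - timedelta(days=1), now - timedelta(days=2)),
        DatasetMeta("legacy_kpis", now - timedelta(days=30), now - timedelta(days=200)),
    ]
    print(audit(sample))
```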
Posted 6 days ago
5.0 years
15 - 20 Lacs
Ahmedabad, Gujarat, India
On-site
Experience: 5.00+ years
Salary: INR 1500000-2000000 / year (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Hybrid (Ahmedabad)
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients - Inferenz)

What do you need for this opportunity?
Must-have skills required: ML model deployment, MLOps, Monitoring

Inferenz is looking for:

Job Description:
Position: Sr. MLOps Engineer
Location: Ahmedabad, Pune
Required Experience: 5+ years of experience
Preferred: Immediate joiners

Job Overview: Building the machine learning production infrastructure (or MLOps) is the biggest challenge most large companies currently face in making the transition to becoming an AI-driven organization. We are looking for a highly skilled MLOps Engineer to join our team. As an MLOps Engineer, you will be responsible for designing, implementing, and maintaining the infrastructure that supports the deployment, monitoring, and scaling of machine learning models in production. You will work closely with data scientists, software engineers, and DevOps teams to ensure seamless integration of machine learning models into our production systems.

The job is NOT for you if:
You don't want to build a career in AI/ML. Becoming an expert in this technology and staying current will require significant self-motivation.
You like the comfort and predictability of working on the same problem or code base for years. The tools, best practices, architectures, and problems are all going through rapid change — you will be expected to learn new skills quickly and adapt.

Key Responsibilities:
Model Deployment: Design and implement scalable, reliable, and secure pipelines for deploying machine learning models to production.
Infrastructure Management: Develop and maintain infrastructure as code (IaC) for managing cloud resources, compute environments, and data storage.
Monitoring and Optimization: Implement monitoring tools to track the performance of models in production, identify issues, and optimize performance.
Collaboration: Work closely with data scientists to understand model requirements and ensure models are production-ready.
Automation: Automate the end-to-end process of training, testing, deploying, and monitoring models.
Continuous Integration/Continuous Deployment (CI/CD): Develop and maintain CI/CD pipelines for machine learning projects.
Version Control: Implement model versioning to manage different iterations of machine learning models.
Security and Governance: Ensure that deployed models and data pipelines are secure and comply with industry regulations.
Documentation: Create and maintain detailed documentation of all processes, tools, and infrastructure.

Qualifications:
5+ years of experience in a similar role (DevOps, DataOps, MLOps, etc.)
Bachelor's or master's degree in computer science, engineering, or a related field
Experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes)
Strong understanding of the machine learning lifecycle, data pipelines, and model serving
Proficiency in programming languages such as Python and shell scripting, and familiarity with ML frameworks (TensorFlow, PyTorch, etc.)
Exposure to deep learning approaches and modeling frameworks (PyTorch, TensorFlow, Keras, etc.)
Experience with CI/CD tools like Jenkins, GitLab CI, or similar
Experience building end-to-end systems as a Platform Engineer, ML DevOps Engineer, or Data Engineer (or equivalent)
Strong software engineering skills in complex, multi-language systems
Comfort with Linux administration
Experience working with cloud computing and database systems
Experience building custom integrations between cloud-based systems using APIs
Experience developing and maintaining ML systems built with open-source tools
Experience developing with containers and Kubernetes in cloud computing environments
Familiarity with one or more data-oriented workflow orchestration frameworks (MLflow, Kubeflow, Airflow, Argo, etc.)
Ability to translate business needs into technical requirements
Strong understanding of software testing, benchmarking, and continuous integration
Exposure to machine learning methodology and best practices
Understanding of regulatory requirements for data privacy and model governance

Preferred Skills:
Excellent problem-solving skills and ability to troubleshoot complex production issues
Strong communication skills and ability to collaborate with cross-functional teams
Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack)
Knowledge of database systems (SQL, NoSQL)
Experience with Generative AI frameworks
Cloud-based or MLOps/DevOps certification (AWS, GCP, or Azure) preferred

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
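One common shape of the "Model Deployment" responsibility above is an HTTP inference service. Here is a minimal, hedged Python sketch using FastAPI; the pickled scikit-learn-style model artifact (model.pkl) and the feature shape are assumptions, not anything specified in the posting.

```python
# Minimal model-serving sketch. Run with: uvicorn serve:app
# Assumes a scikit-learn-compatible model saved as model.pkl (hypothetical).
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-server")

with open("model.pkl", "rb") as f:  # hypothetical artifact path
    model = pickle.load(f)

class Features(BaseModel):
    values: list[float]  # invented feature shape for illustration

@app.post("/predict")
def predict(features: Features) -> dict:
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

@app.get("/healthz")
def healthz() -> dict:
    # A liveness endpoint like this is what CI/CD and monitoring hook into.
    return {"status": "ok"}
```

In a real MLOps pipeline this service would typically be containerized, versioned alongside the model artifact, and rolled out through the CI/CD tooling the posting lists.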
Posted 6 days ago
5.0 - 8.0 years
0 Lacs
Greater Kolkata Area
On-site
ERM is seeking Senior Technical Consultants with deep expertise in systems integration, data migration, middleware technologies, and advanced configuration on leading EHSS platforms like Nabsic. The ideal candidate will bring both technical depth and solution-oriented consulting experience, enabling clients to achieve seamless inter-system operability and digital transformation goals in their Environment, Health, Safety and Sustainability (EHSS) landscape. You will work in an environment that encourages innovation, cross-functional collaboration, and excellence in technical delivery to consistently deliver solutions that are robust, scalable, and meet complex business requirements.

1.1.1 Responsibilities:
Act as the technical lead for EHSS system integrations and configuration initiatives, supporting client needs across a wide range of technical scenarios.
Lead and manage integration design, development, and deployment, enabling systems to communicate via REST/SOAP APIs, middleware, file-based connectors, or other integration methods.
Design and execute data migration strategies, including ETL, bulk data operations, validation, and reconciliation across legacy and new systems.
Support advanced Nabsic configurations including Forms, Rules, Scripts, Approval Processes, and Data Models.
Manage technical discussions with client IT teams, vendors, and implementation partners, ensuring alignment and interoperability across system architectures.
Analyze and troubleshoot integration errors, performance bottlenecks, and deployment issues using appropriate monitoring and logging tools.
Participate in technical workshops, UAT, and solution design reviews, and provide subject matter expertise throughout the implementation lifecycle.
Create and maintain detailed solution design documents, integration architecture diagrams, and configuration specs.
Support EHSS reporting requirements through data extraction, transformation, and interfacing with reporting systems (e.g., Power BI, Tableau).
Collaborate across time zones with functional consultants, developers, and SMEs to deliver high-quality technical solutions on time and within budget.
Drive adherence to SLAs and quality standards during project delivery and ongoing support.
Maintain technical documentation and ensure knowledge transition to support teams and client IT.

1.1.2 Requirements:
Bachelor's Degree in Computer Science, Information Technology, Engineering, or a related technical discipline.
5 to 8 years of relevant technical experience in system implementation, integration, and support within EHSS domains.
Platform Expertise: Enablon, Nabsic (Forms, Rules, Scripts, Workflows), SAP RE-FX
System Integration & Middleware: Strong hands-on experience with REST/SOAP APIs, MuleSoft, Azure Logic Apps, Dell Boomi
Data Migration & ETL: Source mapping, transformation, validation, reconciliation
Dev Tools & Monitoring: Postman, Swagger, Git, JIRA, ServiceNow
Backend & Scripting: JavaScript, Python, SQL, JSON, XML
Frontend Development: Vue.js, React.js, jQuery, HTML5, CSS3, Bootstrap
Security & Auth: OAuth2, SAML, API keys
Project Management: Agile methodologies, cross-functional collaboration
SQL Server & Oracle: Advanced database development, performance tuning, and integration
.NET Framework: Extensive experience with C#, .NET 2.0–8.0, Windows Forms, and Windows Services
Reporting & Analytics: Power BI, Tableau
Exposure to EHSS platforms such as Enablon, Sphera, Cority, Intelex, SAP, Workiva, Salesforce, or Benchmark Gensuite is a plus.
Strong written and verbal communication skills to interact effectively with clients, vendors, and internal teams.
Ability to work independently and manage priorities in a dynamic, fast-paced environment.
Willingness to travel as needed for client engagements.

1.1.3 Relevant Information:
Industry: Sustainability Consulting Services
Functional Area: Technical Delivery & System Integration
Role: Senior Technical Consultant – Integrations & Middleware
Career Level: CL2 / CL3
Number of Vacancies: One
Location: Bengaluru, India

1.1.4 Education:
BE/B.Tech/MCA – preferred in Computer Science, Information Technology, or a related technical stream.
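The "validation and reconciliation" step in the data migration work above can be illustrated with a small Python sketch that compares row counts and per-row checksums between source and target extracts. The file names and the key column are assumptions for illustration; real migrations would reconcile against the platforms' own export formats.

```python
# Sketch: reconcile two CSV extracts by row count and per-row checksum.
import csv
import hashlib

def row_digests(path: str, key: str) -> dict[str, str]:
    digests: dict[str, str] = {}
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            # Canonicalise column order so digests are comparable across systems.
            canonical = "|".join(f"{k}={row[k]}" for k in sorted(row))
            digests[row[key]] = hashlib.sha256(canonical.encode()).hexdigest()
    return digests

def reconcile(source: str, target: str, key: str = "record_id") -> None:
    src, tgt = row_digests(source, key), row_digests(target, key)
    missing = src.keys() - tgt.keys()
    changed = {k for k in src.keys() & tgt.keys() if src[k] != tgt[k]}
    print(f"source rows: {len(src)}, target rows: {len(tgt)}")
    print(f"missing in target: {len(missing)}, checksum mismatches: {len(changed)}")

if __name__ == "__main__":
    reconcile("legacy_export.csv", "new_system_export.csv")  # hypothetical files
```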
Posted 6 days ago
5.0 years
0 Lacs
India
On-site
Required Skills & Experience:
5+ years in detection engineering, threat hunting, or security operations.
Deep expertise with CrowdStrike Falcon Endpoint, Next-Gen SIEM, CS IDP, FUSION, and SOAR platforms.
Strong experience with cloud security (AWS, Azure).
Proficiency in CrowdStrike Query Language (FQL/CQL) and scripting (Python, PowerShell).
Proven ability to troubleshoot CrowdStrike sensor issues, agent health, and platform integration.
Familiarity with MITRE ATT&CK, NIST 800-53, and modern detection frameworks.
Expertise in Cribl and/or similar data optimization tools.

Nice to Have Skills & Experience:
CrowdStrike certifications (e.g., CCFA, CCFH)
Experience with threat intelligence platforms and adversary emulation.
Familiarity with CI/CD pipelines, detection-as-code, and infrastructure-as-code practices.

Job Description:
We are seeking a highly experienced Senior Detection Engineer to lead the development and optimization of advanced threat detection and response capabilities. This role requires deep expertise in CrowdStrike Falcon Endpoint, Next-Gen SIEM, CS Identity Protection (IDP), FUSION, SOAR platforms, and cloud security. The ideal candidate will serve as the subject matter expert (SME) for the entire CrowdStrike ecosystem, including sensor deployment, troubleshooting, automation, and query development.

Key Responsibilities:
Develop and maintain high-fidelity detection rules using CrowdStrike Falcon, Next-Gen SIEM, and FUSION.
Leverage CS IDP to detect identity-based threats and lateral movement.
Write and optimize queries using CrowdStrike Query Language (FQL/CQL) for threat hunting and detection validation.
Build and tune detections for cloud environments (AWS, Azure, GCP) and integrate with cloud-native logging tools.
Function as the primary SME for CrowdStrike, including Falcon, IDP, FUSION, and related modules.
Troubleshoot and resolve sensor deployment issues, agent health problems, and telemetry gaps.
Serve as the escalation point for CrowdStrike-related errors, automation failures, and detection tuning.
Design and implement automated response playbooks using SOAR platforms to reduce dwell time and automate and streamline triage.
Conduct threat modeling for enterprise systems, cloud platforms, and business-critical applications.

Compensation: 30 LPA to 40 LPA. Exact compensation may vary based on several factors, including skills, experience, and education. Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.
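To illustrate the "detection-as-code" practice mentioned above: the sketch below is not CrowdStrike's FQL/CQL, just a generic Python example of a testable detection rule matched against event dicts and tagged with a MITRE ATT&CK technique ID. The event fields and the rule itself are invented for illustration.

```python
# Sketch: a minimal detection-as-code rule over (hypothetical) event dicts.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    technique: str  # MITRE ATT&CK technique ID
    match: Callable[[dict], bool]

RULES = [
    Rule(
        name="encoded-powershell",
        technique="T1059.001",  # Command and Scripting Interpreter: PowerShell
        match=lambda e: e.get("process") == "powershell.exe"
        and "-enc" in e.get("cmdline", "").lower(),
    ),
]

def evaluate(event: dict) -> list[Rule]:
    return [r for r in RULES if r.match(event)]

if __name__ == "__main__":
    event = {"process": "powershell.exe", "cmdline": "powershell.exe -enc SQBFAFgA"}
    for hit in evaluate(event):
        print(f"ALERT {hit.name} ({hit.technique})")
```

Expressing rules this way is what makes them unit-testable and versionable in the CI/CD pipelines the posting references.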
Posted 6 days ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Technology @Dream11: Technology is at the core of everything we do. Our technology team helps us deliver a mobile-first experience across platforms (Android & iOS) while managing over 700 million rpm (requests per minute) at peak with user concurrency of over 16.5 million. We have 190+ microservices written in Java and backed by the Vert.x framework. These work with isolated product features with discrete architectures to cater to the respective use cases. We work with terabytes of data, the infrastructure for which is built on top of Kafka, Redshift, Spark, Druid, etc., and it powers a number of use cases like Machine Learning and Predictive Analytics. Our tech stack is hosted on AWS, with distributed systems like Cassandra, Aerospike, Akka, VoltDB, Ignite, etc.

Your Role:
Analyze requirements and design software solutions based on first design principles (e.g. Object-Oriented Design and Analysis, E-R Modeling)
Build resilient, event-driven microservices using a reactive Java-based framework, SQL and NoSQL datastores, caches, messaging, and big-data processing frameworks
Deploy and configure cloud-native software services on public cloud
Operate and support software services in production based on on-call schedules, using observability tools such as Datadog for logging, alerting, and monitoring

Qualifiers:
3+ years of coding experience with at least one object-oriented programming language, preferably Java, plus relational databases, database modeling (E-R modeling), and SQL
Familiarity with NoSQL databases and caching frameworks preferred
Working experience with messaging frameworks such as Kafka or MQ
Familiarity with object-oriented design patterns
Working experience with AWS or any cloud infrastructure

About Dream Sports: Dream Sports is India's leading sports technology company with 250 million users, housing brands such as Dream11, the world's largest fantasy sports platform, FanCode, a premier sports content & commerce platform, and DreamSetGo, a sports experiences platform. Dream Sports is based in Mumbai and has a workforce of close to 1,000 'Sportans'. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports' vision is to 'Make Sports Better' for fans through the confluence of sports and technology. For more information: https://dreamsports.group/

Dream11 is the world's largest fantasy sports platform with 230 million users playing fantasy cricket, football, basketball & hockey on it. Dream11 is the flagship brand of Dream Sports, India's leading sports technology company, and has partnerships with several national & international sports bodies and cricketers.
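The posting's stack is Java/Vert.x, but the event-driven consumer pattern it describes can be sketched in a few lines of Python with the kafka-python client. The topic name and broker address below are placeholder assumptions, purely for illustration.

```python
# Sketch: a minimal Kafka consumer illustrating the event-driven pattern.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "user-events",                         # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # hypothetical broker
    group_id="demo-consumers",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    # Each microservice would apply its own domain logic here.
    print(f"partition={message.partition} offset={message.offset} value={message.value}")
```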
Posted 6 days ago
0 years
0 Lacs
Pune, Maharashtra, India
Remote
Entity: Finance
Job Family Group: Business Support Group

Job Description: We are a global energy business involved in every aspect of the energy system. We are working towards delivering light, heat, and mobility to millions of people every day. We are one of the very few companies equipped to solve some of the big, complex challenges that matter for the future. We have a real contribution to make to the world's ambition of a low-carbon future. Join us and be part of what we can accomplish together. You can participate in our new ambition to become a net zero company by 2050 or sooner and help the world get to net zero. Would you like to discover how our diverse, hardworking people are leading the way in making energy cleaner and better – and how you can play your part in our world-class team?

Job Purpose: The Customer Service Representative role exists to provide the first and second line of customer support for telephone and written enquiries from external and internal customers and other consumers, in accordance with agreed service levels. The position will ensure all customer-facing queries are answered with the required speed and accuracy and with the maximum level of customer happiness. Customer-facing CSRs are required to have a broad understanding of all customer service processes to enable a high percentage of first-contact resolution, and will continually manage customer expectations through various contact channels. CSRs are the first point of contact for BP telephone-based enquiries.

Key Accountabilities:
Implement day-to-day customer service-related operational tasks to ensure delivery meets customer expectations and is consistent with set process performance indicators, applicable service level agreements, and the customer service function's core values.
Leverage understanding of specific processes/systems and act as the first and second point of contact for any verbal or written enquiries from external customers and consumers, and from internal customers from the BP business and third parties. These customers will include retail fueling sites, branded and unbranded customers, commercial and strategic accounts, and terminals.
Provide customer service via the internet, phone, fax, and email to support activities including:
Account set-up, allocation, and delivery issues.
Order processing and order fulfilment.
Sales order tracking.
Supervising supply outages and reacting accordingly for incoming and existing orders.
Retail marketing programme information, policy, and product fulfilment.
Retail site experience complaints, fuel quality claims, site locator, etc.
Complaint resolution, identification, and management of complaint root causes.
Log, assign, and supervise progress of queries and customer requests from receipt to completion, ensuring data is accurately entered and maintained in all customer service and data collection systems.
Support FBT activities through immediate triage, escalation (high-risk customer issues: financial, legal, reputation), resolution, or logging and forwarding of customer inquiries/issues.

Service Management & Continuous Improvement:
Manage and maintain customer expectations, referencing pre-established service level agreements where applicable.
Make recommendations on existing knowledge base documents and identify knowledge gaps.
Build and maintain positive relationships with both the customer and internal business partners through the provision of timely, accurate, and high-quality service.
Highlight process gaps and inefficiencies; proactively seek solutions to increase productivity and/or the level of service provided.

Education and Experience:
Graduation standard or equivalent
Minimum of 12 months' previous customer service experience in a telephone and/or customer services environment preferred.

Travel Requirement: Negligible travel should be expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is a hybrid of office/remote working.

Skills: Agility core practices, Analytical Thinking, Business process improvement, Commercial Acumen, Communication, Conflict Management, Creativity and Innovation, Customer centric thinking, Customer enquiries, Customer experience, Customer value proposition, Digital fluency, Resilience, Sustainability awareness and action, Understanding Emotions, Workload Prioritization

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status, or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position, and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.
Posted 6 days ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications:
Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 1+ year of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or related field.
2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.

Job Title: MLOps Engineer - ML Platform
Hiring Title: Flexible based on candidate experience – around Staff Engineer level preferred

Job Description: We are seeking a highly skilled and experienced MLOps Engineer to join our team and contribute to the development and maintenance of our ML platform, both on premises and on AWS Cloud. As an MLOps Engineer, you will be responsible for architecting, deploying, and optimizing the ML and data platform that supports training of machine learning models using NVIDIA DGX clusters and the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana. Your expertise in AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial in ensuring the smooth operation and scalability of our ML infrastructure. You will work closely with cross-functional teams, including data scientists, software engineers, and infrastructure specialists. Your expertise in MLOps, DevOps, and knowledge of GPU clusters will be vital in enabling efficient training and deployment of ML models.

Responsibilities Will Include:
Architect, develop, and maintain the ML platform to support training and inference of ML models.
Design and implement scalable and reliable infrastructure solutions for NVIDIA clusters both on premises and on AWS Cloud.
Collaborate with data scientists and software engineers to define requirements and ensure seamless integration of ML and data workflows into the platform.
Optimize the platform's performance and scalability, considering factors such as GPU resource utilization, data ingestion, model training, and deployment.
Monitor and troubleshoot system performance, identifying and resolving issues to ensure the availability and reliability of the ML platform.
Implement and maintain CI/CD pipelines for automated model training, evaluation, and deployment using technologies like ArgoCD and Argo Workflows.
Implement and maintain a monitoring stack using Prometheus and Grafana to ensure the health and performance of the platform.
Manage AWS services including EKS, EC2, VPC, IAM, S3, and EFS to support the platform.
Implement logging and monitoring solutions using AWS CloudWatch and other relevant tools.
Stay updated with the latest advancements in MLOps, distributed computing, and GPU acceleration technologies, and proactively propose improvements to enhance the ML platform.

What Are We Looking For:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters.
Strong expertise in configuring and optimizing NVIDIA DGX clusters for deep learning workloads.
Proficiency with the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana.
Solid programming skills in languages like Python and Go, and experience with relevant ML frameworks (e.g., TensorFlow, PyTorch).
In-depth understanding of distributed computing, parallel computing, and GPU acceleration techniques.
Familiarity with containerization technologies such as Docker and orchestration tools.
Experience with CI/CD pipelines and automation tools for ML workflows (e.g., Jenkins, GitHub, ArgoCD).
Experience with AWS services such as EKS, EC2, VPC, IAM, S3, and EFS.
Experience with AWS logging and monitoring tools.
Strong problem-solving skills and the ability to troubleshoot complex technical issues.
Excellent communication and collaboration skills to work effectively within a cross-functional team.

We Would Love To See:
Experience with training and deploying models.
Knowledge of ML model optimization techniques and memory management on GPUs.
Familiarity with ML-specific data storage and retrieval systems.
Understanding of security and compliance requirements in ML infrastructure.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications, or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
Requisition ID: 3076101
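As a small, hedged illustration of the CloudWatch logging/monitoring responsibility in the posting above, here is a Python (boto3) sketch that publishes a custom GPU-utilisation metric. The namespace, metric name, dimension, and the reading itself are placeholders, not anything Qualcomm specifies.

```python
# Sketch: publish a custom metric to AWS CloudWatch with boto3.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def publish_gpu_utilisation(cluster: str, percent: float) -> None:
    cloudwatch.put_metric_data(
        Namespace="MLPlatform",  # hypothetical namespace
        MetricData=[{
            "MetricName": "GPUUtilization",
            "Dimensions": [{"Name": "Cluster", "Value": cluster}],
            "Value": percent,
            "Unit": "Percent",
        }],
    )

if __name__ == "__main__":
    publish_gpu_utilisation("dgx-a100-pool", 72.5)  # placeholder reading
```

A metric published this way can then drive CloudWatch alarms or be graphed alongside the Prometheus/Grafana stack the posting mentions.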
Posted 6 days ago
The logging job market in India is vibrant and offers a wide range of opportunities for job seekers interested in this field. Logging professionals are in demand across industries such as IT, construction, forestry, and environmental management. If you are considering a career in logging, this article provides insights into the job market, salary range, career progression, and related skills.
Several Indian cities are known for thriving industries where logging professionals are actively recruited.
The average salary range for logging professionals in India varies based on experience and expertise. Entry-level positions typically start at INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 10-15 lakhs per annum.
A typical career path in logging may include roles such as Logging Engineer, Logging Supervisor, Logging Manager, and Logging Director. Professionals may progress from entry-level positions to more senior roles such as Lead Logging Engineer or Logging Consultant.
In addition to logging expertise, employers often look for professionals with skills such as data analysis, problem-solving, project management, and communication skills. Knowledge of industry-specific software and tools may also be beneficial.
As you embark on your journey to explore logging jobs in India, remember to prepare thoroughly for interviews by honing your technical skills and understanding industry best practices. With the right preparation and confidence, you can land a rewarding career in logging that aligns with your professional goals. Good luck!