8.0 - 10.0 years
0 - 1 Lacs
bengaluru
Work from Office
Full Stack Developer experience needed.
Frontend: React, React Native, and TypeScript
Backend: .NET (C#, ASP.NET Core), RESTful API development
Cloud: Azure or GCP
IoT Platforms: AWS IoT, Azure IoT, Google Cloud IoT
Posted 3 days ago
4.0 years
18 Lacs
vellore, tamil nadu, india
Remote
Experience: 4+ years
Salary: INR 18,00,000 / year (based on experience)
Expected Notice Period: 7 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by SuiteSolvers)
(Note: This is a requirement for one of Uplers' clients, an Atlanta-based IT Services and IT Consulting company.)

What do you need for this opportunity?
Must-have skills: Docker, vector databases, fintech, testing and deployment, data science, artificial intelligence (AI), large language model (LLM) APIs, prompt engineering, FastAPI/Flask, cloud

About the Job
SuiteSolvers is a boutique consulting firm that helps mid-market companies transform and scale through smart ERP implementations, financial automation, and operational strategy. We specialize in NetSuite and Acumatica, and we're building tools that make finance and operations more intelligent and less manual. Our clients range from high-growth startups to billion-dollar enterprises. We're hands-on, fast-moving, and results-driven: our work shows up in better decisions, faster closes, cleaner audits, and smarter systems. We're not a bloated agency. We're a small team with high standards. If you like solving real business problems with clean data pipelines, smart automation, and the occasional duct-tape hack that gets the job done, this might be your kind of place. We are looking for a Data Engineer.
Essential Technical Skills

AI/ML (Required)
- 2+ years hands-on experience with LLM APIs (OpenAI, Anthropic, or similar)
- Production deployment of at least one AI system that's currently running in production
- LLM framework experience with LangChain, CrewAI, or AutoGen (any one is sufficient)
- Function calling/tool use: ability to build AI systems that can call external APIs and functions
- Basic prompt engineering: understanding of techniques like Chain-of-Thought and ReAct patterns

Python Development (Required)
- 3+ years Python development with strong fundamentals
- API development using Flask or FastAPI with proper error handling
- Async programming: understanding of async/await patterns for concurrent operations
- Database integration: working with PostgreSQL, MySQL, or similar relational databases
- JSON/REST APIs: consuming and building REST services

Production Systems (Required)
- 2+ years building production software that serves real users
- Error handling and logging: building robust systems that handle failures gracefully
- Basic cloud deployment: experience with AWS, Azure, or GCP (any one platform)
- Git/version control: collaborative development using Git workflows
- Testing fundamentals: unit testing and integration testing practices

Business Process (Basic, Required)
- User requirements: ability to translate business needs into technical solutions
- Data quality: recognizing and handling dirty/inconsistent data
- Exception handling: designing workflows for edge cases and errors

Professional Experience (Minimum)

Software Engineering
- 3+ years total software development experience
- 1+ production AI project: any AI/ML system deployed to production (even simple ones)
- Cross-functional collaboration: worked with non-technical stakeholders
- Problem-solving: demonstrated ability to debug and resolve complex technical issues

Communication & Collaboration
- Technical documentation: ability to write clear technical docs and code comments
- Stakeholder communication: explain technical concepts to business users
- Independent work: ability to work autonomously with minimal supervision
- Learning agility: quickly pick up new technologies and frameworks

Educational Background (Any One)
- Bachelor's degree in Computer Science, Engineering, or a related technical field, OR equivalent experience (demonstrable technical skills through projects/work)
- Coding bootcamp plus 2+ years of professional development experience
- Self-taught with a strong portfolio of production projects
- Technical certifications (AWS, Google Cloud, etc.) plus relevant experience (nice to have)

Demonstrable Skills (Portfolio Requirements)

Must show evidence of:
- One working AI application: GitHub repo or live demo of LLM integration
- Python projects: code samples showing API development and data processing
- Production deployment: any application currently running and serving users
- Problem-solving ability: examples of debugging complex issues or optimizing performance

Nice to Have (Not Required)
- Financial services or fintech experience
- Vector databases (Pinecone, Weaviate) experience
- Docker/containerization knowledge
- Advanced ML/AI education or certifications

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
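The async/await requirement above can be illustrated with a small, self-contained sketch using only the standard library's asyncio; the fetch function and its timings are hypothetical stand-ins for real API calls:

```python
import asyncio

async def fetch_record(record_id: int) -> dict:
    # Hypothetical stand-in for a real API call; sleeps to simulate I/O latency.
    await asyncio.sleep(0.01)
    if record_id < 0:
        raise ValueError(f"invalid record id: {record_id}")
    return {"id": record_id, "status": "ok"}

async def fetch_all(record_ids: list) -> list:
    # gather() runs the coroutines concurrently; return_exceptions=True keeps
    # one failure from cancelling the rest -- the "graceful error handling"
    # the posting asks for.
    results = await asyncio.gather(
        *(fetch_record(rid) for rid in record_ids), return_exceptions=True
    )
    return [r for r in results if isinstance(r, dict)]

records = asyncio.run(fetch_all([1, 2, -3, 4]))
print(len(records))  # the -3 request fails and is filtered out, leaving 3
```

Because the sleeps overlap rather than run back-to-back, the four requests complete in roughly the time of one, which is the point of async/await for I/O-bound work.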
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 3 days ago
4.0 years
18 Lacs
madurai, tamil nadu, india
Remote
Same role as the SuiteSolvers Data Engineer posting above (identical description, requirements, and application steps), listed again for the Madurai location.
Posted 3 days ago
5.0 years
0 Lacs
chennai, tamil nadu, india
On-site
Job Description

The minimum requirements we seek:
- 5+ years of experience in software engineering.
- Bachelor's degree in computer science, computer engineering, or a combination of education and equivalent experience.
- Willingness to collaborate daily with team members.
- A strong curiosity about how to best use technology to amaze and delight our customers.
- 2+ years of experience developing for and deploying to GCP cloud platforms.
- Development experience in at least one item from each of the following categories:
  - Languages: Java / Kotlin / JS / TS / Python / other
  - Frontend frameworks: Angular / React / Vue / other
  - Backend frameworks: Spring / Node / other
- Proven experience understanding, practicing, and advocating for software engineering disciplines from eXtreme Programming (XP), Clean Code, Software Craftsmanship, and Lean, including: paired/extreme programming, test-first/Test-Driven Development (TDD), evolutionary design, and Minimum Viable Product.
- Tooling such as FOSSA, SonarQube, 42Crunch, etc.

Responsibilities
The Software Engineer will be responsible for the development and ongoing support/maintenance of the analytic solutions.
- Product and requirements management: participate in and/or lead the development of requirements, features, user stories, use cases, and test cases. Participate in stand-up operations meetings.
- Author: process and design documents.
- Design/develop/test/deploy: work with the business customer, Product Owner, architects, product designer, software engineers, and Security Controls Champion on solution design, development, and deployment.
- Operations: generate metrics, perform user access authorization, perform password maintenance, and build deployment pipelines.
- Incident, problem, and change/service requests: participate in and/or lead incident, problem, change, and service-request-related activities, including root cause analysis (RCA) and proactive problem management/defect prevention.
Qualifications

Our preferred qualifications:
- Highly effective in working with other technical experts, product managers, UI/UX designers, and business stakeholders.
- Delivered products that include web front-end development: JavaScript, CSS, and frameworks like Angular, React, etc.
- Comfortable with Continuous Integration/Continuous Delivery tools and pipelines, e.g. Tekton, Terraform, Jenkins, Cloud Build, etc.
- Experience with machine learning, mathematical modeling, and data analysis is a plus.
- Experience with CA Agile Central (Rally), backlogs, iterations, user stories, or similar Agile tools.
- Experience in the development of microservices.
- Understanding of fundamental data modeling.
- Strong analytical and problem-solving skills.
Posted 3 days ago
15.0 years
3 - 6 Lacs
gurgaon
On-site
Project Role: Technology Support Engineer
Project Role Description: Resolve incidents and problems across multiple business system components and ensure operational stability. Create and implement Requests for Change (RFC) and update knowledge base articles to support effective troubleshooting. Collaborate with vendors and help service management teams with issue analysis and resolution.
Must-have skills: Cloud Automation DevOps
Good-to-have skills: NA
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: We are looking for an experienced Multi-Cloud FinOps Engineer to lead cloud financial operations across the Azure, AWS, and GCP platforms. The role requires a blend of cloud engineering, cost analysis, and governance skills to improve cost visibility, enforce budget policies, and align cloud spending with business value. You will work closely with finance, DevOps, platform engineering, and procurement teams to drive cost transparency, forecasting, and optimization strategies across the organization.

Roles and responsibilities:
- Analyze cloud usage patterns and identify cost-saving opportunities (e.g., right-sizing, idle resources, reserved instances, autoscaling).
- Configure and manage cost visibility tools (e.g., Azure Cost Management, AWS Cost Explorer, GCP Billing).
- Implement budget alerts, spend tracking dashboards, and anomaly detection workflows.
- Define and enforce cloud cost governance policies, tagging standards, and usage accountability frameworks.
- Collaborate with business units to build chargeback/showback models and cost allocation reporting.
- Partner with procurement to optimize contracting, discount models, and Enterprise Agreements (EA/RIs/SPs).
- Build or manage FinOps tools such as CloudHealth, CloudCheckr, Apptio Cloudability, Yotascale, or native cloud billing APIs.
- Develop scripts and automation pipelines (Python, PowerShell, or Terraform) to remediate cost inefficiencies and enforce policy-as-code.
- Integrate cloud billing data into enterprise reporting platforms like Power BI, Snowflake, or Tableau.
- Support finance and budgeting teams with cloud spend forecasts, budget variance analysis, and unit economics tracking.
- Prepare monthly/quarterly executive reports and cloud cost KPIs.
- Conduct trend analysis to predict future cloud spending and workload shifts.
- Work with security, compliance, and architecture teams to balance cost, performance, and compliance.
- Provide education and workshops for engineering teams on FinOps best practices.
- Serve as an SME during cloud architecture and migration planning to ensure cost-aware decisions.

Professional and technical skills:
- 5+ years of experience in cloud operations, cloud engineering, or FinOps
- Expertise in multi-cloud environments: Azure, AWS, and/or GCP
- Strong hands-on knowledge of cloud billing consoles, APIs, and optimization features
- Experience with FinOps tools (e.g., CloudHealth, Cloudability, Apptio, ProsperOps)
- Scripting experience in Python, PowerShell, or Bash
- Proficiency with Excel, Power BI, or data visualization/reporting tools
- FinOps Certified Practitioner (from the FinOps Foundation)
- Cloud certifications such as: Azure Administrator / Solutions Architect, AWS Certified Cloud Practitioner / Solutions Architect, Google Professional Cloud Architect
- Familiarity with Infrastructure as Code (IaC) for cost enforcement (e.g., Terraform, Bicep)
- Strong analytical, communication, and stakeholder engagement skills
- Attention to detail with a data-driven mindset
- Ability to translate technical usage into financial impacts
- Comfortable working in agile, cross-functional teams

Additional information:
- The candidate should have a minimum of 3 years of experience.
- The position is at our Gurugram office.
- 15 years of full-time education is required.
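As a flavor of the cost-remediation scripting described above, here is a minimal, hedged Python sketch that flags daily spend anomalies against a rolling baseline. The data and the 3-sigma threshold are invented for illustration; a real pipeline would pull figures from the billing APIs named above:

```python
from statistics import mean, stdev

def flag_spend_anomalies(daily_spend, window=7, threshold=3.0):
    """Flag days whose spend deviates more than `threshold` standard
    deviations from the trailing `window`-day baseline.

    Returns a list of (day_index, spend) pairs. Purely illustrative:
    real FinOps tooling would read this data from billing exports.
    """
    anomalies = []
    for i in range(window, len(daily_spend)):
        baseline = daily_spend[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(daily_spend[i] - mu) > threshold * sigma:
            anomalies.append((i, daily_spend[i]))
    return anomalies

# Hypothetical daily spend in USD; day 10 is a runaway workload.
spend = [100, 102, 98, 101, 99, 103, 100, 97, 102, 101, 450, 99]
print(flag_spend_anomalies(spend))  # -> [(10, 450)]
```

The same shape of check, wired to a billing export and an alerting hook, is what "budget alerts and anomaly detection workflows" boil down to in practice.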
Posted 3 days ago
4.0 years
0 Lacs
chennai, tamil nadu, india
Remote
Role Overview: We are seeking skilled Backend Developers to design, build, and maintain efficient, scalable, and secure server-side logic and services. The ideal candidate will have strong expertise in Python, Flask, and Google Cloud Platform (GCP), with experience building APIs, handling databases, and integrating cloud services in production environments.

Required Experience: 4+ years
Location: Chennai; open to remote for strong candidates

Key Responsibilities:
- Collaborate with project teams to understand business requirements and develop efficient, high-quality code.
- Design and implement low-latency, high-availability, performant applications using frameworks such as Flask or FastAPI.
- Integrate multiple data sources and databases into a unified system while ensuring seamless data storage and third-party library/package integration.
- Create scalable and optimized database schemas to support complex business logic and manage large volumes of data.
- Conduct thorough testing using pytest and unittest, debugging applications to ensure they run smoothly.

Required Skills & Qualifications:
- 3+ years of experience as a Python developer, with strong communication skills.
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- In-depth knowledge of Python frameworks such as Flask or FastAPI.
- Strong expertise in cloud technologies; GCP preferred.
- Deep understanding of microservices architecture, multi-tenant architecture, and best practices in Python development.
- Familiarity with serverless architecture and frameworks like GCP Cloud Functions.
- Experience with deployment using Docker, Nginx, and Gunicorn.
- Hands-on experience with SQL and NoSQL databases such as MySQL and Firebase.
- Proficiency with Object-Relational Mappers (ORMs) like SQLAlchemy.
- Demonstrated ability to handle multiple API integrations and write modular, reusable code.
- Strong knowledge of user authentication and authorization mechanisms across multiple systems and environments.
- Familiarity with scalable application design principles and event-driven programming in Python.
- Solid experience in unit testing, debugging, and code optimization.
- Hands-on experience with modern software development methodologies, including Agile and Scrum.
- Experience with CI/CD pipelines and automation tools like Jenkins, GitLab CI, or CircleCI.
- Experience with version control systems.
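The unit-testing expectation above (pytest and unittest) can be sketched with the standard library's unittest alone; `validate_user` is a hypothetical stand-in for the kind of small unit a Flask/FastAPI handler delegates to so it can be tested without a running server:

```python
import unittest

def validate_user(payload: dict) -> dict:
    # Hypothetical request validator; real business logic would live here.
    email = payload.get("email", "")
    if "@" not in email:
        raise ValueError("a valid email is required")
    return {"email": email.strip().lower()}

class ValidateUserTests(unittest.TestCase):
    def test_normalizes_email(self):
        result = validate_user({"email": "  Ada@Example.COM "})
        self.assertEqual(result["email"], "ada@example.com")

    def test_rejects_missing_email(self):
        # Exercising the failure path is half of "thorough testing".
        with self.assertRaises(ValueError):
            validate_user({})

# Run the suite programmatically (pytest or `python -m unittest` would
# discover and run the same TestCase automatically).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ValidateUserTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # -> True
```

pytest can run this file unchanged, which is why teams often list both tools as interchangeable requirements.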
Posted 3 days ago
0 years
0 Lacs
coimbatore, tamil nadu, india
On-site
- Ability to grasp cloud platforms (AWS, Azure, GCP), Kubernetes, and containerization for scalable deployments.
- Basic knowledge of performance testing tools like JMeter, LoadRunner, or similar.
- Good-to-expert proficiency in any of the programming languages Java, Python, C, or C++.
- Ability to analyze system metrics using profiling/monitoring tools like Instana, Dynatrace, Prometheus, and Grafana.

A day in the life of an Infosys Equinox employee: As part of the Infosys Equinox delivery team, your primary role would be to ensure effective design, development, validation, and support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand the client requirements in a detailed manner and translate them into system requirements.

- A clear understanding of HTTP/network protocol concepts, designs, and operations: TCP dump, cookies, sessions, headers, client-server architecture.
- Core strength in Linux and Azure infrastructure provisioning, including VNet, Subnet, Gateway, VM, security groups, MySQL, Blob Storage, Azure Cache, AKS clusters, etc.
- Expertise in automating infrastructure as code using Terraform, Packer, Ansible, shell scripting, and Azure DevOps.
- Expertise with patch management and APM tools like AppDynamics and Instana for monitoring and alerting.
- Knowledge of technologies including Apache Solr, MySQL, Mongo, Zookeeper, RabbitMQ, Pentaho, etc.
- Knowledge of cloud platforms including AWS and GCP is an added advantage.
- Ability to identify and automate recurring tasks for better productivity.
- Ability to understand and implement industry-standard security solutions.
- Experience implementing autoscaling, DR, HA, and multi-region deployments with best practices is an added advantage.
- Ability to work under pressure, managing expectations from various key stakeholders.
You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers. You would be a key contributor to building efficient programs, with the ability to identify bottlenecks and debug hotspots when optimizing performance, continuously learning the latest trends in performance engineering, frameworks, and methodologies.
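The bottleneck-and-hotspot hunting mentioned above can be illustrated with the standard library's cProfile; the slow function here is a contrived stand-in for a real hotspot, not anything from the tools named in the posting:

```python
import cProfile
import io
import pstats

def slow_concat(n: int) -> str:
    # Contrived hotspot: repeated string concatenation is O(n^2).
    out = ""
    for i in range(n):
        out += str(i)
    return out

def fast_concat(n: int) -> str:
    # The usual fix: build once with join(), which is O(n).
    return "".join(str(i) for i in range(n))

# Profile the slow version and capture the report sorted by cumulative time;
# in a real investigation the top entries point at the code to rewrite.
profiler = cProfile.Profile()
profiler.enable()
slow_concat(20_000)
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print("slow_concat" in report)  # the hotspot shows up in the profile
```

The same workflow, pointed at a service under load, is what tools like Instana or Dynatrace automate at production scale.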
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
gurugram, haryana, india
On-site
Snowflake Developer (Immediate Joiner)

Position: Snowflake Developer
Experience: 4 to 8 years
Location: Gurugram / Bangalore
Employment Type: Full-time
Salary: Up to 14 LPA

Key Responsibilities:
- Develop and optimize Snowflake data pipelines and data models
- Work with large datasets to build scalable, high-performance data solutions
- Collaborate with data engineers, analysts, and business stakeholders to gather requirements
- Write efficient, reusable SQL queries and stored procedures
- Ensure data quality, integrity, and compliance with best practices

Requirements:
- 4-8 years of hands-on experience in data engineering
- Strong expertise in Snowflake development
- Proficiency in SQL, performance tuning, and data warehousing concepts
- Experience with ETL/ELT pipelines
- Familiarity with cloud platforms (AWS/Azure/GCP) is a plus
- Strong problem-solving and communication skills
Posted 3 days ago
5.0 - 7.0 years
0 Lacs
chennai, tamil nadu, india
On-site
Job Description

Experience:
- Minimum of 5-7 years of progressive experience in technical project management, leading complex software development, modernization, and hybrid cloud initiatives.
- Proven experience managing projects using Agile (Scrum, Kanban) and/or Waterfall methodologies.
- Experience with large-scale enterprise systems, cloud platforms (e.g., AWS, Azure, GCP), or automotive technologies is highly desirable.
- Ability to collaborate with experienced architects and principal engineers to build a backlog and to plan, align, and execute complex programs.

Technical Skills:
- Good understanding of enterprise architecture as a practice, along with knowledge of its standards and principles.
- Proven ability to translate complex business requirements into technical solutions that align with enterprise architectural standards and long-term technology roadmaps.
- Solid understanding of software development lifecycles (SDLC), modern software engineering practices, and their implications for architectural design and technical debt management.
- Familiarity with enterprise-level technology stacks, including cloud platforms (e.g., AWS, Azure, GCP), distributed systems, integration patterns (APIs, messaging queues), and data architectures.
- Knowledge of cybersecurity principles, data privacy, and compliance standards relevant to enterprise-wide technical solutions.
- Proficiency with project management software (e.g., Jira, Azure DevOps, Microsoft Project, Asana).

Soft Skills:
- Exceptional leadership, team management, and interpersonal skills.
- Strong communication skills, with a knack for translating complex technical concepts for non-technical audiences.
- Good understanding of stakeholder management and the ability to draw consensus among conflicting viewpoints.
- Strong analytical, problem-solving, and decision-making capabilities.
- Ability to manage multiple priorities in a fast-paced, dynamic environment.
Responsibilities

Strategic Alignment & Prioritization:
- Collaborate closely with the Enterprise Architecture team to understand strategic objectives, architectural roadmaps, and key initiatives.
- Facilitate the prioritization process for architectural workstreams and technical debt reduction efforts, ensuring alignment with overall business value and technical strategy.
- Translate architectural vision and requirements into actionable project plans and epics, ensuring clarity and measurability.

EA Initiative Planning & Roadmapping:
- Develop detailed plans for the implementation of architectural patterns, standards, and foundational technology projects, breaking down complex architectural initiatives into manageable phases and deliverables.
- Define scope, resources, timelines, and success metrics for projects driven by or impacting enterprise architecture.
- Proactively identify and manage dependencies between architectural projects and other ongoing development or release trains.

Cross-Functional Alignment & Release Coordination:
- Act as a primary liaison between the Enterprise Architecture team and other product, engineering, and release teams to ensure seamless integration and adoption of architectural standards and solutions.
- Coordinate architectural deliverables with broader release cycles and product roadmaps, ensuring timely availability of architectural guidance and components.
- Facilitate communication and resolve potential conflicts to ensure architectural consistency across different solution domains.

Dashboarding:
- Establish and maintain robust reporting mechanisms and dashboards to track the progress, health, and impact of enterprise architecture initiatives.
- Regularly communicate the status of architectural projects, key milestones, risks, and benefits to the EA team, senior leadership, and relevant stakeholders.
- Develop metrics to measure the effectiveness of architectural decisions and their contribution to technical excellence and business outcomes.

Risk Management & Issue Resolution:
- Proactively identify, assess, and mitigate technical and architectural risks that could impact project delivery or the integrity of the enterprise landscape.
- Facilitate the resolution of complex technical and architectural challenges, escalating critical issues to Enterprise Architecture leadership when necessary.

Process Optimization & Best Practices:
- Contribute to the continuous improvement of enterprise architecture planning, delivery, and governance processes.
- Promote best practices in technical project management, emphasizing architectural soundness, scalability, and maintainability.

Qualifications

Basic qualifications:
- Graduation or higher
- Prior experience in project management and technical manager roles

Tool knowledge:
- JIRA and Confluence
- Good understanding of preparing boards and dashboards
- Good understanding of configuration, workflows, and reporting

Certifications:
- PPM or equivalent (nice to have)
Posted 3 days ago
10.0 years
0 Lacs
hyderabad
Remote
We are a global team of innovators and pioneers dedicated to shaping the future of observability. At New Relic, we build an intelligent platform that empowers companies to thrive in an AI-first world by giving them unparalleled insight into their complex systems. As we continue to expand our global footprint, we're looking for passionate people to join our mission. If you're ready to help the world's best companies optimize their digital applications, we invite you to explore a career with us!

Your opportunity
We're looking for a seasoned software engineer and technical leader to join our core Streaming Platform group as a Principal Engineer. This team is the heart of New Relic's data processing capabilities, building and operating the high-throughput, low-latency pipelines that power our entire product suite. You will be instrumental in shaping the future of how New Relic ingests, processes, and leverages telemetry data at a massive scale. If you're ready for this job, you've spent years designing, building, and operating high-scale streaming data systems. You have deep expertise in technologies like Apache Kafka, Apache Flink, and cloud-native data services. You've faced the challenges of distributed data processing, and have the scars and successes to prove it. You are passionate about building robust, efficient, and elegant solutions to complex data problems.

Being a Principal Engineer at New Relic
At New Relic, Principal Engineers are force multipliers and technical visionaries, not gatekeepers. Your role is to elevate the teams around you, paint a clear picture of the technical future, and ensure everyone has the guidance and tools to get there. You'll help engineers see around corners, make durable architectural decisions, and avoid costly mistakes, empowering them to deliver incredible results. Our playground is one of the largest in the industry.
We operate a multi-cloud environment that processes petabytes of data per day and scans more than 150 billion data points each minute. Your work will directly impact the performance and reliability of the platform that our customers, including more than 50% of the Fortune 100, rely on every second.

What you'll do
- Define architectural vision: define and drive the technical vision and long-term strategy for New Relic's core streaming data pipelines, ensuring they are scalable, reliable, and cost-effective.
- Technical leadership & mentorship: serve as a technical leader and mentor for multiple engineering teams working on the streaming platform. You will guide design, promote best practices in stream processing, and elevate the technical bar for the entire organization.
- Hands-on prototyping & development: engage in hands-on development for critical-path projects, building prototypes to de-risk new technologies and optimizing existing systems for performance or cost.
- Solve hard problems: tackle our most complex technical challenges related to data consistency, fault tolerance, and performance at extreme scale.
- Cross-functional collaboration: partner with product managers, engineering leaders, and other principal engineers to align the platform roadmap with business objectives and the needs of product engineering teams.
- Evangelize and educate: as a distributed organization, clear documentation and communication are paramount. You will create architectural documents, tech talks, and best-practice guides to share knowledge across the company.

What your playground will look like
- One of the largest Apache Kafka deployments in the world, serving as the central nervous system for all New Relic data.
- A sophisticated stream processing environment utilizing Apache Flink and other frameworks to perform real-time data enrichment, aggregation, and analysis.
- A multi-cloud architecture (primarily AWS) leveraging services like Kubernetes (EKS), S3, and other cloud-native technologies.
A polyglot environment with hundreds of services written predominantly in Java and Go.

This role requires

Must-have:
10+ years of software engineering experience, with a significant focus on building and operating high-throughput, low-latency distributed data systems.
Deep, hands-on expertise with stream processing technologies such as Apache Kafka and Apache Flink.
Proven experience designing and deploying large-scale systems on a major cloud platform (AWS, GCP, or Azure).
Strong proficiency in a systems programming language like Java or Go.
Demonstrated ability to provide technical leadership, drive consensus, and mentor engineers across multiple teams.
Excellent written and verbal communication skills, with experience articulating complex technical concepts to diverse audiences.

Bonus points if you have
Experience with data lake technologies and architectures (e.g., S3, Delta Lake, Iceberg).
Contributions to open-source projects in the data streaming or distributed systems space.
Knowledge of observability principles and experience working with telemetry data (metrics, logs, traces).
An advanced degree (MS or PhD) in Computer Science or a related STEM field.

Fostering a diverse, welcoming and inclusive environment is important to us. We work hard to make everyone feel comfortable bringing their best, most authentic selves to work every day. We celebrate our talented Relics’ different backgrounds and abilities, and recognize the different paths they took to reach us – including nontraditional ones. Their experiences and perspectives inspire us to make our products and company the best they can be. We’re looking for people who feel connected to our mission and values, not just candidates who check off all the boxes. If you require a reasonable accommodation to complete any part of the application or recruiting process, please reach out to resume@newrelic.com.
We believe in empowering all Relics to achieve professional and business success through a flexible workforce model. This model allows us to work in a variety of workplaces that best support our success, including fully office-based, fully remote, or hybrid.

Our hiring process

In compliance with applicable law, all persons hired will be required to verify identity and eligibility to work and to complete employment eligibility verification. Note: Our stewardship of the data of thousands of customers means that a criminal background check is required to join New Relic. We will consider qualified applicants with arrest and conviction records based on individual circumstances and in accordance with applicable law including, but not limited to, the San Francisco Fair Chance Ordinance.

Headhunters and recruitment agencies may not submit resumes/CVs through this website or directly to managers. New Relic does not accept unsolicited headhunter and agency resumes, and will not pay fees to any third-party agency or company that does not have a signed agreement with New Relic.

New Relic develops and distributes encryption software and technology that complies with U.S. export controls and licensing requirements. Certain New Relic roles require candidates to pass an export compliance assessment as a condition of employment in any global location. If relevant, we will provide more information later in the application process.

Candidates are evaluated based on qualifications, regardless of race, religion, ethnicity, national origin, sex, sexual orientation, gender expression or identity, age, disability, neurodiversity, veteran or marital status, political viewpoint, or other legally protected characteristics.

Review our Applicant Privacy Notice at https://newrelic.com/termsandconditions/applicant-privacy-policy
Posted 3 days ago
7.0 years
1 - 5 Lacs
hyderābād
Remote
Company Description

It all started in sunny San Diego, California in 2004 when a visionary engineer, Fred Luddy, saw the potential to transform how we work. Fast forward to today — ServiceNow stands as a global market leader, bringing innovative AI-enhanced technology to over 8,100 customers, including 85% of the Fortune 500®. Our intelligent cloud-based platform seamlessly connects people, systems, and processes to empower organizations to find smarter, faster, and better ways to work. But this is just the beginning of our journey. Join us as we pursue our purpose to make the world work better for everyone.

Job Description

ServiceNow is changing the way people work. With a service-orientation toward the activities, tasks and processes that make up day-to-day work life, we help the modern enterprise operate faster and be more scalable than ever before. We’re disruptive. We work hard but try not to take ourselves too seriously. We are highly adaptable and constantly evolving. We are passionate about our product, and we live for our customers. We have high expectations, and a career at ServiceNow means challenging yourself to always be better.

As a Sr. Software Engineer on the ETG Product Operations and Innovation Sustaining Engineering (POISE) team, here is what you get to do in this role.

Responsibilities:

As part of the ETG Product Ops Integrations team, proactively work on resolving L2/L3 support issues for Enterprise Integrations Applications.
Ensure all incidents and requests are tracked and addressed in a timely manner with a sense of urgency, or escalated to the appropriate Engineering teams.
Track key performance metrics using the ETG and DT Ops dashboards (SLAs for response time, resolution time, customer satisfaction) and ensure all SLA metrics are met.
Ensure smooth communication with other teams in DT or ServiceNow on product support needs and goals.
Work closely with Engineering teams to gain an understanding of new feature releases for Integrations, so the Operations Support team can handle issues from day one of a feature release.
Gather and share customer feedback or recurring support issues with Engineering teams for potential feature improvements.
Provide insights and analytics on metrics, recurring issues, and prioritization of new features.
Drive continuous improvement in operational efficiency and effectiveness.

Qualifications

To be successful in this role you have:
7+ years of total experience in building and operationalizing enterprise-grade systems and services
Expert-level proficiency in Java or a similar OO language
Strong experience with RESTful API design, microservices architecture, and database technologies (SQL/NoSQL)
Experience working across multiple technology stacks, including:
Integration platforms: Boomi, SAP PI/PO
Cloud/containerization platforms: Azure, AWS, GCP and Docker, Kubernetes
Data streaming platforms: Kafka, Flink
Web infrastructure: API gateways such as Kong, Azure APIM, Azure App Gateway
Prior experience integrating applications with the ServiceNow platform, or familiarity with the ServiceNow platform, is preferred
Experience with monitoring tools, dashboards, and analytics
Experience with AI/ML and automation in product operations is preferred
Ability to work in a fast-paced and dynamic environment with a sense of urgency towards resolving issues, a growth mindset, and an interest in learning and upskilling
Strong interpersonal skills, a customer-centric attitude, and the ability to deal with cultural diversity
Knowledge of industry best practices in product support and operations.

Additional Information

Work Personas

We approach our distributed world of work with flexibility and trust. Work personas (flexible, remote, or required in office) are categories that are assigned to ServiceNow employees depending on the nature of their work and their assigned work location. Learn more here. To determine eligibility for a work persona, ServiceNow may confirm the distance between your primary residence and the closest ServiceNow office using a third-party service.

Equal Opportunity Employer

ServiceNow is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, national origin or nationality, ancestry, age, disability, gender identity or expression, marital status, veteran status, or any other category protected by law. In addition, all qualified applicants with arrest or conviction records will be considered for employment in accordance with legal requirements.

Accommodations

We strive to create an accessible and inclusive experience for all candidates. If you require a reasonable accommodation to complete any part of the application process, or are unable to use this online application and need an alternative method to apply, please contact globaltalentss@servicenow.com for assistance.

Export Control Regulations

For positions requiring access to controlled technology subject to export control regulations, including the U.S. Export Administration Regulations (EAR), ServiceNow may be required to obtain export control approval from government authorities for certain individuals. All employment is contingent upon ServiceNow obtaining any export license or other approval that may be required by relevant export control authorities.

From Fortune. ©2025 Fortune Media IP Limited. All rights reserved. Used under license.
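The SLA tracking described in the responsibilities above (response time, resolution time, compliance percentages) reduces to simple arithmetic over incident records. A minimal Python sketch, with invented incident data and thresholds purely for illustration:

```python
def sla_compliance(incidents, response_sla_min, resolution_sla_min):
    """Compute the share of incidents meeting response- and resolution-time
    SLAs, the kind of metric an operations dashboard tracks."""
    n = len(incidents)
    met_response = sum(1 for i in incidents if i["response_min"] <= response_sla_min)
    met_resolution = sum(1 for i in incidents if i["resolution_min"] <= resolution_sla_min)
    return {"response_pct": round(100 * met_response / n, 1),
            "resolution_pct": round(100 * met_resolution / n, 1)}

# Hypothetical incident records (minutes to first response / to resolution):
incidents = [
    {"id": "INC1", "response_min": 10, "resolution_min": 200},
    {"id": "INC2", "response_min": 45, "resolution_min": 500},
    {"id": "INC3", "response_min": 20, "resolution_min": 230},
    {"id": "INC4", "response_min": 5,  "resolution_min": 720},
]
print(sla_compliance(incidents, response_sla_min=30, resolution_sla_min=240))
```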
Posted 3 days ago
10.0 years
4 - 9 Lacs
hyderābād
On-site
CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide.

Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc., a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing and progressive business unit of CACI Ltd. CACI Ltd currently has over 2,000 intelligent professionals and is now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, CACI India has grown considerably to become one of the UK's most well-respected technology centres.

Role Overview:

We’re looking for a hands-on Technical Project Manager who can own the delivery of complex, cloud-native products in a fast-growing SaaS environment. You’ll partner with Engineering, Product, UX, DevOps and Business stakeholders to plan, execute and launch features that delight customers and scale globally.

Key Responsibilities:

End-to-End Project Ownership – Define scope, timelines, deliverables and success metrics for multiple concurrent product development streams.
Agile Leadership – Champion Scrum/Kanban practices; facilitate sprint planning, stand-ups, retrospectives and demos.
Cross-Functional Coordination – Align Engineering, QA, UX, Product, DevOps & Security teams, ensuring shared understanding of goals and dependencies.
Stakeholder Communication – Provide clear, data-driven status updates to leadership and customers; manage expectations and negotiate trade-offs.
Risk & Issue Management – Identify technical and delivery risks early, create mitigation plans and drive resolution.
Quality & Release Management – Enforce the definition of done; oversee test coverage, CI/CD pipelines and production release readiness.
Budget & Resource Management – Forecast and track project budgets, resource allocation and vendor engagement.
Process Improvement – Analyse sprint metrics (velocity, burndown, DORA, OKRs) and implement continuous improvement initiatives.

Must-Have Qualifications:

10+ years total experience in software development & delivery, with 3+ years as a Technical Project/Program Manager.
Proven track record launching B2B/B2C SaaS products or cloud-based platforms end to end.
Solid foundation in software engineering (B.E./B.Tech. in CS/IT or equivalent).
Expert knowledge of Agile/Scrum frameworks and tools (Jira, Azure DevOps, etc.).
Working familiarity with microservices, REST APIs, CI/CD pipelines, and public cloud (AWS, Azure or GCP).
Strong analytical mindset; comfortable using data to drive decisions and report progress.
Exceptional written & verbal communication; able to influence technical and non-technical audiences.

Preferred Skills & Certifications:

PMP, PRINCE2, PMI-ACP, CSM or equivalent agile/project management certification.
Experience scaling multi-tenant SaaS platforms, subscription billing, and usage-based pricing models.
Exposure to DevOps/SRE practices, Infrastructure as Code, and security compliance (SOC 2, ISO 27001, GDPR/DPDP).
Prior success in a high-growth startup or global scale-up environment.

More about the Opportunity

The Technical Project Manager role is an excellent opportunity, and CACI Services India rewards its staff well with a competitive salary and impressive benefits package, which includes:

Learning: Budget for conferences, training courses and other materials
Health Benefits: Family plan with 4 children and parents covered
Future You: Matched pension and health care package

We understand the importance of getting to know your colleagues.
Company meetings are held every quarter, and a training/work brief weekend is held once a year, amongst many other social events.

CACI is an equal opportunities employer. Therefore, we embrace diversity and are committed to a working environment where no one will be treated less favourably on the grounds of their sex, race, disability, sexual orientation, religion, belief or age. We have a Diversity & Inclusion Steering Group, and we always welcome new people with fresh perspectives from any background to join the group.

An inclusive and equitable environment enables us to draw on expertise and unique experiences and bring out the best in each other. We champion diversity, inclusion and wellbeing, and we are supportive of Veterans and people from a military background. We believe that by embracing diverse experiences and backgrounds, we can collaborate to create better outcomes for our people, our customers and our society.
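The sprint metrics named in the role above (velocity, burndown) come down to straightforward bookkeeping over story points. A minimal Python sketch with invented numbers; real teams would read these from Jira or Azure DevOps:

```python
def sprint_metrics(committed_points, completed_per_day):
    """Derive velocity (total points completed) and ideal-vs-actual
    burndown lines from daily completion figures."""
    days = len(completed_per_day)
    # Ideal burndown: a straight line from committed points to zero.
    ideal = [round(committed_points - committed_points * d / days, 1)
             for d in range(days + 1)]
    remaining, actual = committed_points, [committed_points]
    for done in completed_per_day:
        remaining -= done
        actual.append(remaining)
    return {"velocity": committed_points - remaining,
            "ideal": ideal, "actual": actual}

# A 40-point sprint over 5 days (day 3 was blocked, hence 0):
m = sprint_metrics(40, [5, 8, 0, 10, 7])
print(m["velocity"], m["actual"])
```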
Posted 3 days ago
6.0 years
4 - 8 Lacs
hyderābād
On-site
Job Description – Senior Data Engineer (Azure Focused)

Experience Level: 6-10 Yrs
Location: Hyderabad

About the Role

We are seeking Senior Data Engineers with strong expertise in Microsoft Azure and modern data engineering practices. In this role, you will design, build, and optimize scalable data pipelines and solutions that empower business teams with timely, trusted, and actionable insights. You will play a key role in shaping our data ecosystem, ensuring performance, reliability, and governance across the data lifecycle.

Key Responsibilities

Design, build, and manage modern data pipelines on Azure using services such as Azure Data Factory, Synapse, Databricks, and Azure Data Lake.
Develop scalable ETL/ELT frameworks and integrate structured/unstructured data from diverse sources.
Implement data quality, security, and governance best practices across pipelines.
Optimize data workflows for performance, scalability, and cost efficiency.
Collaborate with data architects, data scientists, and business analysts to enable advanced analytics and AI/ML use cases.
Leverage CI/CD and DevOps practices to automate deployment and monitoring of data solutions.
Mentor junior engineers, share best practices, and contribute to a culture of continuous improvement.

Required Skills & Experience

6–10 years of experience in Data Engineering, with at least 3+ years on the Azure ecosystem.
Hands-on expertise in:
Azure Data Factory, Azure Synapse, Azure Databricks, Azure Data Lake, Azure SQL
Data modeling, SQL, Python, PySpark, and Delta Lake
CI/CD (Azure DevOps/GitHub), Infrastructure as Code (Terraform/ARM/Bicep)
Strong understanding of data governance, data security, and performance optimization.
Experience with real-time data streaming (Kafka/Event Hubs/Stream Analytics) is a plus.
Familiarity with modern data architectures (Data Lakehouse, Medallion Architecture, Lakehouse with Delta).
Excellent problem-solving skills, ability to work in agile, cross-functional teams, and strong communication skills.

Good to Have

Exposure to AI/ML pipelines and MLOps integration.
Knowledge of Power BI/DAX for the data consumption layer.
Experience with multi-cloud data integration (AWS/GCP + Azure).
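The Medallion Architecture mentioned above layers raw (bronze), cleansed (silver), and business-level (gold) data. A toy pure-Python sketch of the idea, with an invented order schema; a real implementation would use PySpark and Delta tables rather than Python dicts:

```python
def bronze_to_silver(raw_rows):
    """Silver layer: drop malformed rows and deduplicate on order_id,
    keeping the most recently updated record."""
    latest = {}
    for row in raw_rows:
        if row.get("order_id") and row.get("amount") is not None:
            prev = latest.get(row["order_id"])
            if prev is None or row["updated_at"] >= prev["updated_at"]:
                latest[row["order_id"]] = row
    return list(latest.values())

def silver_to_gold(silver_rows):
    """Gold layer: a business-level aggregate, revenue per region."""
    revenue = {}
    for row in silver_rows:
        revenue[row["region"]] = revenue.get(row["region"], 0) + row["amount"]
    return revenue

raw = [
    {"order_id": "A1", "region": "south", "amount": 100, "updated_at": 1},
    {"order_id": "A1", "region": "south", "amount": 120, "updated_at": 2},  # later revision wins
    {"order_id": "B2", "region": "north", "amount": 80,  "updated_at": 1},
    {"order_id": None, "region": "north", "amount": 50,  "updated_at": 1},  # malformed, dropped
]
print(silver_to_gold(bronze_to_silver(raw)))
```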
Posted 3 days ago
4.0 years
18 Lacs
faridabad, haryana, india
Remote
Experience: 4.00+ years
Salary: INR 1800000.00 / year (based on experience)
Expected Notice Period: 7 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Suite Solvers)

(*Note: This is a requirement for one of Uplers' clients - An Atlanta based IT Services and IT Consulting Company)

What do you need for this opportunity?

Must have skills required: Docker, Vector Database, Fintech, Testing and deployment, Data Science, Artificial Intelligence (AI), Large Language Model APIs (LLM APIs), LLM APIs, Large Language Model (LLM), Prompt Engineering, FastAPI / Flask, Cloud

An Atlanta based IT Services and IT Consulting Company is Looking for:

About The Job

SuiteSolvers is a boutique consulting firm that helps mid-market companies transform and scale through smart ERP implementations, financial automation, and operational strategy. We specialize in NetSuite and Acumatica, and we’re building tools that make finance and operations more intelligent and less manual. Our clients range from high-growth startups to billion-dollar enterprises. We’re hands-on, fast-moving, and results-driven—our work shows up in better decisions, faster closes, cleaner audits, and smarter systems. We’re not a bloated agency. We’re a small team with high standards. If you like solving real business problems with clean data pipelines, smart automation, and the occasional duct-tape hack that gets the job done—this might be your kind of place.

We are looking for a Data Engineer.
Essential Technical Skills

AI/ML (Required)
2+ years hands-on experience with LLM APIs (OpenAI, Anthropic, or similar)
Production deployment of at least one AI system that is currently running in production
LLM framework experience with LangChain, CrewAI, or AutoGen (any one is sufficient)
Function calling/tool use - ability to build AI systems that can call external APIs and functions
Basic prompt engineering - understanding of techniques like Chain-of-Thought and ReAct patterns

Python Development (Required)
3+ years Python development with strong fundamentals
API development using Flask or FastAPI with proper error handling
Async programming - understanding of async/await patterns for concurrent operations
Database integration - working with PostgreSQL, MySQL, or similar relational databases
JSON/REST APIs - consuming and building REST services

Production Systems (Required)
2+ years building production software that serves real users
Error handling and logging - building robust systems that handle failures gracefully
Basic cloud deployment - experience with AWS, Azure, or GCP (any one platform)
Git/version control - collaborative development using Git workflows
Testing fundamentals - unit testing and integration testing practices

Business Process (Basic Required)
User requirements - ability to translate business needs into technical solutions
Data quality - recognizing and handling dirty/inconsistent data
Exception handling - designing workflows for edge cases and errors

Professional Experience (Minimum)

Software Engineering
3+ years total software development experience
1+ production AI project - any AI/ML system deployed to production (even simple ones)
Cross-functional collaboration - worked with non-technical stakeholders
Problem-solving - demonstrated ability to debug and resolve complex technical issues

Communication & Collaboration
Technical documentation - ability to write clear technical docs and code comments
Stakeholder communication - explain technical concepts to business users
Independent work - ability to work autonomously with minimal supervision
Learning agility - quickly pick up new technologies and frameworks

Educational Background (Any One)

Formal Education
Bachelor's degree in Computer Science, Engineering, or related technical field
OR equivalent experience - demonstrable technical skills through projects/work

Alternative Paths
Coding bootcamp + 2+ years professional development experience
Self-taught with strong portfolio of production projects
Technical certifications (AWS, Google Cloud, etc.) + relevant experience [nice to have]

Demonstrable Skills (Portfolio Requirements)

Must Show Evidence Of
One working AI application - GitHub repo or live demo of LLM integration
Python projects - code samples showing API development and data processing
Production deployment - any application currently running and serving users
Problem-solving ability - examples of debugging complex issues or optimizing performance

Nice to Have (Not Required)
Financial services or fintech experience
Vector databases (Pinecone, Weaviate) experience
Docker/containerization knowledge
Advanced ML/AI education or certifications

How to apply for this opportunity?

Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!

About Uplers:

Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well).
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
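The function-calling/tool-use skill in the SuiteSolvers requirements above follows a common pattern: the model emits a structured tool request, and the application routes it to a real function and returns the result for the next turn. A hedged Python sketch with a mocked model message and an invented tool registry; no real LLM API is called, and `get_invoice_total` is a hypothetical tool:

```python
import json

# Hypothetical tool registry; in production these would call real APIs.
TOOLS = {
    "get_invoice_total": lambda customer: {"customer": customer, "total": 1250.0},
}

def dispatch_tool_call(model_message):
    """Route a model's function-call request (a JSON string naming a tool
    and its arguments) to a local tool, returning a result message the
    model could consume on its next turn."""
    call = json.loads(model_message)  # e.g. {"tool": ..., "arguments": {...}}
    fn = TOOLS.get(call["tool"])
    if fn is None:
        return {"role": "tool", "error": f"unknown tool {call['tool']!r}"}
    result = fn(**call["arguments"])
    return {"role": "tool", "content": result}

# Simulated model output requesting a tool invocation:
msg = '{"tool": "get_invoice_total", "arguments": {"customer": "ACME"}}'
print(dispatch_tool_call(msg))
```

Real providers wrap this loop in their own schemas (tool definitions, call IDs), but the dispatch logic is the same shape.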
Posted 3 days ago
8.0 - 14.0 years
0 Lacs
hyderābād
On-site
Are you seeking an environment where you can drive innovation? Does the prospect of working with top engineering talent get you charged up? Apple is a place where extraordinary people gather to do their best work. Together we create products and experiences people once couldn’t have imagined - and now can’t imagine living without.

Apple’s IS&T manages key business and technical infrastructure at Apple - how online orders are placed, the customer experience with technology in our retail stores, how much network capacity we need around the world and much more. The SAP Global Systems team within IS&T runs the Operations and Financial transactional platform that powers all of Apple's functions like Sales, Manufacturing, Distribution and Financials. Think platform-as-product! Our team delivers great developer experiences to our Program, Project and Development teams through a curated set of tools, capabilities and processes offered through our Internal Developer Platform. We automate infrastructure operations, support complex service abstractions, build flexible workflows and curate a frictionless ecosystem that enables end-to-end collaboration to help drive productivity and engineering velocity. This is a tremendous opportunity for someone who has the skill to own initiatives and a passion to work on a highly integrated global solution platform! Join us in crafting solutions that do not yet exist!

Description

As a member of the Cloud Platform Engineering Team, you would architect and advocate for SRE principles across our engineering teams. You would develop scalable systems, foster operational excellence, and mentor a team of SRE and DevOps engineers.

RESPONSIBILITIES:
- Build up, lead and improve existing processes to provide 24x7 operational response for applications in public cloud platforms.
- Maintain services once they are live by setting up monitoring, alerting and measuring availability, latency, and overall system health.
- Own and review work for accuracy, quality, application performance and completeness.
- Review release readiness through activities such as system design consulting, reviewing all observability and monitoring, capacity planning, and launch reviews.
- Understand processes to improve incident coordination among Apple teams.
- Keep up to date with the newest technologies and tools and voice support for their value with the development teams.
- Understand the core principles of DevSecOps.
- Partner with architects and engineers to design and implement automation, operations, and support solutions.
- Strive for top-quality results and continuously look for ways to improve and enhance platform reliability, performance, and security.
- Partner management: design and implement end-to-end observability frameworks using tools such as Prometheus, Grafana, CloudWatch, ELK/EFK, and OpenTelemetry, ensuring service reliability through dashboard design, SLOs/SLIs, and alerting systems.

Minimum Qualifications

8 - 14 years of experience with a track record of building and leading Cloud Native SRE and Operations for AWS or GCP hyperscalers.
Solid experience supporting customer-facing applications in a 24x7 uptime environment of distributed systems.
Bachelor's degree or equivalent experience in Computer Science, Engineering or other relevant major.
Collaborate with security, development, and infrastructure teams to implement a Zero Trust Architecture, handle secrets securely, and establish secure CI/CD pipelines.

Preferred Qualifications

Expertise in SRE principles, production-scale system design, and DevOps practices.
Design/architect solutions for multi-cloud environments and on-prem systems.
Solid understanding of core cloud services such as IAM, EC2/GCE, RDS/CloudSQL, EKS/GKE, CloudWatch/Cloud Monitoring, S3/GCS, etc.
Understand complex landscape architectures.
Have working knowledge of on-prem and cloud-based hybrid architectures and infrastructure concepts such as Regions, Availability Zones, VPCs/Subnets, load balancers, and API gateways.
Good understanding of common authentication schemes, certificates, secrets and protocols.
Implement infrastructure-as-code practices applying tools such as Terraform, Helm, or Pulumi.
Scripting and/or coding skills needed for automation, triaging and troubleshooting; experience in any scripting or programming language such as Python, Go, or Java.
Experience with planning and designing disaster recovery for BCP and non-BCP applications.
Core knowledge of standard security and governance processes.
Expertise handling production incidents, with experience working towards resolution and collaborator communication during incidents.
Track record of improving service reliability and efficiency.
Ability to implement and coordinate telemetry using monitoring and observability tools.
Adept at prioritizing multiple issues in a high-stress environment.
Good experience in designing and improving response processes.
Mentor and foster professional development of junior SREs, thereby contributing to operational excellence across diverse environments.
Automation focus for operational efficiency - designing and implementing automation processes for repeatable and consistent service deployment.
A solid sense of ownership, critical thinking and interpersonal skills to work effectively across diverse and multi-functional teams.
Certifications like AWS Solutions Architect, AWS DevOps Professional, or GCP Professional Architect are a plus.
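The SLOs/SLIs and alerting mentioned in this role rest on error-budget arithmetic: an availability target implies a fixed allowance of failures per window. A minimal Python sketch, with an invented traffic window and a 99.9% target purely for illustration:

```python
def error_budget(total_requests, failed_requests, slo_target=0.999):
    """Report how much of an availability SLO's error budget a window
    of traffic has consumed, and whether the SLO still holds."""
    allowed_failures = total_requests * (1 - slo_target)
    consumed = failed_requests / allowed_failures if allowed_failures else float("inf")
    return {"allowed_failures": round(allowed_failures, 2),
            "budget_consumed_pct": round(100 * consumed, 1),
            "slo_met": failed_requests <= allowed_failures}

# One million requests with 400 failures against a 99.9% target:
print(error_budget(1_000_000, 400))
```

Burn-rate alerts are then just this ratio measured over shorter windows (e.g. the last hour versus the whole SLO period).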
Posted 3 days ago
0 years
0 Lacs
chennai, tamil nadu, india
On-site
Job Description

Enthusiastic and self-motivated, with the ability to execute Supply Chain Analytics projects proactively
Meticulous attention to detail, with an overall passion for continuous improvement
Innovative and creative, with a logical and methodical approach to problem solving
Credible and articulate, with excellent communication, presentation, and interpersonal skills

Responsibilities

Execute high-impact business projects with time-bound and effective project management, leveraging tools like Rally and Jira
Gather business requirements, convert them into analytical problems, and identify relevant tools, techniques, and an overall framework to provide solutions
Use statistical methodologies leveraging analytical tools to support different business initiatives
Continually enhance statistical techniques and their applications in solving business objectives
Compile and analyze the results from modeling output and translate them into actionable insights through dashboards
Acquire and share deep knowledge of data utilized by the team and its business partners
Participate in global conference calls and meetings as needed and manage multiple customer interfaces
Execute analytics special studies and ad hoc analyses with a quick turnaround time
Evaluate new tools and technologies to improve analytical processes

Efforts will focus on the following key areas:

Domain – Supply Chain Analytics
Various classical statistical techniques such as Regression, Multivariate Analysis, etc.
Data Mining & Text Mining, NLP, Gen AI
Time-series-based forecasting modeling
Experience with SQL and data warehousing (e.g. GCP/Hadoop/Teradata/Oracle/DB2)
Experience using tools in BI, ETL, and Reporting/Visualization/Dashboards
Programming experience in languages like Python
Exposure to Big Data based analytical solutions
Good soft skills
Good analysis and problem-solving skills
Ability to get insights from data, provide visualization, and storytelling
Flexibility to explore and work with newer technologies and tools.
Ability to learn quickly, adapt, and set direction when faced with ambiguous scenarios.
Excellent collaborative communication and team skills.

Qualifications

Bachelor's/Master's or other quantitative degree
Candidates should have significant hands-on experience with Analytics projects
Experience in Python, SQL, GCP or any other cloud platform highly desired
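As a flavour of the time-series forecasting work named above, here is a minimal ordinary-least-squares trend forecast in pure Python. The demand figures are invented, and production work would use dedicated forecasting libraries; this is only the baseline such models are compared against:

```python
def linear_trend_forecast(series, steps_ahead):
    """Fit y = a + b*t by ordinary least squares over time index t,
    then extrapolate the fitted line `steps_ahead` periods forward."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    # Slope: covariance of (t, y) over variance of t.
    b = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series)) / \
        sum((t - t_mean) ** 2 for t in range(n))
    a = y_mean - b * t_mean
    return [a + b * (n - 1 + h) for h in range(1, steps_ahead + 1)]

demand = [100, 110, 120, 130]  # perfectly linear, for illustration
print(linear_trend_forecast(demand, 2))
```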
Posted 3 days ago
5.0 years
0 Lacs
hyderābād
On-site
Job Description

Overview

We are seeking a skilled Associate Manager – AIOps & MLOps Operations to support and enhance the automation, scalability, and reliability of AI/ML operations across the enterprise. This role requires a solid understanding of AI-driven observability, machine learning pipeline automation, cloud-based AI/ML platforms, and operational excellence. The ideal candidate will assist in deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to improve system performance, minimize downtime, and enhance decision-making with real-time AI-driven insights.

Support and maintain AIOps and MLOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy.
Assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency.
Contribute to developing governance models and execution roadmaps to drive efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments.
Ensure seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise.
Collaborate with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms.
Assist in managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement.
Support Data & Analytics technology transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo Data Estate.

Responsibilities

Support the implementation of AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting.
Assist in deploying Azure-based observability solutions (Azure Monitor, Application Insights, Azure Synapse for log analytics, and Azure Data Explorer) to enhance real-time system performance monitoring.
Enable AI-driven anomaly detection and root cause analysis (RCA) by collaborating with data science teams using Azure Machine Learning (Azure ML) and AI-powered log analytics.
Contribute to developing self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate to proactively resolve system issues.
Support ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models.
Assist in deploying scalable ML models with Azure Kubernetes Service (AKS), Azure Machine Learning Compute, and Azure Container Instances.
Automate feature engineering, model versioning, and drift detection using Azure ML Pipelines and MLflow.
Optimize ML workflows with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data preparation and ETL/ELT automation.
Implement basic monitoring and explainability for ML models using Azure Responsible AI Dashboard and InterpretML.
Collaborate with Data Science, DevOps, CloudOps, and SRE teams to align AIOps/MLOps strategies with enterprise IT goals.
Work closely with business stakeholders and IT leadership to implement AI-driven insights and automation to enhance operational decision-making.
Track and report AI/ML operational KPIs, such as model accuracy, latency, and infrastructure efficiency.
Assist in coordinating with cross-functional teams to maintain system performance and ensure operational resilience.
Support the implementation of AI ethics, bias mitigation, and responsible AI practices using Azure Responsible AI Toolkits.
Ensure adherence to Azure Information Protection (AIP), Role-Based Access Control (RBAC), and data security policies.
Assist in developing risk management strategies for AI-driven operational automation in Azure environments.
Prepare and present program updates, risk assessments, and AIOps/MLOps maturity progress to stakeholders as needed. Support efforts to attract and build a diverse, high-performing team to meet current and future business objectives. Help remove barriers to agility and enable the team to adapt quickly to shifting priorities without losing productivity. Contribute to developing the appropriate organizational structure, resource plans, and culture to support business goals. Leverage technical and operational expertise in cloud and high-performance computing to understand business requirements and earn trust with stakeholders. Qualifications 5+ years of technology work experience in a global organization, preferably in CPG or a similar industry. 5+ years of experience in the Data & Analytics field, with exposure to AI/ML operations and cloud-based platforms. 5+ years of experience working within cross-functional IT or data operations teams. 2+ years of experience in a leadership or team coordination role within an operational or support environment. Experience in AI/ML pipeline operations, observability, and automation across platforms such as Azure, AWS, and GCP. Excellent Communication: Ability to convey technical concepts to diverse audiences and empathize with stakeholders while maintaining confidence. Customer-Centric Approach: Strong focus on delivering the right customer experience by advocating for customer needs and ensuring issue resolution. Problem Ownership & Accountability: Proactive mindset to take ownership, drive outcomes, and ensure customer satisfaction. Growth Mindset: Willingness and ability to adapt and learn new technologies and methodologies in a fast-paced, evolving environment. Operational Excellence: Experience in managing and improving large-scale operational services with a focus on scalability and reliability. Site Reliability & Automation: Understanding of SRE principles, automated remediation, and operational efficiencies. 
Cross-Functional Collaboration: Ability to build strong relationships with internal and external stakeholders through trust and collaboration. Familiarity with CI/CD processes, data pipeline management, and self-healing automation frameworks. Strong understanding of data acquisition, data catalogs, data standards, and data management tools. Knowledge of master data management concepts, data governance, and analytics.
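For candidates gauging what the drift-detection duty above involves, here is a minimal, tool-agnostic sketch. It computes the Population Stability Index (PSI), a common drift metric; the data, bin count, and threshold are illustrative, and a production setup would use the posting's actual stack (Azure ML Pipelines, MLflow) rather than this hand-rolled version.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live feature sample.
    Values above roughly 0.2 are commonly treated as significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def frac(data, i):
        left = lo + i * width
        right = left + width if i < bins - 1 else float("inf")
        n = sum(left <= x < right for x in data)
        return max(n / len(data), 1e-6)  # floor avoids log(0)

    return sum(
        (frac(actual, i) - frac(expected, i)) * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

baseline = [0.1 * i for i in range(100)]          # training-time feature sample
live_ok = [0.1 * i for i in range(100)]           # same distribution
live_shifted = [0.1 * i + 5 for i in range(100)]  # shifted distribution

print(psi(baseline, live_ok))       # near 0: no drift
print(psi(baseline, live_shifted))  # large: drift alert would fire
```

In practice a scheduled pipeline would compute this per feature and raise an alert (for example via Azure Monitor) when the score crosses the agreed threshold.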
Posted 3 days ago
5.0 years
2 - 4 Lacs
hyderābād
On-site
About this role:
Wells Fargo is seeking a Lead Software Engineer. We believe in the power of working together because great ideas can come from anyone. Through collaboration, any employee can have an impact and make a difference for the entire company. Explore opportunities with us for a career in a supportive environment where you can learn and grow.

In this role, you will:
- Lead complex technology initiatives, including those that are companywide with broad impact
- Act as a key participant in developing standards and companywide best practices for engineering complex, large-scale technology solutions across technology engineering disciplines
- Design, code, test, debug, and document for projects and programs
- Review and analyze complex, large-scale technology solutions for tactical and strategic business objectives, the enterprise technological environment, and technical challenges that require in-depth evaluation of multiple factors, including intangibles or unprecedented technical factors
- Make decisions in developing standard and companywide best practices for engineering and technology solutions, requiring an understanding of industry best practices and new technologies, influencing and leading the technology team to meet deliverables and drive new initiatives
- Collaborate and consult with key technical experts, the senior technology team, and external industry groups to resolve complex technical issues and achieve goals
- Lead projects and teams, or serve as a peer mentor

Required Qualifications:
- 5+ years of software engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Additional Required Qualifications:
- Hands-on experience with Harness for CI/CD management
- Experience handling Python projects in an OCP environment
- Proven track record of building automation solutions using Python and PowerShell
- Exposure to AI/ML model integration or the use of LLMs in engineering tools or automation workflows
- Experience with cloud services and infrastructure automation (GCP, OCP)
- Familiarity with containerization (Docker, Kubernetes) and monitoring tools

Desired Qualifications:
- B.Tech or equivalent educational qualification
- 5 years of experience on AI/ML development projects using Python

Job Expectations:
- Design and develop AI-driven automation solutions
- Evaluate and adopt appropriate AI/ML models or LLMs to automate decision making or streamline manual engineering tasks
- Develop solutions using Python, PowerShell, and Bash, following the enterprise-standard CI/CD pipeline
- Be ready to learn new tools and techniques and show the results in deliverables
- Establish design and coding best practices, ensure they are aligned with US and India management, and see that they are followed within the team
- Coordinate with platform, development, and infrastructure teams to identify automation opportunities, provide ROI, prioritize tasks, and implement bug-free code in the production environment
- Guide the team in using Python and scripting (PowerShell/Bash) to build scalable automation pipelines
- Build REST APIs, data parsing tools, and integration scripts using Python and third-party libraries
- Experience using tools such as AppDynamics EUM, Grafana, SPLOC, BigPanda, Prometheus, and Dynatrace
- Work on OpenTelemetry, coordinating with the enterprise OTel team and vertical platform teams
- Design, configure, and maintain robust CI/CD pipelines using GitHub and Harness
- Ensure reliable deployment, rollback strategies, and environment configuration management
- Define metrics and implement observability for the entire CI/CD pipeline
- Create and manage infrastructure-as-code (IaC) solutions (using Terraform, PowerShell DSC)
- Automate routine infrastructure tasks and integrations with cloud platforms
- Work closely with production support, development, and infrastructure teams to understand automation needs
- Translate complex technical needs into actionable development plans
- Provide regular updates, demos, and documentation of solutions and automation tools
- Stay current with the latest trends in AI, MLOps, automation tools, and cloud-native practices
- Identify opportunities to reduce manual toil and improve deployment speed, accuracy, and repeatability

Posting End Date: 26 Aug 2025
*Job posting may come down early due to volume of applicants.

We Value Equal Opportunity
Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic.
Employees support our focus on building strong customer relationships balanced with a strong risk-mitigating and compliance-driven culture which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements.

Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process.

Applicants with Disabilities
To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo.

Drug and Alcohol Policy
Wells Fargo maintains a drug free workplace. Please see our Drug and Alcohol Policy to learn more.

Wells Fargo Recruitment and Hiring Requirements:
a. Third-Party recordings are prohibited unless authorized by Wells Fargo.
b. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.
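The expectation above to "define metrics and implement observability for the entire CI/CD pipeline" can be pictured with a small sketch. This is stdlib-only Python with an illustrative event schema, not Harness's or any vendor's actual API.

```python
from collections import Counter

def pipeline_metrics(events):
    """Summarize CI/CD deployment events into simple observability metrics.
    Each event is a dict like {"stage": "deploy", "status": "success", "secs": 42};
    the field names are illustrative, not tied to a specific tool."""
    by_status = Counter(e["status"] for e in events)
    total = len(events)
    return {
        "total": total,
        "success_rate": by_status["success"] / total if total else 0.0,
        "avg_duration_secs": sum(e["secs"] for e in events) / total if total else 0.0,
        "failures": by_status["failure"],
    }

events = [
    {"stage": "deploy", "status": "success", "secs": 40},
    {"stage": "deploy", "status": "failure", "secs": 55},
    {"stage": "deploy", "status": "success", "secs": 35},
    {"stage": "deploy", "status": "success", "secs": 50},
]
print(pipeline_metrics(events))
# {'total': 4, 'success_rate': 0.75, 'avg_duration_secs': 45.0, 'failures': 1}
```

A real implementation would pull events from the CI/CD tool's webhooks or logs and push the aggregates to a dashboard such as Grafana or Prometheus.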
Posted 3 days ago
0 years
0 Lacs
hyderābād
On-site
What's this role about? Here's how you'll contribute:

You'll do this by:
- Managing and owning all aspects of technical development and delivery
- Understanding requirements and writing technical architecture documents
- Ensuring code reviews and development best practices/processes are followed
- Planning the end-to-end technical scope of the project and customer engagement areas, including sprint planning and deliveries
- Estimating efforts, identifying risks, and providing technical support whenever needed
- Demonstrating the ability to multitask and re-prioritize responsibilities based on dynamic requirements
- Leading and mentoring small-sized teams

Core Skills:
Primary skills: .NET Core + React + Web API + any cloud (GCP/Azure/AWS)
Secondary skills: Microservices, design patterns, SOLID principles, SQL and NoSQL databases

Skill levels:
- Communication skills: Professional
- Coding skills: Expert
- C#, .NET, MVC: Expert
- .NET Core: Expert
- Web API: Expert
- NoSQL database: Professional
- Microservices: Nice to have
- Cloud (AWS/Azure/GCP): Nice to have
- Design patterns & principles: Professional (candidates should know at least 2 to 3 types of design patterns)
- SQL database: Professional
- Unit testing, code coverage, etc.: Expert
- React: Expert

Advantage Zensar
We are a technology consulting and services company with 11,800+ associates in 33 global locations. More than 130 leading enterprises depend on our expertise to be more disruptive, agile and competitive. We focus on conceptualizing, designing, engineering, marketing, and managing digital products and experiences for high-growth companies looking to disrupt through innovation and velocity.

Zensar Technologies is an Equal Employment Opportunity (EEO) and Affirmative Action Employer, encouraging diversity in the workplace. Please be assured that we will consider all qualified applicants fairly, regardless of race, creed, color, ancestry, religion, sex, national origin, citizen status, age, sexual orientation, gender identity, disability, marital status, family medical leave status, or protected veterans' status.

Zensar is a place where you are free to express yourself in an environment that values individuality, nurtures development and is mindful of wellbeing. We put our people and customers at the center of everything that we do. Our core values include:
- Putting people first
- Client-centricity
- Collaboration
- Grow. Own. Achieve. Learn. with Zensar
Posted 3 days ago
5.0 years
1 - 5 Lacs
hyderābād
Remote
Software Engineer II
Hyderabad, Telangana, India
Date posted: Aug 22, 2025
Job number: 1859963
Work site: Up to 50% work from home
Travel: 0-25%
Role type: Individual Contributor
Profession: Software Engineering
Discipline: Software Engineering
Employment type: Full-Time

Overview
Ready to shape the future of how Microsoft operates its ~$250B+ business portfolio? Are you passionate about AI, data, and transformative user experiences? Do you bring energy, curiosity, and a strong sense of ownership to your work? The Finance Data & Experiences (FD&E) organization is on a mission to redefine how Microsoft measures, monitors, and optimizes its global business, and we're looking for top talent to join us. This is a unique opportunity to lead with bold ideas, apply cutting-edge technology, and work across Finance, Sales, Marketing, Business Operations, and Product Engineering to deliver high-impact business solutions. The right candidate will thrive in fast-paced, cross-functional environments, bring fresh thinking to complex problems, and be eager to take ownership of end-to-end processes and outcomes. Join us and be part of a team that's pushing the boundaries of innovation, taking risks, and implementing AI to drive business excellence. At FD&E, we foster a culture of customer centricity, innovation, agility, and transparency, and we're building a team that's ready to help Microsoft chart its next chapter in AI-driven business excellence.

Qualifications
Minimum Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field, OR equivalent practical experience.
- 5+ years of strong programming skills in one or more languages: C#, Java, Python, JavaScript, C++.
- Business fluency in English (read, write, speak).

Preferred Qualifications
- Exposure to Microsoft Azure or other cloud platforms (GCP, AWS).
- Familiarity with AI/ML fundamentals, tools (such as Azure ML and OpenAI), or data analytics workflows.
- Understanding of version control systems (e.g., Git) and DevOps practices.
- Curiosity, a collaboration mindset, and a passion for solving real-world customer problems.

Responsibilities
As a Software Engineer II at Microsoft, you'll be part of a team of world-class engineers leveraging cutting-edge Microsoft Cloud and AI technologies to deliver modern, scalable, and intelligent systems that drive the Microsoft business forward. This is a unique opportunity to kickstart and grow your career with deep hands-on exposure to Microsoft technologies, while developing your software engineering skills in an inclusive, growth-oriented environment.
- Design, develop, deploy, and operate scalable cloud-based data, analytics, automation, and tooling solutions using modern data platforms and cloud services.
- Integrate AI capabilities such as Azure OpenAI, Cognitive Services, and ML models to enhance system intelligence and user productivity.
- Apply engineering best practices as you design and deliver high-quality, scalable solutions.
- Own your solution end-to-end through design, implementation, and operations.
- Implement robust monitoring, logging, and alerting for proactive issue detection.
- Leverage telemetry and usage analytics to understand customer behavior and inform product decisions.
- Collaborate cross-functionally with stakeholders, product managers, and other engineers to deliver integrated, customer-focused experiences.
- Contribute to the evolution of Microsoft's data platforms through technical feedback and innovation.
- Champion a culture of diversity, inclusion, customer obsession, and continuous learning.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
- Industry leading healthcare
- Educational resources
- Discounts on products and services
- Savings and investments
- Maternity and paternity leave
- Generous time away
- Giving programs
- Opportunities to network and connect

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 3 days ago
3.0 - 8.0 years
11 - 15 Lacs
mumbai
Work from Office
About the role
As an Infrastructure Cloud Risk Assessment Manager, you are expected to have a solid understanding of and experience with major cloud-native architectures, expertise in identity and access management, familiarity with various data encryption methods, and knowledge of cloud compliance regulations. You need to ensure availability, reliability, security, performance, and a resilient architecture to address customers'/clients' business challenges and accelerate technology adoption to improve the product services. You need to ensure control over security through the design principles of applications hosted in public clouds (Azure, AWS, GCP, OCI). A technical understanding of zero-trust architecture and micro-segmentation, along with hands-on experience with SIEM (Security Information and Event Management) tools to proactively monitor, analyse, and respond to security incidents, is an important aspect of the role.

Key Responsibilities
Identifying Vulnerabilities
- Understand cloud architecture review and virtualization.
- Conduct cloud security assessments across, but not limited to, the following domains:
  * Network and Perimeter Security
  * Data Protection and Backup Management
  * Identity and Access Management
  * Log Management and Monitoring
Analysis & Reporting
- Identify and analyse the associated risks.
- Provide recommendations for the identified findings and develop the roadmap.
- Contribute to creating and enforcing security policies, procedures, and best practices across the organization.
Implement Security Measures
- Develop and implement robust security measures for cloud environments, ensuring the confidentiality, integrity, and availability of data.
Collaborate
- Work closely with cross-functional teams to integrate security controls seamlessly into cloud-based architectures and applications.
- Collaborate with other IT professionals, including network engineers, developers, and system administrators, to integrate cloud security measures into existing systems and processes.

Qualifications & Skills
- Educational qualification: Engineering graduate in CS, IT, EC, InfoSec, or CyberSec, or MCA equivalent, with certifications such as CISSP, CISM, AWS Certified Security, etc.
- Compliance: Assist in securing the IT landscape/ecosystem built on-premises and in multi-cloud environments.
- Technical skills: Proficient in cloud security assessment across all deployment and service models (IaaS, PaaS, SaaS). Experience with cloud-native services across major cloud service providers (AWS, GCP, Azure, OCI).
- Communication skills: Outstanding communication abilities, with the ability to effectively communicate the required recommendations.
Posted 3 days ago
6.0 - 11.0 years
5 - 9 Lacs
mumbai
Work from Office
About the role
As a SOC Analyst - Detection Engineering in the bank's security operations center (SOC), you will be responsible for strengthening the creation and optimization of analytical rules and alerts configured in the bank's SIEM platform. You will build analytical correlation rules in the bank's SIEM platform covering networks, systems and endpoints, cloud (SaaS, IaaS, and PaaS), and applications (both COTS and internally developed). You will provide expert guidance and support to the security operations team in the use of the SIEM platform for threat hunting and incident investigation, analysing detected incidents to identify lessons learned, improve response processes, and make recommendations for enhancing security posture. You will also be responsible for developing and maintaining documentation for analytical rules, processes, and procedures.

Key Responsibilities
- Business understanding: Accountable for ensuring all anomalous security activities are detected by the bank's SIEM platform and false positives are kept to a minimum.
- Collaborate: Verify the ingested logs and ensure log parsing to normalize the events. Implement a testing methodology to test the configured alerts and obtain sign-off before releasing them into production.
- Reporting: Stay up to date with the latest trends and developments in cybersecurity and SIEM technologies, and recommend improvements to the organization's security posture.

Qualifications & Skills
- Educational qualification: Engineering graduate in CS, IT, EC, InfoSec, or CyberSec, or MCA equivalent, with cloud security experience on any of Microsoft Azure or Google Cloud. Ability to develop and implement security policies, procedures, and best practices.
- Experience: At least 5 years of experience working as a SOC analyst responsible for creating SIEM rules/alerts. Hands-on experience in creating security alerts in any of the commonly used SIEM solutions is a must.
- Certifications: SIEM certification from any of the leading SIEM OEMs (Splunk, Palo Alto, Securonix, LogRhythm, etc.); CEH or CISSP; CCNA Security and/or any of the cloud security certifications (AWS, GCP, Azure, OCI).
- Compliance: Knowledge of networking components, servers (RHEL, Windows, etc.), endpoints, and cloud infrastructure, along with machine learning models used for the detection of security alerts. Knowledge of various log types, event parsing, and ingestion mechanisms across systems, networks, cloud, and applications commonly used in banks.
- Communication skills: Excellent communication and interpersonal skills.
- Synergize with the team: Work with the designated bank personnel to ensure alignment with RBI guidelines on the detection of security alerts applicable to banks. A strong understanding of cybersecurity principles, threat detection, and incident response is required.
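To give a sense of the correlation rules this detection-engineering role describes, here is a minimal sketch of a threshold-based brute-force rule in plain Python. The event schema, threshold, and window are illustrative; in practice the same logic would be expressed in the SIEM's own rule language (Splunk SPL, Sigma, etc.).

```python
from collections import defaultdict
from datetime import datetime, timedelta

def detect_bruteforce(events, threshold=5, window=timedelta(minutes=10)):
    """Flag source IPs with >= threshold failed logins inside a sliding window.
    Mirrors a typical SIEM correlation rule; the dict-based event schema
    is illustrative, not a specific SIEM's normalized format."""
    failures = defaultdict(list)
    alerts = set()
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["action"] != "login_failure":
            continue
        q = failures[e["src_ip"]]
        q.append(e["ts"])
        # drop failures that have fallen out of the sliding window
        while q and e["ts"] - q[0] > window:
            q.pop(0)
        if len(q) >= threshold:
            alerts.add(e["src_ip"])
    return sorted(alerts)

base = datetime(2025, 8, 22, 10, 0)
events = [{"ts": base + timedelta(minutes=i), "src_ip": "10.0.0.5",
           "action": "login_failure"} for i in range(6)]
events.append({"ts": base, "src_ip": "10.0.0.9", "action": "login_failure"})
print(detect_bruteforce(events))  # ['10.0.0.5']
```

The testing methodology the posting mentions would replay labelled event samples like these through each rule before it is signed off for production.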
Posted 3 days ago
6.0 years
0 Lacs
hyderābād
On-site
Why We Work at Dun & Bradstreet
Dun & Bradstreet unlocks the power of data through analytics, creating a better tomorrow. Each day, we are finding new ways to strengthen our award-winning culture and accelerate creativity, innovation and growth. Our 6,000+ global team members are passionate about what we do. We are dedicated to helping clients turn uncertainty into confidence, risk into opportunity and potential into prosperity. Bold and diverse thinkers are always welcome. Come join us! Learn more at dnb.com/careers.

This role is responsible for improving customer satisfaction and supporting revenue generation by analyzing and controlling data used for products, scoring, and analytical models, and for leading the technical support given to trade partners, Data Operations Analysts, and Trade departments globally in solving trade problems and trade-related issues. You will lead data ingestion projects to onboard new markets or move data sources from legacy platforms onto a modern cloud environment.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes.
- Monitor and troubleshoot data workflows to ensure data quality, availability, and performance.
- Automate manual processes and develop innovative data tools.
- Evaluate and implement recent technology solutions, with end-to-end process ownership.
- Communicate with stakeholders.
- Conduct knowledge exchange sessions with technical and non-technical audiences.
- Build new analytical processes, provide insight into data quality issues, and implement data quality improvement processes.
- Research and implement modern data mastering techniques to increase derived insight on disparate data sources.
- Support other data engineers with the design of ETL processes, code reviews, and knowledge sharing.
- Develop and maintain data documentation, including data dictionaries, data flow diagrams, and data lineage.

Key Skills:
- 6+ years of experience in data engineering or a related field.
- Bachelor's degree in Computer Science, Information Technology, or a related discipline.
- Strong proficiency in SQL and hands-on experience with at least one programming language such as PHP, Python, or Java.
- Ability to use network, application, and operating system monitoring and troubleshooting tools.
- Willingness to take ownership of existing applications for further development and improvements.
- Ability to work closely with related groups to ensure business continuity.
- A self-motivated learner with a strong customer and quality focus.
- Logical, with very strong problem-solving skills.
- Strong understanding of data modeling, data warehousing, and database design.
- Experience with hosted environments: AWS, Azure, GCP, or other cloud service providers.
- Analytical skills, including the ability to analyze code bases to improve performance.
- A strong team player with excellent listening and communication skills.
- Fluent English, written and verbal.
- Results-oriented and flexible, with an enthusiastic approach.
- Ability to respond quickly to customer demands and market conditions.

All Dun & Bradstreet job postings can be found at https://www.dnb.com/about-us/careers-and-people/joblistings.html and https://jobs.lever.co/dnb. Official communication from Dun & Bradstreet will come from an email address ending in @dnb.com.

Notice to Applicants: Please be advised that this job posting page is hosted and powered by Lever. Your use of this page is subject to Lever's Privacy Notice and Cookie Policy, which governs the processing of visitor data on this platform.
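The pipeline and data-quality responsibilities listed for this role can be illustrated with a minimal extract-transform-load sketch. The record schema and validation rules are purely illustrative; a real pipeline here would read from and write to the databases and cloud services the posting names.

```python
def extract(rows):
    """Stand-in for a source read (a real pipeline would pull from a DB or API)."""
    return rows

def transform(rows):
    """Normalize and validate; route bad records to a reject list for review,
    the kind of data-quality gate the responsibilities above describe."""
    clean, rejects = [], []
    for r in rows:
        name = (r.get("name") or "").strip()
        if not name or r.get("revenue") is None:
            rejects.append(r)  # quarantined for data-quality follow-up
            continue
        clean.append({"name": name.title(), "revenue": float(r["revenue"])})
    return clean, rejects

def load(rows):
    """Stand-in for a warehouse write; here we just return the batch."""
    return rows

raw = [
    {"name": " acme corp ", "revenue": "1200.50"},
    {"name": "", "revenue": "10"},        # rejected: empty name
    {"name": "globex", "revenue": None},  # rejected: missing revenue
]
clean, rejects = transform(extract(raw))
print(load(clean))   # [{'name': 'Acme Corp', 'revenue': 1200.5}]
print(len(rejects))  # 2
```

In production the reject count per batch would feed the monitoring described above, so that a spike in bad records raises an alert rather than silently degrading downstream models.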
Posted 3 days ago
1.0 years
0 Lacs
gurugram, haryana, india
Remote
Colt provides network, voice and data centre services to thousands of businesses around the world, allowing them to focus on delivering their business goals instead of the underlying infrastructure.

Why we need this role
This is a 1-year interim position covering maternity leave. Leading a small international team, you will help us to continue our journey to build and grow our analytics capability. In this pivotal role, you will be building on existing foundations to deliver data-driven insights that inform strategic HR decisions across the globe. This is a unique opportunity to make a meaningful impact in a growing function, contributing to the development of tools, processes, and reporting that support our global workforce. We are looking for someone who brings fresh thinking and a proactive mindset, someone who's excited to create new solutions, drive innovation, and introduce compelling, insight-driven storytelling. Join us and you will be part of a fast-growing community of like-minded experts who will grow and learn alongside you in your career.

What You Will Do
- Manage the People Analytics activities across all areas of HR and build effective and collaborative partnerships with stakeholders.
- Work with HR business partners and leaders of the different HR COIs to understand their objectives and establish their reporting needs and key performance indicators.
- Support the team in introducing visualisation and the use of Power BI, adapting current dashboards and creating new solutions that make insights digestible.
- Introduce analytical methodologies that use storytelling to describe trends, patterns and insights.
- Influence and partner with a wide range of cross-functional stakeholders from within HR, the Colt CEO Office, IT, Finance, the Data Office, the Projects & Process Transformation team, and the wider business, to design and implement robust, globally scalable HR solutions.
- Work with stakeholders to build a strategy for data and analytics.
- Lead ad hoc projects as required, working in partnership with global stakeholders, including on-time closure of audit actions.
- Champion insight-driven approaches to problem solving and decision-making, and help to enhance the data and insight culture across the business.
- Outline, establish and ensure the delivery of high-quality and timely HR reporting products to the respective HR teams.
- Own the development of the global HR reporting and analytics roadmap.
- Ensure reporting processes and items are fully documented.
- Ensure appropriate access and privacy controls are in place for all reporting products within HR.
- Develop the Global People Analytics team members through exposure to transformation initiatives and direct coaching, to maintain a high-performing team.

What We're Looking For

Skills & Experience

People Analytics Leadership Experience
- Experience leading small to mid-sized analytics teams or cross-functional project teams.
- Strong program management skills: managing global initiatives, timelines, and deliverables.
- Proven track record of developing team capability and using data to solve HR problems: workforce planning, retention modelling, employee sentiment analysis, etc.

HR Domain Expertise
- In-depth understanding of People functions: talent acquisition, performance management, engagement, attrition, DEI, learning & development.
- Familiarity with global HR practices, legal considerations, and cultural nuances.

Data & Analytical Skills
- Proficiency in tools like SQL, Python, or R for data analysis.
- Expertise in HRIS systems (e.g. SAP SuccessFactors, Workday, Oracle Fusion), survey platforms (e.g. Qualtrics, Glint), and data visualization tools (e.g. Power BI, Tableau).
- Ability to build and interpret statistical models, predictive analytics, and advanced dashboards.

Strategic & Consulting Skills
- Experience working closely with senior HR and business leaders to shape people strategies.
- Ability to influence stakeholders, present complex data clearly, and connect analytics to business outcomes.
- Strong storytelling skills using data.
- High emotional intelligence (EQ) and resiliency.

Qualifications
- Preferred degrees: Statistics, Data Science, Organizational Psychology, Business Analytics, or related fields.
- Expertise in HRIS systems (e.g. SAP SuccessFactors, Workday, Oracle Fusion), survey platforms (e.g. Qualtrics, Glint), and data visualization tools (e.g. Power BI, Tableau).
- Proficiency in tools like SQL, Python, or R for data analysis.
- Experience supporting the setup of big data / data lake / data warehousing (e.g. GCP, AWS, Azure) for HR is desirable.

What We Offer You
Looking to make a mark? At Colt, you'll make a difference. Because around here, we empower people. We don't tell you what to do. Instead, we employ people we trust, who come together across the globe to create intelligent solutions. Our global teams are full of ambitious, driven people, all working together towards one shared purpose: to put the power of the digital universe in the hands of our customers wherever, whenever and however they want. We give our people the opportunity to inspire and lead teams, and work on projects that connect people, cities, businesses, and ideas. We want you to help us change the world, for the better.

Diversity and inclusion
Inclusion and valuing diversity of thought and experience are at the heart of our culture here at Colt. From day one, you'll be encouraged to be yourself because we believe that's what helps our people to thrive. We welcome people with diverse backgrounds and experiences, regardless of their gender identity or expression, sexual orientation, race, religion, disability, neurodiversity, age, marital status, pregnancy status, or place of birth.
Most recently we have:
- Signed the UN Women's Empowerment Principles, which guide our Gender Action Plan
- Trained 60 (and growing) Colties to be Mental Health First Aiders

Please speak with a member of our recruitment team if you require adjustments to our recruitment process to support you. For more information about our Inclusion and Diversity agenda, visit our DEI pages.

Benefits
Our benefits support you through all parts of life, for both physical and mental health.
- Flexible working hours and the option to work from home.
- Extensive induction program with experienced mentors and buddies.
- Opportunities for further development and educational opportunities.
- Global Family Leave Policy.
- Employee Assistance Program.
- Internal inclusion & diversity employee networks.

A global network
When you join Colt you become part of our global network. We are proud of our colleagues and the stories and experience they bring – take a look at 'Our People' site including our Empowered Women in Tech.
Posted 3 days ago
4.0 - 7.0 years
7 - 9 Lacs
india
On-site
Minimum Qualification: B.E./B.Tech in Computer Science or equivalent
Experience: 4–7 years
Salary Range: As per company norms

Work Profile:
- Backend development using Python (preferably Django or FastAPI)
- Designing and maintaining scalable RESTful APIs
- Working with relational and NoSQL databases (PostgreSQL/MongoDB)
- Experience with microservices and deployment on cloud platforms (AWS/GCP/Azure) is preferred

Expectations:
- Strong understanding of clean code, performance, and testability
- Ability to own end-to-end backend modules
- Good collaboration with frontend, DevOps, and product teams

Job Type: Full-time
Pay: ₹60,000.00 - ₹80,000.00 per month
Benefits: Provident Fund
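The "own end-to-end backend modules" expectation can be pictured with a small sketch of the layer a REST endpoint would call. This is a framework-free, in-memory stand-in (all names are illustrative); in the stack this posting describes, the storage would be PostgreSQL or MongoDB behind Django or FastAPI handlers.

```python
class ItemRepository:
    """In-memory stand-in for the persistence layer behind a REST resource.
    A real module would back this with PostgreSQL or MongoDB."""

    def __init__(self):
        self._items = {}
        self._next_id = 1

    def create(self, data):
        """Assign an id and store the record, as a POST handler would."""
        item = {"id": self._next_id, **data}
        self._items[self._next_id] = item
        self._next_id += 1
        return item

    def get(self, item_id):
        """Fetch one record by id, or None (a GET handler would map None to 404)."""
        return self._items.get(item_id)

    def list(self, offset=0, limit=10):
        """Paginated listing, as a collection GET with query params would expose."""
        rows = sorted(self._items.values(), key=lambda r: r["id"])
        return rows[offset:offset + limit]

repo = ItemRepository()
repo.create({"name": "widget", "price": 9.99})
repo.create({"name": "gadget", "price": 19.99})
print(repo.get(1)["name"])       # widget
print(len(repo.list(limit=10)))  # 2
```

Keeping the repository behind a small interface like this is what makes the module testable end-to-end: the HTTP layer can be exercised against the in-memory version and the database version swapped in for production.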
Posted 3 days ago