5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.

Job Description
Minimum Experience: 5-8 years
Location: PAN India

Engagement & Project Overview
An AI model trainer brings specialised knowledge in developing and fine-tuning machine learning models. They can ensure that your models are accurate, efficient, and tailored to your specific needs. Hiring an AI model trainer and tester can significantly enhance our data management and analytics capabilities.

Responsibilities
- Expertise in Model Development: Develop and fine-tune machine learning models. Ensure models are accurate, efficient, and tailored to our specific needs.
- Quality Assurance: Rigorously evaluate models to identify and rectify errors. Maintain the integrity of our data-driven decisions through high performance and reliability.
- Efficiency and Scalability: Streamline processes to reduce time-to-market. Scale AI initiatives and ML engineering skills effectively with dedicated model training and testing.
- Production ML Monitoring & MLOps: Implement and maintain model monitoring pipelines to detect data drift, concept drift, and model performance degradation (see the sketch below). Set up alerting and logging systems using tools such as Evidently AI, WhyLabs, Prometheus + Grafana, or cloud-native solutions (AWS SageMaker Model Monitor, GCP Vertex AI, Azure Monitor). Collaborate with teams to integrate monitoring into CI/CD pipelines, using platforms like Kubeflow, MLflow, Airflow, and Neptune.ai. Define and manage automated retraining triggers and model versioning strategies. Ensure observability and traceability across the ML lifecycle in production environments.

Qualifications
- 5+ years of experience in the respective field.
- Proven experience in developing and fine-tuning machine learning models.
- Strong background in quality assurance and model testing.
- Ability to streamline processes and scale AI initiatives.
- Innovative mindset with a keen understanding of industry trends.

License/Certification/Registration
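To make the drift-monitoring responsibility concrete, here is a minimal sketch of the kind of statistical check such a pipeline might run. It uses a two-sample Kolmogorov-Smirnov test rather than any specific vendor tool; the feature data, threshold, and alerting hook are illustrative assumptions, not part of the role description.

```python
# Minimal sketch of a data-drift check a monitoring pipeline might run.
# The feature data, threshold, and alerting hook are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, current: np.ndarray, alpha: float = 0.05) -> bool:
    """Two-sample Kolmogorov-Smirnov test: returns True if the feature appears to have drifted."""
    result = ks_2samp(reference, current)
    return result.pvalue < alpha

# Example: compare the training-time feature distribution against live traffic
reference_scores = np.random.normal(loc=0.0, scale=1.0, size=1_000)  # stand-in for training data
current_scores = np.random.normal(loc=0.4, scale=1.0, size=1_000)    # stand-in for production data

if detect_drift(reference_scores, current_scores):
    print("Drift detected - raise an alert or trigger the retraining pipeline")
```

In a production setup the same check would typically run per feature on a schedule, with results logged to the alerting stack named in the posting.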
Posted 2 weeks ago
4.0 years
0 Lacs
Greater Hyderabad Area
On-site
Area(s) of responsibility: Full Stack (Java + React.js) Developer

React.js requirements:
1. Strong proficiency in JavaScript, including DOM manipulation and the JavaScript object model
2. Thorough understanding of React.js and its core principles
3. Experience with HTML5/CSS3
4. Experience with popular React.js workflows (such as Flux or Redux)
5. Familiarity with newer specifications of EcmaScript
6. Experience with data structure libraries (e.g., Immutable.js)
7. Knowledge of isomorphic React is a plus
8. Familiarity with RESTful APIs
9. Knowledge of modern authorization mechanisms, such as JSON Web Token
10. Familiarity with modern front-end build pipelines and tools
11. Experience with common front-end development tools such as Babel, Webpack, NPM, etc.
12. Ability to understand business requirements and translate them into technical requirements
13. A knack for benchmarking and optimization
14. Familiarity with code versioning tools such as Git, SVN, and Mercurial

Java developer requirements:
15. Designs, develops, and implements web-based Java applications to support business requirements.
16. Follows approved life cycle methodologies, creates design documents, and performs program coding and testing.
17. Resolves technical issues through debugging, research, and investigation.
18. Familiar with standard concepts, practices, and procedures within a particular field.
19. Relies on extensive experience and judgment to plan and accomplish goals. Performs a variety of tasks.
20. Ensures designs are in compliance with specifications.
21. Strong knowledge of Java/J2EE.
22. Requires a bachelor's degree in area of specialty and 4-6 years of experience in the field or in a related area.

Skills with M/O flag are part of Specialization:
- Programming/Software Development - PL3 (Functional)
- Estimation & Scheduling - PL1 (Functional)
- Team Management - PL1 (Functional)
- Software Design - PL2 (Functional)
- Software Configuration - PL3 (Functional)
- Quality Assurance - PL1 (Functional)
- Help the Tribe - PL2 (Behavioural)
- Stakeholder Relationship Management - PL1 (Functional)
- Requirements Definition and Management - PL1 (Functional)
- Think Holistically - PL2 (Behavioural)
- Knowledge Management - PL2 (Functional)
- Win the Customer - PL2 (Behavioural)
- One Birlasoft - PL2 (Behavioural)
- Results Matter - PL2 (Behavioural)
- Get Future Ready - PL2 (Behavioural)
- Test Execution - PL2 (Functional)
- MySQL - PL2 (Mandatory)
- Spring Boot - PL3 (Mandatory)
- Java - PL3 (Mandatory)
- Kubernetes - PL2 (Optional)
- REST APIs - PL2 (Mandatory)
- TypeScript - PL2 (Optional)
- React JS - PL3 (Mandatory)
- RxJS - PL2 (Mandatory)
- JavaScript - PL3 (Mandatory)
Posted 2 weeks ago
55.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you’d like, where you’ll be supported and inspired by a collaborative community of colleagues around the world, and where you’ll be able to reimagine what’s possible. Join us and help the world’s leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Your role
We are looking for a technically skilled and detail-oriented Headless Content & Asset Delivery Engineer to build and maintain scalable content pipelines using Adobe Experience Manager (AEM) SaaS and Adobe Assets. This role will be instrumental in enabling modular, API-driven content delivery and real-time personalization across digital channels (see the illustrative sketch below). In this role, you will play a key role in:
- Headless Content Pipeline Development: Design and implement headless content delivery pipelines using AEM SaaS and Adobe Assets. Ensure content is structured for reuse, scalability, and performance across multiple endpoints.
- Component & Asset Architecture: Develop and maintain modular CMS components and Digital Asset Management (DAM) structures. Establish best practices for metadata tagging, asset versioning, and content governance.
- Personalization Integration: Integrate content delivery with personalization APIs to support contextual rendering based on user behavior and profile data. Collaborate with personalization and decisioning teams to align content logic with targeting strategies.
- Workflow Automation: Automate content publication workflows, including metadata enrichment, asset delivery, and approval processes. Leverage AEM workflows and scripting to streamline operations and reduce manual effort.

Your profile
- AEM as a Cloud Service (SaaS)
- Digital Asset Management (DAM)
- Personalization API Integration
- Workflow Automation
- CI/CD & DevOps for Content

What You'll Love About Working Here
You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage or new parent support via flexible work. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges.

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
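As a hedged illustration of what "headless, API-driven content delivery" means in practice: a downstream channel fetches structured content (not rendered pages) from a JSON delivery endpoint and maps it to its own presentation. The endpoint URL and response shape below are assumptions for illustration only, not the actual AEM as a Cloud Service API.

```python
# Minimal sketch of consuming headless content from a JSON delivery endpoint.
# The endpoint URL and response fields are hypothetical, for illustration only.
import requests

DELIVERY_ENDPOINT = "https://example-publish.adobeaemcloud.com/content/fragments/hero-banner.json"  # hypothetical

def fetch_fragment(url: str, channel: str) -> dict:
    """Fetch a content fragment and keep only the fields this channel needs."""
    response = requests.get(url, params={"variation": channel}, timeout=10)
    response.raise_for_status()
    payload = response.json()
    return {
        "title": payload.get("title"),
        "body": payload.get("body"),
        "assets": payload.get("assets", []),
    }

if __name__ == "__main__":
    fragment = fetch_fragment(DELIVERY_ENDPOINT, channel="web")
    print(fragment["title"])
```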
Posted 2 weeks ago
6.0 years
0 Lacs
Delhi Cantonment, Delhi, India
On-site
What Makes Us a Great Place To Work
We are proud to be consistently recognized as one of the world’s best places to work. We are currently the #1 ranked consulting firm on Glassdoor’s Best Places to Work list and have maintained a spot in the top four on Glassdoor’s list since its founding in 2009. Extraordinary teams are at the heart of our business strategy, but these don’t happen by chance. They require intentional focus on bringing together a broad set of backgrounds, cultures, experiences, perspectives, and skills in a supportive and inclusive work environment. We hire people with exceptional talent and create an environment in which every individual can thrive professionally and personally.

Who You’ll Work With
You’ll join our Application Engineering experts within the AI, Insights & Solutions team. This team is part of Bain’s digital capabilities practice, which includes experts in analytics, engineering, product management, and design. In this multidisciplinary environment, you'll leverage deep technical expertise with business acumen to help clients tackle their most transformative challenges. You’ll work on integrated teams alongside our general consultants and clients to develop data-driven strategies and innovative solutions. Together, we create human-centric solutions that harness the power of data and artificial intelligence to drive competitive advantage for our clients. Our collaborative and supportive work environment fosters creativity and continuous learning, enabling us to consistently deliver exceptional results.

What You’ll Do
- Design, develop, and maintain cloud-based AI applications, leveraging a full-stack technology stack to deliver high-quality, scalable, and secure solutions.
- Collaborate with cross-functional teams, including product managers, data scientists, and other engineers, to define and implement analytics features and functionality that meet business requirements and user needs.
- Utilize Kubernetes and containerization technologies to deploy, manage, and scale analytics applications in cloud environments, ensuring optimal performance and availability.
- Develop and maintain APIs and microservices to expose analytics functionality to internal and external consumers, adhering to best practices for API design and documentation.
- Implement robust security measures to protect sensitive data and ensure compliance with data privacy regulations and organizational policies.
- Continuously monitor and troubleshoot application performance, identifying and resolving issues that impact system reliability, latency, and user experience.
- Participate in code reviews and contribute to the establishment and enforcement of coding standards and best practices to ensure high-quality, maintainable code.
- Stay current with emerging trends and technologies in cloud computing, data analytics, and software engineering, and proactively identify opportunities to enhance the capabilities of the analytics platform.
- Collaborate with DevOps and infrastructure teams to automate deployment and release processes, implement CI/CD pipelines, and optimize the development workflow for the analytics engineering team.
- Collaborate closely with and influence business consulting staff and leaders as part of multi-disciplinary teams to assess opportunities and develop analytics solutions for Bain clients across a variety of sectors.
- Influence, educate and directly support the analytics application engineering capabilities of our clients.
- Travel is required (30%).

About You
Required:
- Master’s degree in Computer Science, Engineering, or a related technical field.
- 6+ years at Senior or Staff level, or equivalent.
- Experience with client-side technologies such as React, Angular, Vue.js, HTML and CSS.
- Experience with server-side technologies such as Django, Flask, and FastAPI (a minimal FastAPI sketch follows this listing).
- Experience with cloud platforms and services (AWS, Azure, GCP) via Terraform automation (good to have).
- 3+ years of Python expertise.
- Use Git as your main tool for versioning and collaborating.
- Experience with DevOps, CI/CD, GitHub Actions.
- Demonstrated interest in LLMs, prompt engineering, LangChain.
- Experience with workflow orchestration, whether dbt, Beam, Airflow, Luigi, Metaflow, Kubeflow, or any other.
- Experience implementing large-scale structured or unstructured databases, orchestration, and container technologies such as Docker or Kubernetes.
- Strong interpersonal and communication skills, including the ability to explain and discuss complex engineering technicalities with colleagues and clients from other disciplines at their level of understanding.
- Curiosity, proactivity and critical thinking.
- Strong computer science fundamentals in data structures, algorithms, automated testing, object-oriented programming, performance complexity, and the implications of computer architecture on software performance.
- Strong knowledge of designing API interfaces.
- Knowledge of data architecture, database schema design and database scalability.
- Agile development methodologies.
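Since the posting names FastAPI among the server-side frameworks, here is a minimal sketch of the kind of analytics endpoint such a stack exposes. The route, payload model, and scoring logic are invented for illustration, not part of Bain's actual platform.

```python
# Minimal FastAPI sketch of an analytics-style endpoint.
# The route, payload model, and scoring logic are invented for illustration.
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="analytics-api")

class ScoreRequest(BaseModel):
    customer_id: str
    features: List[float]

class ScoreResponse(BaseModel):
    customer_id: str
    score: float

@app.post("/v1/score", response_model=ScoreResponse)
def score(request: ScoreRequest) -> ScoreResponse:
    # Placeholder logic; a real service would call a trained model here.
    score_value = sum(request.features) / max(len(request.features), 1)
    return ScoreResponse(customer_id=request.customer_id, score=score_value)
```

Run locally with `uvicorn app:app --reload` and the interactive docs at /docs illustrate the "API design and documentation" practices the role mentions.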
Posted 2 weeks ago
2.0 - 4.0 years
0 Lacs
Surat, Gujarat, India
On-site
Job Description
Role: PHP CodeIgniter Developer

Responsibilities:
- Web Development: Develop, test, and deploy web applications using the PHP CodeIgniter framework.
- Database Integration: Integrate databases into web applications, ensuring efficient and optimized database queries.
- Front-end Development: Collaborate with front-end developers to integrate user-facing elements using server-side logic.
- API Development: Create and consume APIs to enable data exchange between different systems.
- Code Optimization: Optimize code for better performance and ensure adherence to coding standards.
- Security: Implement security measures to protect web applications.
- Performance Optimization: Optimize API performance by addressing speed and efficiency issues.
- Debugging: Identify and resolve technical issues, bugs, and errors in a timely manner.
- Collaboration: Work closely with designers, other developers, and the PM to understand the project.
- Documentation: Create and maintain technical documentation for code, APIs, and other relevant aspects of the project.

Key Performance Areas (KPAs):
- Web Development: Timely and successful delivery of fully functional and well-tested web applications using the PHP CodeIgniter framework.
- Database Integration: Smooth integration of databases into web applications, ensuring efficient and optimized queries.
- Front-end Development: Successful integration of front-end elements with server-side logic, contributing to a positive user experience.
- API Development: Effective development and utilization of APIs for logical data exchange between different systems.
- Code Optimization: Consistently optimized and well-documented code that adheres to coding standards, contributing to overall system performance.
- Security: Implement strong security measures by protecting against common vulnerabilities and regularly updating PHP, frameworks, and libraries.
- Performance Optimization: Continuous monitoring and improvement of website performance, including speed and efficiency.
- Troubleshooting: Quickly resolve technical issues, bugs, and errors in PHP CodeIgniter applications to minimize downtime and maintain smooth functionality.
- Collaboration: Effective collaboration with designers, other developers, and the PM to understand project requirements and contribute to a collaborative development environment.
- Technical Documentation: Detailed and up-to-date technical documentation for PHP CodeIgniter code, APIs, and other relevant aspects of the project to help with understanding and future development efforts.

Key Performance Indicators (KPIs):
- Time to develop and deploy web applications: Average time taken to complete a development project from inception to deployment using the PHP CodeIgniter framework.
- Accuracy and speed of PHP CodeIgniter configuration: Time taken to configure and customize PHP CodeIgniter settings, including database integration, for efficient performance.
- Implementations: Number of successful PHP CodeIgniter application implementations and modifications within a given timeframe.
- Database Integration: Performance metrics related to database queries in PHP CodeIgniter applications, such as query execution time and optimization improvements.
- Front-end Development: User satisfaction, page load times, and successful integration of designs into PHP CodeIgniter applications.
- API Development: Successful creation and consumption of APIs in PHP CodeIgniter applications.
- Code Optimization: Code review ratings, adherence to coding standards, and improvements in code performance for PHP CodeIgniter applications.
- Security Implementation: Number of identified vulnerabilities addressed, security audit results, and timely updates to the PHP CodeIgniter core, frameworks, and libraries.
- Website Performance Improvement: Improvement in website speed, reduced page load times, and enhanced overall performance of PHP CodeIgniter applications.
- Troubleshooting: Average time taken to identify and resolve technical issues, bugs, and errors in PHP CodeIgniter applications.
- Collaboration Effectiveness: Feedback from team members, successful completion of collaborative projects, and keeping to project timelines in PHP CodeIgniter development.
- Quality of Documentation: Completeness and clarity of technical documentation, including code documentation, API documentation, and project-related materials for PHP CodeIgniter applications.

Skills Requirements:
- Minimum 2 to 4 years of experience in PHP development using the CodeIgniter framework.
- Knowledge of front-end technologies including CSS3, JavaScript, and HTML5.
- Understanding of object-oriented PHP programming.
- Previous experience creating scalable and complex applications.
- Knowledge of user authentication and authorization, and RESTful API development.
- Experience with at least one payment gateway, including one-time purchases and subscription purchases.
- Good knowledge of Google Sheets APIs and Google Analytics.
- Experience with code versioning tools including GitLab, GitHub, and Bitbucket.
- Familiarity with relational databases like MySQL and NoSQL databases like MongoDB.
- Good problem-solving skills.
- Excellent project planning skills.
- Superb collaboration skills.

Company Perks
- 5-day work week
- Friendly environment
- Flexible timings
- Global clients & amazing projects
- Leave encashment
- Health insurance
- Employee engagement activities
- Picnic
Posted 2 weeks ago
11.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
About Nasuni
Nasuni is a leading high-growth hybrid cloud storage company that powers business growth with effortless scalability, built-in security, and fast edge performance using a unique cloud-native architecture. Backed by Vista Equity Partners, we’ve built a powerful hybrid cloud file data platform trusted by 900+ global customers, including Dow, Boston Scientific, Western Digital, and Perkins + Will. Nasuni is headquartered in Boston, MA, USA with offices in Cork, Ireland and London, UK, and we are now opening an India Innovation Center in Hyderabad, India to leverage the exuberant level of IT talent available in India. With annual revenue at $160M and a 25% CAGR, Nasuni is reinventing enterprise file storage with patented innovation. Our hybrid work culture combines the flexibility of remote work with the benefits of in-person collaboration. Employees spend three days a week working from our Hyderabad office during core hours, and the remaining two days working remotely.

Job Description:
We are looking for a highly skilled Principal Software Engineer for full stack development. The ideal candidate will possess strong expertise in Python, Django or FastAPI, database management (Postgres or MySQL), API development, JavaScript (React), TypeScript, HTML development and application testing. As a Principal Software Engineer, you will play a pivotal role in developing and maintaining our enterprise software, enabling remote file access and powering collaboration for Nasuni customers globally.

Responsibilities:
- Act as a technical authority within the engineering team, leading the architecture and design of a file access, sharing, and collaboration platform used globally by thousands of users.
- Influence and lead the technical direction of the organization.
- Successfully lead multi-developer feature teams and ensure the efficient development of high-quality solutions.
- Maintain effective collaboration with QA, Support, and Documentation groups.
- Represent the team in technical discussions and serve as a key technical contributor to major new features.
- Lead discussions with UI/UX/Product teams to guarantee that the user interface is intuitive, responsive, and visually appealing.
- Work with AWS technologies such as EC2, Aurora, ElastiCache, API Gateway, and Lambda.
- Collaborate with engineering management, product management and key stakeholders to understand requirements and translate them into technical specifications.
- Be recognized as an expert in one or more technical areas.
- Respond to critical customer-raised incidents in a timely manner, perform root cause analysis and implement preventative measures to avoid future incidents.
- Provide technical leadership to more junior engineers. Mentor, and provide guidance on best practices and career development.
- Drive all team members to implement the industry's best practices for securing internet-facing applications.
- Lead efforts to continuously improve development processes, tools, and methodologies.

Technical skills required:
- In-depth knowledge of full-stack development is essential.
- Ability to architect the solution(s) for critical/complex problems; drive initiatives/innovation and build POCs to demonstrate solutions to the leadership team.
- Proficiency in programming with Python 3, Python FastAPI, JavaScript (React), and TypeScript.
- Strong knowledge of Linux, Git (GitHub), Docker (containers), Jenkins, and Postgres or MySQL databases is essential. Familiarity with CI/CD.
- In-depth knowledge of building HTTP-based APIs (RESTful or other types), including security, versioning, contracts and documentation.
- Strong expertise in cloud services, with a particular focus on Amazon Web Services (AWS).
- Knowledge of storage protocols like SMB and NFS is helpful.
- Prior experience working with enterprise file sync and share solutions will be an added advantage.
- Excellent problem solving and troubleshooting skills.
- Experience working in an agile development environment, and a solid understanding of agile methodologies.
- Strong communication and leadership skills, with the ability to mentor and inspire colleagues.
- Demonstrable experience testing and asserting the quality of the work you produce through writing unit, integration and smoke tests (see the illustrative sketch below).

Experience:
- BE/B.Tech or ME/M.Tech in Computer Science or Electronics and Communications, or MCA.
- 11 to 15 years’ previous experience in the industry.
- At least 7+ years of experience in full-stack development.

Nasuni is proud to be an equal opportunity employer. We are committed to fostering a diverse, inclusive, and respectful workplace where every team member can thrive. All qualified applicants will receive consideration for employment without regard to race, religion, caste, color, sex, gender identity or expression, sexual orientation, disability, age, national origin, or any other status protected by applicable laws in India or the country of employment. We celebrate individuality and are committed to building a workplace that reflects the diversity of the communities we serve.
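As a hedged sketch of the testing discipline the posting asks for, the snippet below exercises a FastAPI route with pytest and FastAPI's TestClient. The /api/v1/files endpoint and its payload are invented for illustration; they are not Nasuni's actual API.

```python
# Minimal sketch of unit-testing an HTTP API with pytest and FastAPI's TestClient.
# The endpoint and payload are invented for illustration.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/api/v1/files")
def list_files():
    # Placeholder handler; a real service would query the file platform here.
    return {"files": ["report.docx", "design.pdf"]}

client = TestClient(app)

def test_list_files_returns_ok():
    response = client.get("/api/v1/files")
    assert response.status_code == 200
    assert "report.docx" in response.json()["files"]
```

Integration and smoke tests would follow the same pattern against a deployed environment rather than an in-process app.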
Posted 2 weeks ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior CV Engineer
Location: Gurugram
Experience: 6–10 Years
CTC: Up to ₹60 LPA

Overview:
We are hiring for our esteemed client, a Series-A funded deep-tech company building a first-of-its-kind app-based operating system for Computer Vision. The team specializes in real-time video/image inference, distributed processing, and high-throughput data handling using advanced technologies and frameworks.

Key Responsibilities:
- Lead design and implementation of complex CV pipelines (object detection, instance segmentation, industrial anomaly detection).
- Own major modules from concept to deployment, ensuring low latency and high reliability.
- Transition algorithms from Python/PyTorch to optimized C++ edge GPU implementations using TensorRT, ONNX, and GStreamer (see the export sketch below).
- Collaborate with cross-functional teams to refine technical strategies and roadmaps.
- Drive long-term data and model strategies (synthetic data generation, validation frameworks).
- Mentor engineers and maintain high engineering standards.

Required Skills & Qualifications:
- 6–10 years of experience in architecting and deploying CV systems.
- Expertise in multi-object tracking, object detection, and semi-/unsupervised learning.
- Proficiency in Python, PyTorch/TensorFlow, modern C++, and CUDA.
- Experience with real-time, low-latency model deployment on edge devices.
- Strong systems-level design thinking across ML lifecycles.
- Familiarity with MLOps (CI/CD for models, versioning, experiment tracking).
- Bachelor’s/Master’s degree in CS, EE, or related fields with strong ML and algorithmic foundations.
- (Preferred) Experience with NVIDIA DeepStream, GStreamer, LLMs/VLMs, and open-source contributions.
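To make the Python-to-edge transition concrete: moving a PyTorch model toward a TensorRT deployment usually starts with an ONNX export. The sketch below uses a ResNet-18 backbone and a 640x640 input purely as stand-ins; the real detector, shapes, and opset are assumptions.

```python
# Minimal sketch of exporting a PyTorch model to ONNX as the first step toward
# a TensorRT edge deployment. The model and input shape are stand-in assumptions.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)  # stand-in for a detection backbone
model.eval()

dummy_input = torch.randn(1, 3, 640, 640)  # batch, channels, height, width

torch.onnx.export(
    model,
    dummy_input,
    "detector.onnx",
    input_names=["images"],
    output_names=["predictions"],
    dynamic_axes={"images": {0: "batch"}},  # allow variable batch size at inference time
    opset_version=17,
)
# detector.onnx can then be compiled with TensorRT (e.g. trtexec) for the edge GPU.
```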
Posted 2 weeks ago
8.0 years
0 Lacs
Greater Hyderabad Area
Remote
Job Title: AI/ML Engineer / Data Scientist - with Databricks focus
Experience: 8+ years
Work type: Remote (India)

Key Responsibilities:
• Develop, deploy, and maintain scalable MLOps pipelines for both traditional ML and Generative AI use cases leveraging Databricks (Unity Catalog, Delta Tables, Inference Tables, Mosaic AI).
• Operationalize large language models (LLMs) and other GenAI models, ensuring efficient prompt engineering, fine-tuning, and serving.
• Implement model tracking, versioning, and experiment management using MLflow (see the illustrative sketch below).
• Build robust CI/CD pipelines for ML and GenAI workloads to automate testing, validation, and deployment to production.
• Use Vertex AI to manage training, deployment, and monitoring of ML and GenAI models in the cloud.
• Integrate high-quality, governed data pipelines that enable ML and Generative AI solutions with strong lineage and reproducibility.
• Design and enforce AI Governance frameworks covering model explainability, bias monitoring, data access, compliance, and audit trails.
• Collaborate with data scientists and GenAI teams to productionize prototypes and research into reliable, scalable products.
• Monitor model performance, usage, and drift, including specific considerations for GenAI systems such as hallucination checks, prompt/response monitoring, and user feedback loops.
• Stay current with best practices and emerging trends in MLOps and Generative AI.

Key Qualifications:

Must Have Skills:
• 3+ years of experience in MLOps, ML Engineering, or a related field.
• Hands-on experience with operationalizing ML and Generative AI models in production.
• Proficiency with Databricks (Unity Catalog, Delta Tables, Mosaic AI, Inference Tables).
• Experience with MLflow for model tracking, registry, and reproducibility.
• Strong understanding of Vertex AI pipelines and deployment services.
• Expertise in CI/CD pipelines for ML and GenAI workloads (e.g., GitHub Actions, Azure DevOps, Jenkins).
• Proven experience in integrating and managing data pipelines for AI, ensuring data quality, versioning, and lineage.
• Solid understanding of AI Governance, model explainability, and responsible AI practices.
• Proficiency in Python, SQL, and distributed computing frameworks.
• Excellent communication and collaboration skills.

Nice to Have:
• Experience deploying and monitoring Large Language Models (LLMs) and prompt-driven AI workflows.
• Familiarity with vector databases, embeddings, and retrieval-augmented generation (RAG) architectures.
• Infrastructure-as-Code experience (Terraform, CloudFormation).
• Experience working in regulated industries (e.g., finance, retail) with compliance-heavy AI use cases.
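For context on the MLflow tracking and registry responsibility, here is a minimal sketch of the flow: log parameters and metrics for a run, then register the resulting model so it is versioned. The experiment name, model, and "churn-model" registry name are illustrative assumptions.

```python
# Minimal sketch of MLflow experiment tracking and model registration.
# Experiment name, model choice, and registry name are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("churn-model")  # hypothetical experiment name

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", accuracy)

    # Log the model artifact and register it so the registry tracks a new version.
    mlflow.sklearn.log_model(model, "model", registered_model_name="churn-model")
```

On Databricks, the same calls record runs against the workspace tracking server, with Unity Catalog able to govern the registered model.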
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
Remote
Entity: Technology
Job Family Group: IT&S Group

Job Description:
A results-oriented Senior Architect with a proven track record of delivering end-to-end cloud solutions across infrastructure, data, DevOps, and AI domains. Skilled in architecting, implementing, and governing secure, scalable, and high-performing Azure architectures that align with both technical requirements and business objectives. Brings deep expertise in Azure IaaS and PaaS services, DevOps automation using Azure DevOps, and AI integration through Azure OpenAI and Copilot Studio, enabling intelligent, modern, and future-ready enterprise solutions. Expertise spans Azure infrastructure management, CI/CD automation, Infrastructure as Code (ARM), Azure Data Factory (ADF) pipelines, and enterprise AI adoption. Demonstrated ability to build and support scalable, secure, and cost-optimized Azure environments aligned with governance and compliance standards. Strong background in SQL Server administration, covering deployment, upgrades (in-place and side-by-side), performance tuning, backup/restore strategies, high availability, and security hardening, both on Azure VMs and PaaS SQL offerings. Experienced in migrating databases across environments using native tools, scripting, and automation workflows. Combines deep cloud expertise with solid development and scripting skills (PowerShell) to enable automation, integration, and operational excellence. Adept at collaborating with cross-functional teams, mentoring junior engineers, and aligning technical solutions with evolving business goals.

Key Accountabilities
- Design and manage scalable, secure, and highly available Azure infrastructure environments.
- Implement and maintain Azure IaaS resources such as Virtual Machines, NSGs, Load Balancers, VNETs, VPN Gateways, and ExpressRoute.
- Perform cost optimization, monitoring, backup/recovery, patching, and capacity planning.
- Implement governance using Azure Policies, RBAC, and Management Groups.
- Design and configure Azure PaaS services such as Azure App Services, Azure SQL, Azure Web Apps, Azure Functions, Storage Accounts, Key Vault, and Logic Apps. Ensure high availability and DR strategies for PaaS components.
- Design multi-tier, cloud-native application architectures on Azure. Troubleshoot PaaS performance and availability issues.
- Integrate Azure OpenAI capabilities into applications and business workflows. Develop use cases such as chatbot assistants, intelligent search, summarization, document Q&A, etc.
- Leverage Copilot Studio to build and deploy enterprise AI copilots integrated with data sources. Ensure responsible AI and compliance alignment.
- Design and manage data pipelines and orchestrations in ADF for ETL/ELT processes. Integrate with Azure Data Lake, Azure SQL, Blob Storage, and on-prem data sources.
- Build and manage CI/CD pipelines using Azure DevOps. Automate infrastructure deployment using ARM templates. Configure and manage release gates, approvals, secrets, and environments.
- Implement Infrastructure as Code (IaC) and GitOps best practices.
- Implement identity management using Azure AD, MFA, and Conditional Access. Manage secrets using Azure Key Vault and secure access via Managed Identities.
- Develop reusable and parameterized ARM template modules for consistent deployments. Maintain template versioning using Git repositories. Use templates in Azure DevOps pipelines and automate deployment validations. Align templates with security and compliance baselines (e.g., Azure Landing Zones).
- Collaborate with architects, developers, data engineers, and security teams to design solutions. Lead technical discussions and present solutions to stakeholders. Mentor junior engineers and conduct code reviews. Stay updated with the Azure roadmap and provide guidance on service adoption.

Essential Education
Bachelor's (or higher) degree from a recognized institute of higher learning, ideally focused in Computer Science, MIS/IT, or other STEM-related subjects.

Essential Experience And Job Requirements
Technical capability:
Primary Skills: Azure IaaS, PaaS & Core Services; Azure OpenAI / Copilot Studio; SQL Server & Azure Data Factory (ADF)
Secondary Skills: Security & Governance; Monitoring & Observability; DevOps & CI/CD
Business capability: Service Delivery & Management; Domain expertise - Legal and Ethics & Compliance

Leadership and EQ:
For those in team leadership positions (whether activity or line management):
- Always getting the basics right, from quality development conversations to recognition and ongoing performance feedback.
- Has the ability to develop, coach, mentor and inspire others.
- Ensures team compliance with BP's Code of Conduct and demonstrates strong leadership of BP's Leadership Expectations and Values & Behaviours.
- Creates an environment where people are listening and speak openly about the good, the bad, and the ugly, so that everyone can understand and learn.

All role holders:
- Embraces a culture of change and agility, evolving continuously, adapting to our changing world.
- Effective team player; looks beyond own area/organisational boundaries to consider the bigger picture and/or perspective of others.
- Is self-aware and actively seeks input from others on impact and effectiveness.
- Applies judgment and common sense; able to use insight and good judgement to enable commercially sound, efficient and pragmatic decisions and solutions and to respond to situations as they arise.
- Ensures personal compliance with BP's Code of Conduct and demonstrates strong leadership of BP's Leadership Expectations and Values & Behaviours.
- Cultural fluency; actively seeks to understand cultural differences and sensitivities.

Travel Requirement
No travel is expected with this role.

Relocation Assistance: This role is not eligible for relocation.

Remote Type: This position is a hybrid of office/remote working.

Skills: Agility core practices, Analytics, API and platform design, Business Analysis, Cloud Platforms, Coaching, Communication, Configuration management and release, Continuous deployment and release, Data Structures and Algorithms (Inactive), Digital Project Management, Documentation and knowledge sharing, Facilitation, Information Security, iOS and Android development, Mentoring, Metrics definition and instrumentation, NoSql data modelling, Relational Data Modelling, Risk Management, Scripting, Service operations and resiliency, Software Design and Development, Source control and code management {+ 4 more}

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status.
Individuals with an accessibility need may request an adjustment/accommodation related to bp’s recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.
Posted 2 weeks ago
50.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Lead Fullstack Developer

What Makes Us, Us
Join some of the most innovative thinkers in FinTech as we lead the evolution of financial technology. If you are an innovative, curious, collaborative person who embraces challenges and wants to grow, learn and pursue outcomes with our prestigious financial clients, say Hello to SimCorp! At its foundation, SimCorp is guided by our values: caring, customer success-driven, collaborative, curious, and courageous. Our people-centered organization focuses on skills development, relationship building, and client success. We take pride in cultivating an environment where all team members can grow, feel heard, valued, and empowered. If you like what we’re saying, keep reading!

WHY THIS ROLE IS IMPORTANT TO US
This role is part of a team building an advanced, web-based enterprise software system. The focus is on developing rich user interfaces using Angular and Node.js, backed by robust C#/.NET services. The team operates in an Agile environment and is involved in projects that demand high performance, scalability, and integration with cloud-native services on Azure.

Why This Role Matters
This position is critical to delivering high-impact solutions that enhance operational efficiency and risk assessment accuracy, especially in capital markets contexts. The software developed supports key business functions and is central to the company’s digital transformation and customer experience strategy.

What Success Looks Like
- Delivering clean, scalable, and maintainable code using modern frameworks and DevOps practices.
- Collaborating effectively with cross-functional teams and stakeholders.
- Thoughtfully identifying and resolving technical challenges.
- Contributing to a culture of continuous improvement and innovation.
- Demonstrating ownership and flexibility in a challenging Agile environment.

What You Will Be Responsible For
- Design and development of a high-quality, web-based enterprise software system; the focus of development will be towards building rich user interfaces and backend integration.
- Understanding business requirements and designing solutions which meet those requirements.
- Production of technical design specifications, unit test plan documentation and execution.
- Providing technical support (when necessary) to Production and UAT systems.
- Understanding how our applications operate and are structured/integrated/deployed.
- Effectively communicating with all stakeholders.
- Being a good team player, willing to work as part of a team as well as an individual contributor.

What We Value
Most importantly, you can see yourself contributing and thriving in the position described above. How you gained the skills needed for doing that is less important. We expect you to be good at several of the following and be able to - and interested in - learning the rest.
- Practical experience with Angular/Node.js.
- Working experience with JavaScript/TypeScript, HTML5 and CSS3.
- Develop, test, and maintain high-quality software applications using C# and the .NET framework.
- Apply object-oriented programming (OOP) principles to design and architect robust software solutions.
- Utilize various design and architectural patterns to produce scalable and maintainable code.
- Implement automated testing platforms and write unit tests to ensure the reliability and performance of the software.
- Collaborate with the development team to integrate code versioning tools such as Git into the workflow.
- Participate in continuous integration and continuous deployment (CI/CD) processes to ensure smooth and efficient software delivery.
- Work within an Azure environment to develop cloud-based applications and services.
- Debug and resolve technical issues, providing timely and effective solutions.
- Utilize common DevOps tools and technologies, such as PowerShell, ARM, Kubernetes, Helm, and Terraform.
- Utilize problem-solving techniques to quickly resolve issues, ensuring a positive customer experience.
- Stay up-to-date with the latest industry trends and best practices in software development.

Must Have Skills
- UI: Angular, Node.js, TypeScript, HTML
- C#

Desired Skills
- Ag-Grid, GraphQL
- Unit testing knowledge
- Exposure to the Agile way of working
- Capital markets domain knowledge

Soft Skills
- Good written and verbal communication skills
- Ability to adapt and learn

Benefits
Competitive salary, bonus scheme, and pension are essential for any work agreement. However, at SimCorp, we believe we can offer more. Therefore, in addition to the traditional benefit scheme, we provide an excellent work-life balance: flexible work hours and a hybrid workplace model. On top of that, we have IP sprints, where you have 3 weeks per quarter to spend on mastering your skills as well as contributing to the company's development. There is never just one route - we practice an individual approach to professional development to support the direction you want to take.

NEXT STEPS
Please send us your application in English via our career site as soon as possible; we process incoming applications continually. Please note that only applications sent through our system will be processed. At SimCorp, we recognize that bias can unintentionally occur in the recruitment process. To uphold fairness and equal opportunities for all applicants, we kindly ask you to exclude personal data such as photo, age, or any non-professional information from your application. Thank you for aiding us in our endeavor to mitigate biases in our recruitment process. If you are interested in being a part of SimCorp but are not sure this role is suitable, submit your CV anyway. SimCorp is on an exciting growth journey, and our Talent Acquisition Team is ready to assist you in discovering the right role for you. The approximate time to consider your CV is three weeks. We are eager to continually improve our talent acquisition process and make everyone’s experience positive and valuable. Therefore, during the process we will ask you to provide your feedback, which is highly appreciated.

Who We Are
For over 50 years, we have worked closely with investment and asset managers to become the world’s leading provider of integrated investment management solutions. We are 3,000+ colleagues with a broad range of nationalities, educations, professional experiences, ages, and backgrounds in general. SimCorp is an independent subsidiary of the Deutsche Börse Group. Following the recent merger with Axioma, we leverage the combined strength of our brands to provide an industry-leading, full, front-to-back offering for our clients. SimCorp is an equal-opportunity employer. We are committed to building a culture where diverse perspectives and expertise are integrated into our everyday work. We believe in the continual growth and development of our employees, so that we can provide best-in-class solutions to our clients.
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Greetings! HCLTech is hiring ServiceNow developers. Please find the JD below:

Job Summary:
We are seeking a highly skilled and forward-thinking Senior Software Developer to join our growing team. The successful candidate will have a strong foundation in ServiceNow development, JavaScript, DevOps, and UI/UX design, with added expertise in AI integration, unit testing, and performance optimization. This role includes leading design initiatives, writing clean, efficient, and scalable code, conducting code reviews, and contributing to automation and AI-powered solutions.

Key Responsibilities:
- Develop and extend ServiceNow applications, including workflows, business rules, UI policies, and integrations.
- Lead the design and front-end development of intuitive and user-friendly interfaces using modern frameworks.
- Write efficient, performant, and secure code following industry best practices.
- Implement comprehensive unit tests and test automation to ensure code quality and reduce regression issues.
- Apply DevOps practices to streamline development pipelines using tools like Jenkins, Git, Docker, and CI/CD workflows.
- Conduct peer code reviews, mentor junior developers, and uphold high standards for software design and implementation.
- Collaborate with product, design, and business teams to translate requirements into technical solutions.
- Integrate and help build AI/ML-powered features, such as intelligent automation, recommendation engines, or conversational interfaces.
- Continuously monitor, analyze, and improve application performance and reliability.
- Stay current with technology trends and contribute to innovation and knowledge sharing.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 5+ years of professional experience in software development, including 2+ years of hands-on ServiceNow development.
- Strong proficiency in JavaScript and front-end frameworks (React, Vue, or similar).
- Experience building and maintaining unit tests using Jest, Mocha, Jasmine, or similar.
- Demonstrated ability to write performant, optimized code.
- Understanding of UI/UX design principles and experience creating accessible and responsive interfaces.
- Familiarity with DevOps tools and practices (CI/CD, version control, containerization).
- Involvement in AI/ML projects or integrating AI APIs/services into applications.
- Proven ability to lead system and application design discussions, enforce code quality, and perform structured code reviews.
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication skills.

Preferred Qualifications:
- ServiceNow certifications (CSA, CAD, or other relevant modules).
- Experience with REST/SOAP APIs and enterprise integrations.
- Knowledge of AI tools such as OpenAI, TensorFlow, or Azure Cognitive Services.
- Agile/Scrum experience in a cross-functional team environment.

REQUIRED QUALIFICATIONS:

Skills:
- Must have hands-on experience with production deployment and post-production support.
- Must have strong experience with various middleware integration technologies, adapters, and queueing.
- Working experience with microservices.
- Knowledge of various integration concepts, including Business-to-Business (B2B), platform-to-platform, and EDI trading partner integration development.
- Good exposure to GitHub, Subversion or other versioning tools.
- Must have experience in Java and Spring.
- Experience with a large ESB implementation on any platform would be an added advantage.
- Good understanding of data formats like XML, JSON, EDI, CSV, and NVP (see the illustrative sketch below).
- Good understanding of integration technologies like HTTP, XML/XSLT, JMS, JDBC, REST, SOAP, web services and APIs.
- Must have strong knowledge of various middleware integration strategies.
- Strong analytical and problem-solving skills with excellent verbal and written communication are mandatory.
- Strong organizational skills with the ability to multi-task, prioritize and execute on assigned deliverables.
- Ability to work effectively with minimal supervision and guidance.
- Good exposure to web service/API security.
- In-depth knowledge of application code registration procedures.
- Good working knowledge of Unix/Linux shell scripting.
- Ability to identify system impact for small- and large-scale initiatives.
- Ability to interact effectively at all levels with sensitivity to cultural diversity.

Experience:
- 5+ years of experience in the IT/Technology industry.
- 4+ years of experience in web services/interfaces development, design and architecture.
- 3+ years of experience with databases.
- 3+ years of experience in Java/J2EE development.
- 3 years of development experience in B2B.
- 3+ years of experience with middleware code migrations.
- Experience with change management tools and processes, including source code control, versioning, branching, defect tracking and release management.
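To illustrate the data-format side of this integration work: a middleware flow often receives an XML payload from one partner and hands a JSON representation to a downstream REST API. The role itself is Java/Spring-centric; the sketch below is a compact Python illustration of the same transformation, with an invented order payload.

```python
# Compact illustration (in Python; the role itself is Java/Spring-centric) of the
# XML-to-JSON payload transformation common in B2B integration flows.
# The order payload is invented for illustration.
import json
import xml.etree.ElementTree as ET

xml_payload = """
<Order id="PO-1001">
    <Partner>ACME-TRADING</Partner>
    <Lines>
        <Line sku="WIDGET-1" qty="10"/>
        <Line sku="WIDGET-2" qty="4"/>
    </Lines>
</Order>
"""

root = ET.fromstring(xml_payload)
order = {
    "orderId": root.attrib["id"],
    "partner": root.findtext("Partner"),
    "lines": [
        {"sku": line.attrib["sku"], "qty": int(line.attrib["qty"])}
        for line in root.findall("./Lines/Line")
    ],
}

print(json.dumps(order, indent=2))  # JSON ready to post to a downstream REST API
```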
Posted 2 weeks ago
6.0 - 11.0 years
5 - 10 Lacs
Hyderabad
Remote
Title: Senior Engineer
Experience: 6+ years
Job mode: Remote
Job type: C2C
Budget: 5 to 10 LPA

Job Description

Key Responsibilities:
- Work effectively as part of a project team alongside the Product Owner, Scrum Master and other team members.
- Prioritise a deep understanding of the importance and principles of engineering excellence and demonstrate this knowledge in your work.
- Write clean code in line with the team's agreed standards. Look for ways to improve your team's coding standards.
- Own, scope and deliver well-defined deliverables or stories. Communicate and update your progress regularly at stand-ups or similar agile events and ceremonies.
- Deliver and maintain software products conforming to the agreed specifications and enterprise quality standards and guardrails.
- Support, monitor, and maintain production-grade systems, including utilising observability tooling and issue remediation.
- Collaborate closely and cooperatively with your technical and non-technical teams to work towards the best solution that maximises value to the customer.
- Contribute to a culture of code quality and implement automated, unit and integration testing as part of the software development lifecycle.
- Apply good security processes such as threat modelling to the code you develop.
- Grow your knowledge of architecture, modern engineering principles and design patterns.
- Implement your team's approach to delivering high-quality, tested code. Maintain and improve CI/CD pipelines.
- Play a role in code reviews and actively review pull requests from other team members.
- Produce software technical specifications and other documentation as required for development solutions.
- Maintain good working relationships with colleagues, vendors and customers of the department.

Skills and Experience (required):
- Experience in building API products and API management (e.g. Apigee), including API versioning, documentation, and developer onboarding.
- Experience in AWS serverless solution design and event-driven integration patterns (see the illustrative sketch below).
- Experience working in the development of AWS cloud-native solutions.
- Experience working with DevOps tools such as Jenkins, Bamboo, Git, or similar, for deployment purposes.
- Experience with various database paradigms, including SQL and NoSQL.
- Solid understanding of security protocols and standards.
- Experience with backend/compute languages delivering business value, such as TypeScript.
- Experience in automated testing principles.
- Deep understanding of the importance and principles of engineering excellence and demonstrating this knowledge in your work.
- Experience of feature or function design and delivery as part of an agile software development team (Scrum, Kanban, XP, etc.).
- Experience of working with Product Owners, customers, end-users, or stakeholders in the delivery of software, solutions, or products.

Skills and Experience (desirable):
- Experience in integration design, development and delivery.
- Experience in Infrastructure as Code (AWS CDK ideally, Terraform, etc.).
- Experience in supporting, monitoring, and maintaining production-grade systems, including investigation via observability tooling, e.g. Splunk, Datadog, AWS tooling.
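As a minimal sketch of the event-driven serverless pattern the posting describes: an AWS Lambda handler consumes records delivered by an SQS trigger and acknowledges them once processed. The event shape and field names are illustrative assumptions, and the sketch is in Python purely for illustration even though the team's backend language is described as TypeScript.

```python
# Minimal sketch of an event-driven AWS Lambda handler behind an SQS trigger.
# The event shape and field names are illustrative assumptions.
import json

def lambda_handler(event, context):
    """Process order events delivered by an SQS trigger."""
    processed = []
    for record in event.get("Records", []):
        body = json.loads(record["body"])
        order_id = body["orderId"]
        # A real handler would enrich/persist the order here (DynamoDB, downstream API, ...).
        processed.append(order_id)

    return {
        "statusCode": 200,
        "body": json.dumps({"processed": processed}),
    }
```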
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: DevOps/MLOps Expert
Location: Gurugram (On-Site)
Employment Type: Full-Time
Experience: 6+ years
Qualification: B.Tech CSE

About the Role
We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure, deployment pipelines, and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert, you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You’ll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements.

Key Responsibilities

MLOps & Model Deployment
• Design, implement, and maintain end-to-end ML pipelines from model development to production deployment (see the pipeline sketch below)
• Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions
• Implement model versioning, experiment tracking, and model registry systems
• Monitor model performance, detect drift, and implement automated retraining pipelines
• Manage feature stores and data pipelines for real-time and batch inference
• Build scalable ML infrastructure for high-volume data processing and analytics

Enterprise Cloud Infrastructure & DevOps
• Architect and manage cloud-native infrastructure with a focus on scalability, security, and compliance
• Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi
• Design and maintain Kubernetes clusters for containerized ML workloads
• Build and optimize Docker containers for ML applications and microservices
• Implement comprehensive monitoring, logging, and alerting systems
• Manage secrets, security, and enterprise compliance requirements

Data Engineering & Real-time Processing
• Build and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools
• Implement real-time data processing and streaming architectures
• Design data storage solutions for structured and unstructured data at scale
• Implement data validation, quality checks, and lineage tracking
• Manage data security, privacy, and enterprise compliance requirements
• Optimize data processing for performance and cost efficiency

Enterprise Platform Operations
• Ensure high availability (99.9%+) and performance of enterprise-grade platforms
• Implement auto-scaling solutions for variable ML workloads
• Manage multi-tenant architecture and data isolation
• Optimize resource utilization and cost management across environments
• Implement disaster recovery and backup strategies
• Build 24x7 monitoring and alerting systems for mission-critical applications

Required Qualifications

Experience & Education
• 4-8 years of experience in DevOps/MLOps with at least 2+ years focused on enterprise ML systems
• Bachelor’s/Master’s degree in Computer Science, Engineering, or a related technical field
• Proven experience with enterprise-grade platforms or large-scale SaaS applications
• Experience with high-compliance environments and enterprise security requirements
• Strong background in data-intensive applications and real-time processing systems

Technical Skills

Core MLOps Technologies
• ML Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost
• MLOps Tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases
• Model Serving: TensorFlow Serving, PyTorch TorchServe, Seldon Core, KFServing
• Experiment Tracking: MLflow, Neptune.ai, Weights & Biases, Comet

DevOps & Cloud Technologies
• Cloud Platforms: AWS, Azure, or GCP with relevant certifications
• Containerization: Docker, Kubernetes (CKA/CKAD preferred)
• CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI
• IaC: Terraform, CloudFormation, Pulumi, Ansible
• Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic

Programming & Scripting
• Python (advanced) - primary language for ML operations and automation
• Bash/shell scripting for automation and system administration
• YAML/JSON for configuration management and APIs
• SQL for data operations and analytics
• Basic understanding of Go or Java (advantage)

Data Technologies
• Data Pipeline Tools: Apache Airflow, Prefect, Dagster, Apache NiFi
• Streaming & Real-time: Apache Kafka, Apache Spark, Apache Flink, Redis
• Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse
• Data Warehousing: Snowflake, BigQuery, Redshift, Databricks
• Data Versioning: DVC, LakeFS, Pachyderm

Preferred Qualifications

Advanced Technical Skills
• Enterprise Security: Experience with enterprise security frameworks and compliance (SOC 2, ISO 27001)
• High-scale Processing: Experience with petabyte-scale data processing and real-time analytics
• Performance Optimization: Advanced system optimization, distributed computing, caching strategies
• API Development: REST/GraphQL APIs, microservices architecture, API gateways

Enterprise & Domain Experience
• Previous experience with enterprise clients or B2B SaaS platforms
• Experience with compliance-heavy industries (finance, healthcare, government)
• Understanding of data privacy regulations (GDPR, SOX, HIPAA)
• Experience with multi-tenant enterprise architectures

Leadership & Collaboration
• Experience mentoring junior engineers and technical team leadership
• Strong collaboration with data science teams, product managers, and enterprise clients
• Experience with agile methodologies and enterprise project management
• Understanding of business metrics, SLAs, and enterprise ROI

Growth Opportunities
• Career Path: Clear progression to Lead DevOps Engineer or Head of Infrastructure
• Technical Growth: Work with cutting-edge enterprise AI/ML technologies
• Leadership: Opportunity to build and lead the DevOps/Infrastructure team
• Industry Exposure: Work with Government & MNC enterprise clients and cutting-edge technology stacks

Success Metrics & KPIs

Technical KPIs
• System Uptime: Maintain 99.9%+ availability for enterprise clients
• Deployment Frequency: Enable daily deployments with zero downtime
• Performance: Ensure optimal response times and system performance
• Cost Optimization: Achieve 20-30% annual infrastructure cost reduction
• Security: Zero security incidents and full compliance adherence

Business Impact
• Time to Market: Reduce deployment cycles and improve development velocity
• Client Satisfaction: Maintain 95%+ enterprise client satisfaction scores
• Team Productivity: Improve engineering team efficiency by 40%+
• Scalability: Support rapid client base growth without infrastructure constraints

Why Join Us
Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. Competitive compensation, flexible working environment, and continuous learning opportunities.
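For a flavour of the pipeline and retraining work listed above, here is a minimal Airflow 2.x-style DAG: a daily feature-build task followed by a retraining task. The DAG name, schedule, and task bodies are illustrative assumptions, not this team's actual pipeline.

```python
# Minimal sketch of an Airflow DAG for a daily feature-build and retraining flow.
# DAG name, schedule, and task bodies are illustrative assumptions (Airflow 2.x style).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    print("pull raw events and rebuild the feature table")

def retrain_model():
    print("retrain the model and register a new version in the registry")

with DAG(
    dag_id="ml_feature_and_retraining_pipeline",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    retrain = PythonOperator(task_id="retrain_model", python_callable=retrain_model)

    extract >> retrain  # retraining only runs after the feature build succeeds
```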
How to Apply: Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs’ success. Send your application to hr@aaizeltech.com, bhavik@aaizeltech.com, or anju@aaizeltech.com.
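The MLOps posting above asks for experiment tracking, model versioning, and a model registry built on tools such as MLflow. Below is a minimal, hedged sketch of that workflow; the experiment name, metric, and model are illustrative assumptions, not the company's actual pipeline, and it presumes an MLflow tracking backend is already configured.

```python
# Hedged sketch: logging a training run and registering the model with MLflow.
# Names ("churn-classifier", etc.) are placeholders, not the employer's real assets.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("churn-model-training")  # hypothetical experiment name

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run() as run:
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)      # experiment tracking
    mlflow.log_metric("test_accuracy", acc)    # baseline metric for later drift comparison
    mlflow.sklearn.log_model(model, "model")

    # Model versioning: each call creates a new version in the registry.
    mlflow.register_model(
        model_uri=f"runs:/{run.info.run_id}/model",
        name="churn-classifier",
    )
```

A CI/CD job could then promote a registered version toward production and trigger retraining when logged metrics fall below a threshold, which is the automated-retraining pattern the posting describes.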
Posted 2 weeks ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Type Full-time Description Why Birdeye? Birdeye is the highest-rated reputation, social media, and customer experience platform for local businesses and brands. Over 150,000 businesses use Birdeye’s AI-powered platform to effortlessly manage online reputation, connect with prospects through social media and digital channels, and gain customer experience insights to grow sales and thrive. At Birdeye, innovation isn't just a goal – it's our driving force. Our commitment to pushing boundaries and redefining industry standards has earned us accolades as one of the foremost providers of AI, Reputation Management, Social Media, and Customer Experience software by G2. Founded in 2012 and headquartered in Palo Alto, Birdeye is led by a team of industry experts and innovators from Google, Amazon, Salesforce, and Yahoo. Birdeye is backed by the who’s who of Silicon Valley - Salesforce founder Marc Benioff, Yahoo co-founder Jerry Yang, Trinity Ventures, World Innovation Lab, and Accel-KKR. Responsibilities Develop new user-facing features Gain feedback continuously from users, customers, and colleagues Write reusable code and libraries (with matching documentation) to a standard that makes it quick and easy to maintain the code in the future Optimize applications for maximum speed and scalability Collaborate with other team members and stakeholders Requirements Hands-on experience in JavaScript, HTML, and CSS. Hands-on experience in React JS & Redux, Webpack, and ES6. Good understanding of asynchronous request handling, partial page updates, and AJAX Knowledge of Node.js Proficient understanding of code versioning tools 4+ years of total experience Why You'll Join Us: At Birdeye, we are relentless innovators driven by a singular goal: to lead our category with unparalleled excellence. We don't just set goals – we surpass them. We're a team of doers who roll up our sleeves and get the job done, delivering on our promises with unwavering dedication. Working here means embracing a culture of action and accountability, where every person is empowered to make an impact. We don't just talk about making a difference – we make it happen.
Posted 2 weeks ago
2.0 - 5.0 years
3 - 8 Lacs
Mohali
On-site
The Role- As an AI Engineer , you will be responsible for building and optimizing AI-first solutions that power BotPenguin’s conversational and Agentic capabilities. You will work on LLM integrations, NLP pipelines, and machine learning models, while collaborating with cross-functional teams to deliver intelligent experiences at scale. This is a high-impact role that combines engineering, research, and deployment skills to solve real-world problems using artificial intelligence. What you need for this role- Education: Bachelor's or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related discipline. Experience: 2–5 years of experience working in AI/ML or related software engineering roles. Technical Skills: Strong proficiency in Python and libraries such as scikit-learn, PyTorch, TensorFlow, Transformers (Hugging Face). Hands-on experience with LLMs (OpenAI, Claude, LLaMA) and building AI agents using API integrations. Experience working with NLP tasks (intent classification, text generation, embeddings, summarization). Familiarity with Vector Databases like Pinecone, FAISS, Elastic Vector DB. Understanding of Prompt Engineering, RAG (Retrieval-Augmented Generation), and embedding generation. Proficiency in building and deploying ML models via Docker/Kubernetes or cloud services like AWS/GCP. Experience with version control systems (GitLab/GitHub) and working in Agile teams. Soft Skills: Strong analytical thinking and problem-solving capabilities. Passion for research, innovation, and applying AI to real-world use-cases. Excellent communication skills and the ability to collaborate across departments. Attention to detail with a focus on model accuracy, explainability, and performance. What you will be doing- Design, build, and optimize AI-powered chatbot features and virtual agents using state-of-the-art models. Collaborate with the Product, Backend, and UI teams to integrate intelligent workflows into the BotPenguin platform. Build, evaluate, and fine-tune language models and NLP components tailored to user use-cases. Implement context-aware chat solutions using embeddings, vector stores, and retrieval mechanisms. Create internal tools for prompt testing, versioning, and debugging AI responses. Monitor model performance metrics such as latency, hallucination rate, and user satisfaction. Explore research papers, open-source innovations, and contribute to rapid experimentation. Write clean, modular, and testable code along with clear documentation for future scalability. Any other development related tasks as required for BotPenguin. Guiding, reviewing the code written by junior members in the team. Top reasons to work with us- Be part of a cutting-edge AI startup driving innovation in chatbot automation. Work with a passionate and talented team that values knowledge-sharing and problem-solving. Growth-oriented environment with ample learning opportunities. Exposure to top-tier global clients and projects with real-world impact. Flexible work hours and an emphasis on work-life balance. A culture that fosters creativity, ownership, and collaboration. Job Type: Full-time Pay: ₹300,000.00 - ₹800,000.00 per year Benefits: Flexible schedule Health insurance Provident Fund Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required) Experience: AI: 2 years (Required) Work Location: In person
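The BotPenguin role above centres on RAG: generating embeddings, storing them in a vector database such as FAISS, and retrieving context to ground an LLM prompt. Below is a hedged sketch of the retrieval step only; the embedding model name and sample documents are assumptions for illustration, and the final LLM call is left as a placeholder.

```python
# Hedged sketch of the retrieval half of a RAG pipeline using sentence-transformers + FAISS.
# "all-MiniLM-L6-v2" and the sample documents are illustrative assumptions.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "BotPenguin lets you build chatbots for WhatsApp and websites.",
    "Refunds are processed within 5-7 business days.",
    "You can export chat transcripts from the analytics dashboard.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(documents, convert_to_numpy=True).astype("float32")
faiss.normalize_L2(doc_vecs)                  # normalise so inner product equals cosine similarity

index = faiss.IndexFlatIP(doc_vecs.shape[1])  # exact inner-product index
index.add(doc_vecs)

query = "How long do refunds take?"
q_vec = model.encode([query], convert_to_numpy=True).astype("float32")
faiss.normalize_L2(q_vec)

scores, ids = index.search(q_vec, 2)          # retrieve the top-2 most similar chunks
context = "\n".join(documents[i] for i in ids[0])

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# The prompt would then be sent to an LLM API (OpenAI, Claude, etc.); omitted here.
print(prompt)
```

In production the in-memory index would typically be replaced by a managed store such as Pinecone or an Elastic vector index, with the same embed-retrieve-prompt flow.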
Posted 2 weeks ago
5.0 - 10.0 years
5 - 7 Lacs
Thiruvananthapuram
On-site
5 - 10 Years 1 Opening Trivandrum Role description Experience: 5 to 9 years Location: Any UST location Job Type: Full-Time Mandatory Skills: RESTful APIs: Strong experience designing, developing, and consuming RESTful web services. Proficiency in API security, versioning, and documentation (e.g., Swagger/OpenAPI). Data Privacy & Compliance Tools: Hands-on knowledge of tools and practices for ensuring GDPR, CCPA, HIPAA, or other relevant compliance. Familiarity with data classification, encryption standards, and audit frameworks. Cloud Platforms: Microsoft Azure: Solid understanding of Azure services including compute, storage, networking, and security. Alibaba Cloud: Experience with deployment and management of services on Alibaba Cloud. Robotic Process Automation (RPA) Tools: Proficiency with RPA tools such as UiPath, Automation Anywhere, or Blue Prism. Experience in designing, developing, and deploying bots for automating business processes. SQL & Data Handling: Strong command of SQL (queries, joins, stored procedures). Familiarity with data warehousing and ETL tools is a plus. Key Responsibilities: Design and implement scalable backend services and APIs. Ensure application architecture aligns with data privacy and compliance standards. Work on cross-cloud deployments and integrations (Azure & Alibaba Cloud). Develop and manage automation workflows using RPA tools. Collaborate with cross-functional teams to ensure quality and timely delivery. Skills: RESTful APIs, Data Privacy & Compliance Tools, Azure & Alibaba Cloud, Robotic Process Automation Tools About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
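The UST posting above stresses designing RESTful services with versioning and Swagger/OpenAPI documentation. A minimal, hedged sketch using FastAPI (one common Python option, not necessarily the stack used on this engagement) is shown below; the resource name, fields, and version path are illustrative.

```python
# Hedged sketch: a versioned REST endpoint whose OpenAPI/Swagger docs are generated
# automatically by FastAPI at /docs. Resource and field names are placeholders.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Customer Records API", version="1.0.0")


class Customer(BaseModel):
    id: int
    name: str
    email: str


# In-memory stand-in for a real database.
CUSTOMERS = {1: Customer(id=1, name="Asha", email="asha@example.com")}


@app.get("/api/v1/customers/{customer_id}", response_model=Customer)
def get_customer(customer_id: int) -> Customer:
    """Return a single customer or a 404; the path is versioned under /api/v1."""
    customer = CUSTOMERS.get(customer_id)
    if customer is None:
        raise HTTPException(status_code=404, detail="Customer not found")
    return customer


@app.post("/api/v1/customers", response_model=Customer, status_code=201)
def create_customer(customer: Customer) -> Customer:
    """Create or overwrite a customer record."""
    CUSTOMERS[customer.id] = customer
    return customer

# Run locally with: uvicorn main:app --reload  (then open /docs for the Swagger UI)
```

Keeping the version in the path (/api/v1/...) is one simple versioning choice; header- or media-type-based versioning are equally valid alternatives depending on the client contract.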
Posted 2 weeks ago
7.0 - 12.0 years
22 - 25 Lacs
India
On-site
TECHNICAL ARCHITECT Key Responsibilities 1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process. 2. Communicating: Communicate system requirements to software development teams, explain plans to developers and designers, and communicate the value of a solution to stakeholders and clients. 3. Managing Stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations. 4. Architectural Oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows. 5. Problem Solving: Identify and troubleshoot technical problems in existing or new systems, and assist with resolving them when they arise. 6. Ensuring Quality: Ensure that systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals. 7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams. 8. Tool & Framework Expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web application development frameworks, and DevOps practices. 9. Continuous Improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement. Technical Skills 1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models. 2. Knowledge or experience working with self-hosted or managed LLMs. 3. Knowledge or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers) and familiarity with Computer Vision frameworks like OpenCV and related libraries for image processing and object recognition. 4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express, etc.) and building RESTful and GraphQL APIs. 5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability. 6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux). 7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached). 8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing. 9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets. 10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights. 11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform). 12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX). 13.
Knowledge or experience in CI/CD, IaC and Cloud Native toolchains. 14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication. 15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management. Experience Required Technical Architect with 7 - 12 years of experience Salary 22-25 LPA Job Types: Full-time, Permanent Pay: ₹2,200,000.00 - ₹2,500,000.00 per year Work Location: In person
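Among the architecture skills listed above are the classic design patterns (Factory, Singleton, Observer). As a small hedged illustration, the Python sketch below applies the Observer pattern to a model-deployment event; all class and event names are invented for the example.

```python
# Hedged sketch of the Observer pattern: subscribers react to a deployment event.
# Class, function, and event names are invented for illustration only.
from typing import Callable, List


class DeploymentEvents:
    """Subject: keeps a list of observers and notifies them when a model is deployed."""

    def __init__(self) -> None:
        self._observers: List[Callable[[str, str], None]] = []

    def subscribe(self, observer: Callable[[str, str], None]) -> None:
        self._observers.append(observer)

    def notify(self, model_name: str, version: str) -> None:
        for observer in self._observers:
            observer(model_name, version)


def alert_on_slack(model_name: str, version: str) -> None:
    print(f"[slack] {model_name} v{version} deployed")        # placeholder for a webhook call


def refresh_dashboard(model_name: str, version: str) -> None:
    print(f"[dashboard] now tracking {model_name} v{version}")


events = DeploymentEvents()
events.subscribe(alert_on_slack)
events.subscribe(refresh_dashboard)
events.notify("fraud-scorer", "2.3.1")  # both observers run without the subject knowing their details
```

The subject never hard-codes who is listening, which is what makes the pattern useful when monitoring, alerting, and dashboards need to evolve independently of the deployment code.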
Posted 2 weeks ago
10.0 - 15.0 years
5 - 10 Lacs
Gurgaon
On-site
Senior Manager (EXL/SM/1425510) | Services | Gurgaon
Posted On: 17 Jul 2025
End Date: 31 Aug 2025
Required Experience: 10 - 15 Years
Number Of Positions: 2
Band: C2
Band Name: Senior Manager
Cost Code: 0000000
Campus/Non Campus: NON CAMPUS
Employment Type: Permanent
Requisition Type: New
Max CTC: 1500000.0000 - 2500000.0000
Complexity Level: Not Applicable
Work Type: Hybrid – Working Partly From Home And Partly From Office
Organisational Group: Analytics
Sub Group: Analytics - UK & Europe
Organization: Services
LOB: Analytics - UK & Europe
SBU: Analytics
Country: India
City: Gurgaon
Center: EXL - Gurgaon Center 38
Skill: ARTIFICIAL INTELLIGENCE
Minimum Qualification: B.COM
Certification: No data available
Job Description (Band C1/C2): The Feature Engineers will be focused on: designing and implementing production-grade features from enterprise-scale data; working closely with product teams and data scientists to translate hypotheses into reusable features; building, testing, and versioning features with appropriate lineage and documentation; supporting reuse across teams, with an emphasis on stability and performance; and contributing to the evolution of the feature store and feature library architecture.
Workflow Type: Back Office
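The feature-engineering brief above (build, test, and version features with lineage) can be illustrated with a small pandas sketch. The transaction columns, feature name, and versioning scheme below are assumptions for the example, not EXL's actual feature store.

```python
# Hedged sketch: derive a reusable feature from raw transactions and record simple
# version/lineage metadata alongside it. Column and feature names are illustrative.
import hashlib
import json

import pandas as pd

transactions = pd.DataFrame(
    {
        "customer_id": [1, 1, 2, 2, 2],
        "amount": [120.0, 80.0, 20.0, 35.0, 15.0],
        "ts": pd.to_datetime(
            ["2025-07-01", "2025-07-15", "2025-07-02", "2025-07-20", "2025-07-28"]
        ),
    }
)

# The feature: trailing 30-day spend per customer as of a reference date.
reference_date = pd.Timestamp("2025-07-31")
window = transactions[transactions["ts"] >= reference_date - pd.Timedelta(days=30)]
feature = window.groupby("customer_id")["amount"].sum().rename("spend_30d").reset_index()

# Minimal lineage/versioning record: inputs, a hash of the logic, and the as-of date.
logic = "sum(amount) over trailing 30 days per customer_id"
metadata = {
    "feature_name": "spend_30d",
    "version": hashlib.sha256(logic.encode()).hexdigest()[:12],
    "source_columns": ["customer_id", "amount", "ts"],
    "reference_date": str(reference_date.date()),
}

feature.to_csv("spend_30d.csv", index=False)
with open("spend_30d.meta.json", "w") as fh:
    json.dump(metadata, fh, indent=2)
print(feature)
```

In a real feature store the metadata record would live in the store's registry rather than a sidecar JSON file, but the idea is the same: any consumer can see what the feature means, what it was built from, and which version they are using.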
Posted 2 weeks ago
1.0 - 3.0 years
2 - 7 Lacs
Gurgaon
On-site
Donaldson is committed to solving the world’s most complex filtration challenges. Together, we make cool things. As an established technology and innovation leader, we are continuously evolving to meet the filtration needs of our changing world. Join a culture of collaboration and innovation that matters and a chance to learn, effect change, and make meaningful contributions at work and in communities. We are seeking a skilled and motivated Data Engineer II to join the Corporate Technology Data Engineering Team. This role is important for developing and sustaining our data infrastructure, which supports a wide range of R&D, sensor-based, and modeling technologies. The Data Engineer II will design and maintain pipelines that enable the use of complex datasets. This position directly empowers faster decision making by building trustworthy data flows and access for engineers and scientists. Primary Role Responsibilities: Develop and maintain data ingestion and transformation pipelines across on-premise and cloud platforms. Develop scalable ETL/ELT pipelines that integrate data from a variety of sources (i.e. form-based entries, SQL databases, Snowflake, SharePoint). Collaborate with data scientists, data analysts, simulation engineers and IT personnel to deliver data engineering and predictive data analytics projects. Implement data quality checks, logging, and monitoring to ensure reliable operations. Follow and maintain data versioning, schema evolution, and governance controls and guidelines. Help administer Snowflake environments for cloud analytics. Work with more senior staff to improve solution architectures and automation. Stay updated with the latest data engineering technologies and trends. Participate in code reviews and knowledge sharing sessions. Participate in and plan new data projects that impact business and technical domains. Required Qualifications & Relevant Experience: Bachelor’s or master’s degree in computer science, data engineering, or related field. 1-3 years of experience in data engineering, ETL/ELT development, and/or backend software engineering. Demonstrated expertise in Python and SQL. Demonstrated experience working with data lakes and/or data warehouses (e.g. Snowflake, Databricks, or similar) Familiarity with source control and development practices (e.g Git, Azure DevOps) Strong problem-solving skills and eagerness to work with cross-functional globalized teams. Preferred Qualifications: Required qualification plus Working experience and knowledge of scientific and R&D workflows, including simulation data and LIMS systems. Demonstrated ability to balance operational support and longer-term project contributions. Experience with Java Strong communication and presentation skills. Motivated and self-driven learner Employment opportunities for positions in the United States may require use of information which is subject to the export control regulations of the United States. Hiring decisions for such positions are required by law to be made in compliance with these regulations. Applicants for employment opportunities in other countries must be able to meet the comparable export control requirements of that country and of the United States. Donaldson Company has been made aware that there are several recruiting scams that are targeting job seekers. These scams have attempted to solicit money for job applications and/or collect confidential information, Donaldson will never solicit money during the application or recruiting process. 
Donaldson only accepts online applications through our Careers | Donaldson Company, Inc. website and any communication from a Donaldson recruiter would be sent using a donaldson.com email address. If you have any questions about the legitimacy of an employment opportunity, please reach out to talentacquisition@donaldson.com to verify that the communication is from Donaldson. Our policy is to provide equal employment opportunities to all qualified persons without regard to race, gender, color, disability, national origin, age, religion, union affiliation, sexual orientation, veteran status, citizenship, gender identity and/or expression, or other status protected by law.
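The Donaldson role above revolves around ETL/ELT pipelines with data quality checks before data reaches the warehouse. Below is a hedged, minimal sketch of that pattern using pandas with SQLite standing in for Snowflake; the column names, sample values, and thresholds are assumptions.

```python
# Hedged sketch of an extract -> validate -> load step for an R&D data pipeline.
# SQLite stands in for the real warehouse (e.g. Snowflake); columns are illustrative.
import pandas as pd
from sqlalchemy import create_engine

# Extract: in a real pipeline this would be pd.read_csv(...), a SQL query, or a SharePoint pull.
df = pd.DataFrame(
    {
        "reading_id": [101, 102, 103],
        "pressure_kpa": [210.5, 198.2, 205.9],
        "measured_at": pd.to_datetime(["2025-07-01", "2025-07-02", "2025-07-03"]),
    }
)

# Validate: simple data quality checks that fail loudly instead of loading bad data.
errors = []
if df["reading_id"].duplicated().any():
    errors.append("duplicate reading_id values")
if df["pressure_kpa"].isna().mean() > 0.05:            # >5% missing is unacceptable here
    errors.append("too many missing pressure_kpa values")
if not df["pressure_kpa"].between(0, 1_000).all():     # plausible physical range (assumed)
    errors.append("pressure_kpa outside expected range")
if errors:
    raise ValueError("Data quality checks failed: " + "; ".join(errors))

# Transform: light cleanup before loading.
df["measured_date"] = df["measured_at"].dt.date

# Load into the warehouse stand-in.
engine = create_engine("sqlite:///rnd_data.db")
df.to_sql("sensor_readings_clean", engine, if_exists="append", index=False)
print(f"Loaded {len(df)} rows")
```

Failing the run on a quality check, rather than silently dropping rows, is what keeps downstream scientists' decisions trustworthy, which is the point the posting makes about reliable data flows.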
Posted 2 weeks ago
0.0 - 5.0 years
0 - 0 Lacs
Surat, Gujarat
On-site
Job Title: Senior Android Developer Location: Surat, Gujarat Company: Brainfleck Employment Type: Full-Time Experience Required: 4+ Years About the Role: We are seeking a highly skilled Senior Android Developer to join our growing team. The ideal candidate will be responsible for designing and building advanced applications for the Android platform. You should be passionate about pushing mobile technologies to the limits and working with our team of talented engineers to design and build the next generation of mobile applications. Key Responsibilities: Develop and maintain advanced Android applications. Collaborate with cross-functional teams to define, design, and ship new features. Work with outside data sources and APIs (REST, JSON, GraphQL). Unit-test code for robustness, including edge cases, usability, and general reliability. Fix bugs and improve application performance. Continuously discover, evaluate, and implement new technologies to maximize development efficiency. Mentor junior developers and review their code for quality assurance. Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Minimum of 3 to 5 years of experience in Android development. Strong knowledge of Android SDK, different versions of Android, and how to deal with different screen sizes. Proficient in Kotlin and Java. Experience with third-party libraries and APIs. Solid understanding of the full mobile development life cycle. Familiarity with RESTful APIs to connect Android applications to back-end services. Experience with Android UI design principles, patterns, and best practices. Experience with offline storage, threading, and performance tuning. Knowledge of Google’s Android design principles and interface guidelines. Familiarity with cloud message APIs and push notifications. Proficient understanding of code versioning tools, such as Git. Experience with tools like Android Studio, Firebase, and Android Jetpack components. Preferred Skills: Experience with CI/CD tools for Android. Knowledge of modern architectural patterns like MVVM, MVI, or MVP. Familiarity with Jetpack Compose and modern Android UI development. Experience in publishing apps on the Play Store. Job Type: Full-time Pay: ₹15,000.00 - ₹25,000.00 per month
Posted 2 weeks ago
5.0 years
0 Lacs
India
On-site
Job Summary We are looking for a skilled MuleSoft DevOps Engineer with at least 5 years of hands-on experience in managing and automating MuleSoft deployments and environments. The ideal candidate will have a strong DevOps background, expertise in CI/CD pipelines, and experience with Anypoint Platform, Runtime Fabric, and API lifecycle management. Key Responsibilities Design, implement, and maintain CI/CD pipelines for MuleSoft applications using tools like Jenkins, GitLab CI, or Azure DevOps. Manage MuleSoft application deployments across various environments (Dev, QA, UAT, Prod) using Runtime Manager and Runtime Fabric. Automate infrastructure provisioning using tools such as Terraform, Ansible, or CloudFormation. Monitor application performance and availability; implement logging and alerting strategies (Splunk, ELK, CloudWatch, etc.). Collaborate with development and integration teams to ensure seamless deployments and environment readiness. Implement API versioning, promotion, and governance through the MuleSoft Anypoint Platform. Ensure security, compliance, and audit controls are followed across MuleSoft deployments. Maintain and manage source code repositories (e.g., Git, Bitbucket) and branching strategies. Troubleshoot deployment issues and provide root cause analysis. Document DevOps processes, deployment guides, and automation scripts. Skills & Qualifications 5+ years of experience in DevOps with a strong focus on MuleSoft platforms. Solid hands-on experience with MuleSoft Anypoint Platform, Runtime Manager, and/or Runtime Fabric. Strong knowledge of CI/CD tools like Jenkins, GitLab CI, Azure DevOps, or similar. Proficiency with Infrastructure as Code (IaC) using tools like Terraform or Ansible. Experience working with cloud platforms (AWS, Azure, or GCP). Familiarity with API security, policies, and API lifecycle management. Knowledge of Docker and container orchestration (Kubernetes is a plus). Strong scripting skills (Shell, Bash, Python, Groovy). Familiarity with monitoring/logging tools like Splunk, ELK, AppDynamics, or Prometheus.
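A recurring theme in the MuleSoft DevOps role above is wiring deployment verification and troubleshooting into the CI/CD pipeline. The sketch below shows one generic way to do that in Python: a post-deployment smoke test that polls a health endpoint before the pipeline proceeds. The URL, expected payload, and timings are assumptions for illustration and are not Anypoint Platform APIs.

```python
# Hedged sketch: post-deployment smoke test for a CI/CD stage. The endpoint URL and
# expected JSON body are hypothetical; adapt them to the actual application contract.
import sys
import time

import requests

HEALTH_URL = "https://api.example.internal/orders-api/health"   # hypothetical endpoint
MAX_ATTEMPTS = 10
WAIT_SECONDS = 15


def smoke_test() -> bool:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            resp = requests.get(HEALTH_URL, timeout=10)
            if resp.status_code == 200 and resp.json().get("status") == "UP":
                print(f"Attempt {attempt}: application healthy")
                return True
            print(f"Attempt {attempt}: got {resp.status_code}, retrying")
        except requests.RequestException as exc:
            print(f"Attempt {attempt}: request failed ({exc}), retrying")
        time.sleep(WAIT_SECONDS)
    return False


if __name__ == "__main__":
    # A non-zero exit code fails the pipeline stage (Jenkins, GitLab CI, Azure DevOps, etc.).
    sys.exit(0 if smoke_test() else 1)
```

Run as the last stage of a deployment job, a script like this gives the pipeline a clear pass/fail signal and a log trail for root-cause analysis when a rollout goes wrong.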
Posted 2 weeks ago
4.0 years
20 Lacs
Indore
On-site
Position: Sr. .NET Developer Location: Indore, MP Experience: 4 to 10 Years Key skills: ASP.NET, C#, MVC, jQuery, SQL Server, Entity Framework, Web API, .NET Core Technical Skills: ● Agile development methodology experience is a must ● Strong proficiency in MVC ● 4+ years of experience with ASP.NET MVC and above, .NET Framework ● 4+ years of experience developing applications using Entity Framework ● Working experience in integrating Web API services, failure analysis, etc. ● Strong knowledge of application design, development, testing, and implementation methods and techniques ● Database management (MS SQL Server 2012 and above) ● Proficient understanding of code versioning tools (SVN/Git) ● Experience creating reusable structures, algorithms, and implementing design patterns ● Good understanding of Design Patterns & Software Development Life Cycle. Soft Skills ● Good communication, interpersonal and analytical skills ● Team player with strong leadership qualities ● Strong time-management skills ● Critical thinking and problem-solving skills ● Self-starter and self-motivated Job Type: Full-time Pay: Up to ₹2,000,000.00 per year Location Type: In-person Schedule: Fixed shift, Monday to Friday Application Question(s): What is your current CTC? What is your expected CTC? Are you comfortable relocating to Indore? What is your total experience with .NET technology? Work Location: In person
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Objectives of this role Design and develop efficient computer vision applications for the security and surveillance domain. Develop computer vision applications and algorithms for deployment on low-power embedded devices. Collaborate with firmware engineers, front-end engineers, QA engineers, and architects on production systems and applications. Identify differences in data distribution that could potentially affect model performance in real-world applications. Ensure algorithms generate accurate predictions. Stay up to date with developments in the machine learning industry. Maintain data versioning as well as model versioning for the collected data and developed models. Skills and qualifications Extensive math and computer skills, with a deep understanding of probability, statistics, and algorithms. Familiarity with deploying deep learning models on low-power embedded devices. Good knowledge of programming with C and C++ is a must. Proven record of working with AI accelerators, NPUs, and quantization frameworks like OpenVINO or Neural Magic. In-depth knowledge of TF or PyTorch. Familiarity with ArmNN, Kendryte NNcase, Maix Sipeed, or RKNN toolkits. Good knowledge of version control systems like Git and Azure Repos. Familiarity with data structures, data modeling, and software architecture. Impeccable analytical and problem-solving skills. Preferred qualifications Proven experience as a machine learning engineer or in a similar role. Bachelor’s degree (or equivalent) in computer science, mathematics, or a related field. About Us Honeywell helps organizations solve the world's most complex challenges in automation, the future of aviation and energy transition. As a trusted partner, we provide actionable solutions and innovation through our Aerospace Technologies, Building Automation, Energy and Sustainability Solutions, and Industrial Automation business segments – powered by our Honeywell Forge software – that help make the world smarter, safer and more sustainable.
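The Honeywell posting above pairs PyTorch knowledge with quantization for low-power devices. As a hedged illustration of the underlying idea (using PyTorch's built-in dynamic quantization rather than the OpenVINO, Neural Magic, or NPU toolchains named in the ad), the sketch below shrinks a toy model's linear layers to int8 weights and compares serialized sizes.

```python
# Hedged sketch: post-training dynamic quantization of Linear layers with PyTorch.
# The toy model is illustrative; real embedded targets would use vendor toolchains.
import io

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
).eval()

# Quantize weights of Linear layers to int8; activations stay in float (dynamic scheme).
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)


def serialized_size(m: nn.Module) -> int:
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes


x = torch.randn(1, 128)
print("float32 output:", model(x)[0, :2])
print("int8-weight output:", quantized(x)[0, :2])
print(f"size: {serialized_size(model)} -> {serialized_size(quantized)} bytes")
```

The outputs stay close while the weight footprint drops substantially, which is the trade-off that makes quantization attractive on memory- and power-constrained edge hardware.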
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Hi... We are looking for a Data Scientist with Artificial Intelligence & Machine Learning expertise. Work Location: Only Hyderabad Exp Range: 4 to 8 Yrs Design and build intelligent agents (RAG, task agents, decision bots) for use in credit, customer service, or analytics workflows in the finance domain. Deploy and manage AI models in production using AWS AI/ML services (SageMaker, Lambda, Bedrock, etc.). Work with Python and SQL to preprocess, transform, and analyze large volumes of structured and semi-structured data. Collaborate with data scientists, data engineers, and business stakeholders to convert ML prototypes into scalable services. Automate the lifecycle of AI/ML solutions using MLOps practices (model versioning, CI/CD, model monitoring). Leverage vector databases (like Pinecone or OpenSearch) and foundation models to build conversational or retrieval-based solutions. Ensure proper governance, logging, and testing of AI solutions in line with RBI and internal guidelines.
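Since the role above deploys models behind AWS SageMaker and consumes them from services such as Lambda, here is a hedged sketch of invoking a deployed real-time endpoint with boto3; the endpoint name, region, and payload schema are assumptions for illustration.

```python
# Hedged sketch: calling a deployed SageMaker real-time endpoint from Python.
# Endpoint name, region, and feature payload are placeholders, not a real deployment.
import json

import boto3

runtime = boto3.client("sagemaker-runtime", region_name="ap-south-1")

payload = {"features": [0.12, 3.4, 1.0, 250.0]}    # hypothetical credit-risk features

response = runtime.invoke_endpoint(
    EndpointName="credit-risk-scorer",              # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)

result = json.loads(response["Body"].read().decode("utf-8"))
print("model score:", result)
```

The same call shape works inside a Lambda handler, which is the typical way such an endpoint is wired into a production decisioning workflow.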
Posted 2 weeks ago