
4017 Versioning Jobs - Page 33

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

6.0 years

0 Lacs

Mohali district, India

On-site

Job Title: DevOps/MLOps Expert
Location: Mohali (On-Site)
Employment Type: Full-Time
Experience: 6+ years
Qualification: B.Tech CSE

About the Role
We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure and deployment pipelines and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert, you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You'll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements.

Key Responsibilities

MLOps & Model Deployment
• Design, implement, and maintain end-to-end ML pipelines from model development to production deployment
• Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions
• Implement model versioning, experiment tracking, and model registry systems (a minimal MLflow sketch follows this listing)
• Monitor model performance, detect drift, and implement automated retraining pipelines
• Manage feature stores and data pipelines for real-time and batch inference
• Build scalable ML infrastructure for high-volume data processing and analytics

Enterprise Cloud Infrastructure & DevOps
• Architect and manage cloud-native infrastructure with a focus on scalability, security, and compliance
• Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi
• Design and maintain Kubernetes clusters for containerized ML workloads
• Build and optimize Docker containers for ML applications and microservices
• Implement comprehensive monitoring, logging, and alerting systems
• Manage secrets, security, and enterprise compliance requirements

Data Engineering & Real-time Processing
• Build and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools
• Implement real-time data processing and streaming architectures
• Design data storage solutions for structured and unstructured data at scale
• Implement data validation, quality checks, and lineage tracking
• Manage data security, privacy, and enterprise compliance requirements
• Optimize data processing for performance and cost efficiency

Enterprise Platform Operations
• Ensure high availability (99.9%+) and performance of enterprise-grade platforms
• Implement auto-scaling solutions for variable ML workloads
• Manage multi-tenant architecture and data isolation
• Optimize resource utilization and cost management across environments
• Implement disaster recovery and backup strategies
• Build 24x7 monitoring and alerting systems for mission-critical applications

Required Qualifications

Experience & Education
• 4-8 years of experience in DevOps/MLOps with at least 2+ years focused on enterprise ML systems
• Bachelor's/Master's degree in Computer Science, Engineering, or a related technical field
• Proven experience with enterprise-grade platforms or large-scale SaaS applications
• Experience with high-compliance environments and enterprise security requirements
• Strong background in data-intensive applications and real-time processing systems

Technical Skills

Core MLOps Technologies
• ML Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost
• MLOps Tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases
• Model Serving: TensorFlow Serving, PyTorch TorchServe, Seldon Core, KFServing
• Experiment Tracking: MLflow, Neptune.ai, Weights & Biases, Comet

DevOps & Cloud Technologies
• Cloud Platforms: AWS, Azure, or GCP with relevant certifications
• Containerization: Docker, Kubernetes (CKA/CKAD preferred)
• CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI
• IaC: Terraform, CloudFormation, Pulumi, Ansible
• Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic

Programming & Scripting
• Python (advanced) - primary language for ML operations and automation
• Bash/Shell scripting for automation and system administration
• YAML/JSON for configuration management and APIs
• SQL for data operations and analytics
• Basic understanding of Go or Java (advantage)

Data Technologies
• Data Pipeline Tools: Apache Airflow, Prefect, Dagster, Apache NiFi
• Streaming & Real-time: Apache Kafka, Apache Spark, Apache Flink, Redis
• Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse
• Data Warehousing: Snowflake, BigQuery, Redshift, Databricks
• Data Versioning: DVC, LakeFS, Pachyderm

Preferred Qualifications

Advanced Technical Skills
• Enterprise Security: Experience with enterprise security frameworks and compliance (SOC 2, ISO 27001)
• High-scale Processing: Experience with petabyte-scale data processing and real-time analytics
• Performance Optimization: Advanced system optimization, distributed computing, caching strategies
• API Development: REST/GraphQL APIs, microservices architecture, API gateways

Enterprise & Domain Experience
• Previous experience with enterprise clients or B2B SaaS platforms
• Experience with compliance-heavy industries (finance, healthcare, government)
• Understanding of data privacy regulations (GDPR, SOX, HIPAA)
• Experience with multi-tenant enterprise architectures

Leadership & Collaboration
• Experience mentoring junior engineers and technical team leadership
• Strong collaboration with data science teams, product managers, and enterprise clients
• Experience with agile methodologies and enterprise project management
• Understanding of business metrics, SLAs, and enterprise ROI

Growth Opportunities
• Career Path: Clear progression to Lead DevOps Engineer or Head of Infrastructure
• Technical Growth: Work with cutting-edge enterprise AI/ML technologies
• Leadership: Opportunity to build and lead the DevOps/Infrastructure team
• Industry Exposure: Work with government and MNC enterprise clients and cutting-edge technology stacks

Success Metrics & KPIs

Technical KPIs
• System Uptime: Maintain 99.9%+ availability for enterprise clients
• Deployment Frequency: Enable daily deployments with zero downtime
• Performance: Ensure optimal response times and system performance
• Cost Optimization: Achieve 20-30% annual infrastructure cost reduction
• Security: Zero security incidents and full compliance adherence

Business Impact
• Time to Market: Reduce deployment cycles and improve development velocity
• Client Satisfaction: Maintain 95%+ enterprise client satisfaction scores
• Team Productivity: Improve engineering team efficiency by 40%+
• Scalability: Support rapid client base growth without infrastructure constraints

Why Join Us
Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. Competitive compensation, flexible working environment, and continuous learning opportunities.

How to Apply
Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs' success. Send your application to hr@aaizeltech.com, bhavik@aaizeltech.com or anju@aaizeltech.com.
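The posting above leans on MLflow for experiment tracking, model versioning, and a model registry. A minimal sketch of that workflow, assuming a reachable MLflow tracking server and with the experiment name, parameters, and metric chosen purely for illustration, might look like this:

```python
# Minimal MLflow sketch: log an experiment run and register the trained model.
# The tracking URI, experiment name, and model name are assumptions for the example.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

mlflow.set_tracking_uri("http://localhost:5000")   # assumed local tracking server
mlflow.set_experiment("churn-classifier")

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    acc = accuracy_score(y, model.predict(X))
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", acc)
    # Registering under a model name assigns it a registry version, which is
    # what "model versioning / model registry" refers to in the listing.
    mlflow.sklearn.log_model(model, "model", registered_model_name="churn-classifier")
```

Each run logged this way gets a new registry version, which deployment and rollback tooling can then pin to instead of an ad-hoc file path.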

Posted 2 weeks ago

Apply

0.0 - 2.0 years

3 - 8 Lacs

Mohali, Punjab

On-site

The Role
As an AI Engineer, you will be responsible for building and optimizing AI-first solutions that power BotPenguin's conversational and agentic capabilities. You will work on LLM integrations, NLP pipelines, and machine learning models, while collaborating with cross-functional teams to deliver intelligent experiences at scale. This is a high-impact role that combines engineering, research, and deployment skills to solve real-world problems using artificial intelligence.

What you need for this role
• Education: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related discipline.
• Experience: 2-5 years of experience working in AI/ML or related software engineering roles.
• Technical Skills:
• Strong proficiency in Python and libraries such as scikit-learn, PyTorch, TensorFlow, Transformers (Hugging Face).
• Hands-on experience with LLMs (OpenAI, Claude, LLaMA) and building AI agents using API integrations.
• Experience working with NLP tasks (intent classification, text generation, embeddings, summarization).
• Familiarity with vector databases like Pinecone, FAISS, Elastic Vector DB.
• Understanding of prompt engineering, RAG (Retrieval-Augmented Generation), and embedding generation (see the retrieval sketch after this listing).
• Proficiency in building and deploying ML models via Docker/Kubernetes or cloud services like AWS/GCP.
• Experience with version control systems (GitLab/GitHub) and working in Agile teams.
• Soft Skills:
• Strong analytical thinking and problem-solving capabilities.
• Passion for research, innovation, and applying AI to real-world use cases.
• Excellent communication skills and the ability to collaborate across departments.
• Attention to detail with a focus on model accuracy, explainability, and performance.

What you will be doing
• Design, build, and optimize AI-powered chatbot features and virtual agents using state-of-the-art models.
• Collaborate with the Product, Backend, and UI teams to integrate intelligent workflows into the BotPenguin platform.
• Build, evaluate, and fine-tune language models and NLP components tailored to user use cases.
• Implement context-aware chat solutions using embeddings, vector stores, and retrieval mechanisms.
• Create internal tools for prompt testing, versioning, and debugging AI responses.
• Monitor model performance metrics such as latency, hallucination rate, and user satisfaction.
• Explore research papers and open-source innovations, and contribute to rapid experimentation.
• Write clean, modular, and testable code along with clear documentation for future scalability.
• Handle any other development-related tasks as required for BotPenguin.
• Guide junior team members and review their code.

Top reasons to work with us
• Be part of a cutting-edge AI startup driving innovation in chatbot automation.
• Work with a passionate and talented team that values knowledge-sharing and problem-solving.
• Growth-oriented environment with ample learning opportunities.
• Exposure to top-tier global clients and projects with real-world impact.
• Flexible work hours and an emphasis on work-life balance.
• A culture that fosters creativity, ownership, and collaboration.

Job Type: Full-time
Pay: ₹300,000.00 - ₹800,000.00 per year
Benefits: Flexible schedule, Health insurance, Provident Fund
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required)
Experience: AI: 2 years (Required)
Work Location: In person
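Since the listing above asks for familiarity with embeddings, FAISS, and Retrieval-Augmented Generation, here is a minimal, illustrative retrieval sketch. The embed() helper is a stand-in for whichever embedding model or API the team actually uses, and the documents, dimensions, and prompt wording are made up for the example:

```python
# Minimal RAG retrieval sketch (illustrative only): embed documents, index them
# in FAISS, retrieve the closest chunk for a user question, and build a prompt.
import numpy as np
import faiss

def embed(texts):
    # Stand-in embedding: replace with a real model (OpenAI, Hugging Face, etc.).
    rng = np.random.default_rng(0)
    return rng.random((len(texts), 384), dtype=np.float32)

docs = ["Refunds are processed within 5 days.", "Support is available 24x7."]
index = faiss.IndexFlatL2(384)          # exact L2 search over 384-dim vectors
index.add(embed(docs))

question = "How long do refunds take?"
_, ids = index.search(embed([question]), 1)   # retrieve the single closest document
context = docs[ids[0][0]]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# `prompt` would then be sent to the LLM (OpenAI, Claude, LLaMA) named in the posting.
```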

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site

Senior Software Developer – Angular
Responsible for developing and maintaining the front end of web applications using Angular, working with back-end developers to create user interfaces that are both functional and visually appealing.

Experience
• 4+ years

Responsibilities
• Expertise in frontend technologies including Angular 6+.
• Skilled at developing web-based applications using these technologies, ensuring high performance and responsiveness to requests from the front end.
• Practitioner-level exposure to both frontend and backend technologies, with expertise in Angular.
• Good understanding of front-end technologies such as JavaScript, HTML5, and CSS3.
• Good understanding of relational databases and Structured Query Language (SQL), using any of the available RDBMS.
• Basic knowledge of TypeScript.
• Experience with a cloud platform (AWS) is a plus.
• Exposure to code versioning tools such as SVN or Git.
• Ability to identify design patterns and perform independent code reviews.
• Familiarity with database technologies such as MySQL, Oracle, and MongoDB.
• Ability to write cross-browser compatible code.
• Proficiency with JavaScript build tools like Grunt and Gulp.
• Advanced knowledge of unit testing best practices and Continuous Integration processes (CI/CD).
• Innovative thinking and an appetite for system architecture.
• Strong skills in object-oriented development, analysis, and design, and fair exposure to design patterns, are preferred.

Qualifications
• Bachelor's/Master's degree in computer science or information technology.
• At least 4 years of relevant experience working with Angular on the frontend.
• Good communication and presentation skills.
• Ability to work collaboratively in a fast-paced, agile environment.
• Ability to work independently and as part of a team.

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Minimum qualifications:
• Bachelor's degree in Computer Science or a related technical field, or equivalent practical experience.
• 8 years of experience in Technical Program Management.
• Experience with machine learning and generative artificial intelligence or large language models.

Preferred qualifications:
• Ability to work with technical engineering teams across multiple organizations.
• Ability to think, execute, communicate and influence at multiple levels, including executive leadership, managers, engineers and geographically dispersed stakeholders, to develop relationships and drive effective execution.
• Ability to learn new technologies.

About The Job
A problem isn't truly solved until it's solved for all. That's why Googlers build products that help create opportunities for everyone, whether down the street or across the globe. As a Technical Program Manager at Google, you'll use your technical expertise to lead complex, multi-disciplinary projects from start to finish. You'll work with stakeholders to plan requirements, identify risks, manage project schedules, and communicate clearly with cross-functional partners across the company. You're equally comfortable explaining your team's analyses and recommendations to executives as you are discussing the technical tradeoffs in product development with engineers.

In this role, you will work on the AI Answers feature of Google Search. You will work with teams of engineers and product managers to deliver on a roadmap aimed at revolutionizing the Google Search experience with the latest GenAI model capabilities. As a Technical Program Manager, your responsibilities will include managing multiple AI projects undertaken by the engineers. You will identify risks, resolve roadblocks, and enhance the team's development velocity.

In Google Search, we're reimagining what it means to search for information – any way and anywhere. To do that, we need to solve complex engineering challenges and expand our infrastructure, while maintaining a universally accessible and useful experience that people around the world rely on. In joining the Search team, you'll have an opportunity to make an impact on billions of people globally.

Responsibilities
• Define, scope, and manage the AI projects and ensure timely delivery and adherence to scope.
• Build detailed project plans, track progress, manage risks and implement mitigation strategies.
• Collaborate with other teams that work on AI projects, ensuring synergy through coordinated timelines, dependencies, and deliverables.
• Identify and resolve blockers and escalate issues when necessary.
• Communicate project status and updates to stakeholders regularly.
• Facilitate meetings and reviews to drive decision-making and alignment.
• Introduce standard procedures for reproducibility, data/model versioning, experiment tracking, and agile methodology in the AI space.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

Budaun Sadar, Uttar Pradesh, India

On-site

Job Title: Senior .NET Developer
Experience: 5 to 7 years
Location: Gurgaon

Mandatory Skills & Experience
• .NET Framework & .NET 8: Strong hands-on experience working with both .NET Framework 4.8 and the latest .NET 8. Capable of developing, maintaining, and migrating applications across both versions.
• API Development: Proven expertise in creating RESTful APIs using C# and .NET Core/.NET 8. Knowledge of secure API design principles, versioning, and performance optimization.
• Angular Front-End Integration: Experience integrating Angular-based front-end applications with .NET APIs. Should be comfortable with consuming APIs in Angular, handling data binding, and managing components.
• SQL Server: Proficient in SQL Server, including writing complex queries, stored procedures, indexing, and performance tuning. Should understand database design and relational concepts.
• Desktop Application Development: Some experience with .NET Windows Forms or WPF desktop applications is required. Familiarity with legacy desktop app migration or enhancement is a plus.
• Application Modernization / Migration: Hands-on experience in modernizing legacy applications, including migrating older .NET apps to .NET Core/.NET 8. Exposure to containerization (Docker) or cloud readiness for legacy apps is desirable.

Nice To Have
• Logistics and Shipping Domain Knowledge: Previous experience working on projects in logistics, supply chain, or shipment tracking systems. Understanding of domain-specific concepts like order management, tracking, and freight billing is a strong advantage.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Walk-In For Java Developers At Zuci Systems - 19th Jul 2025 (Saturday)!
"We are hiring - You are wanted!"
If your profile matches the job description below, please walk in directly to the venue.

Walk-In Date: 19-Jul-25 (Saturday)
Time: 9:00 AM to 3:00 PM
Venue: Zuci Systems, ASV Chandilya Towers, 7th Floor, 5, 397, Old Mahabalipuram Rd, Nehru Nagar, Thoraipakkam, Tamil Nadu 600096.
Experience: 4 to 8 years
Notice Period: Immediate joiners only

Role & Responsibilities
• 4-8 years in developing Java and Spring Boot applications.
• Solid understanding of object-oriented programming.
• Experience in various design and architectural patterns.
• Strong experience in the following technologies: Angular 17, RESTful web services, relational databases, XML/XSLT/XSL-FO/XPath/XQuery, JavaScript, TypeScript, JUnit.
• Experience in developing and scaling software using AWS services.
• Experience in code versioning tools and continuous integration.
• Good understanding of the Agile process.
• Good oral and written communication in English.

Mandatory: Kindly acknowledge your acceptance by sending a mail to shebuel.s@zucisystems.com
Documents to carry: hard copy of your resume and 2 passport-size photographs. Mention "Recruiter Shebuel" on the top right corner of your resume.

Regards,
Shebuel Samuel Savel
Team TA | Zuci Systems

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

DETAILED RESPONSIBILITIES/DUTIES:
• Develop interfaces and web services using various integration technologies like Oracle SOA/OSB.
• Develop on-cloud and on-premise solutions.
• Develop, support, maintain, and implement complex business solutions with fault-tolerant integration solutions.
• Deliver solutions using an API-led approach.
• Design, develop, and manage APIs using an API gateway/portal.
• Create microservices using technologies like Spring Boot.
• Develop different flavors of services such as web services, REST, messaging services, and event-based services.
• Build services with the highest standards of security.
• Develop SQL/PL/SQL code where necessary.
• Provide solutions using core Java/Spring.
• Develop integrations between different databases (Oracle, SQL Server, NoSQL) and file servers.
• Provide experience and strategy in building integrations for business continuity.
• Review existing integrations and propose a roadmap to roll out to the latest software versions.
• Define architecture solutions based on business needs with future reusability.
• Build and develop secured B2B integration solutions with third-party vendors.
• Develop API documentation with RAML and Swagger, and deploy services on both on-premise and cloud content servers.
• Refine integration build processes from inception to production.
• Provide experience on migration strategies for hot deployments of code components from Dev to higher environments.
• Fine-tune solutions/services for optimum performance.
• Work with scripts to monitor various business-critical components at runtime.

SUPERVISORY RESPONSIBILITIES: None

REQUIRED QUALIFICATIONS:

Skills:
• Must have hands-on experience with production deployment and post-production support.
• Must have strong experience with various middleware integration technologies, adapters, and queueing.
• Working experience with microservices.
• Knowledge of various integration concepts including Business-to-Business (B2B), platform-to-platform, and EDI trading partner integration development.
• Good exposure to GitHub, Subversion, or other versioning tools.
• Must have experience in Java and Spring.
• Experience with a large ESB implementation on any platform would be an added advantage.
• Good understanding of data formats like XML, JSON, EDI, CSV, NVP.
• Good understanding of integration technologies like HTTP, XML/XSLT, JMS, JDBC, REST, SOAP, web services and APIs.
• Must have strong knowledge of various middleware integration strategies.
• Strong analytical and problem-solving skills with excellent verbal and oral communication are mandatory.
• Strong organizational skills with the ability to multi-task, prioritize and execute on assigned deliverables.
• Ability to work effectively with minimal supervision and guidance.
• Good exposure to web service/API security.
• In-depth knowledge of application code registration procedures.
• Good working knowledge of Unix/Linux shell scripting.
• Ability to identify system impact for small- and large-scale initiatives.
• Ability to interact effectively at all levels with sensitivity to cultural diversity.

Experience:
• 5+ years of experience in the IT/technology industry.
• 4+ years of experience in web services/interfaces development, design and architecture.
• 3+ years of experience with databases.
• 3+ years of experience in Java/J2EE development.
• 3 years of B2B development experience.
• 3+ years of experience with middleware code migrations.
• Experience with change management tools and processes, including source code control, versioning, branching, defect tracking and release management.

Posted 2 weeks ago

Apply

0.0 - 1.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Title: Senior Backend Developer (Python, FastAPI & MongoDB)
Location: Bengaluru, India

Company Overview:
IAI Solution Pvt Ltd (www.iaisolution.com) operates at the edge of applied AI, where foundational research meets real-world deployment. We craft intelligent systems that think in teams, adapt with context, and deliver actionable insight across domains. We are seeking a Senior Backend Developer who thrives in fast-paced environments, enjoys solving complex technical challenges, and is passionate about building scalable, high-performance backend systems that power real-world AI applications.

Position Summary:
We are looking for a Senior Backend Developer with 3 to 5 years of professional experience in Python-based development, especially using FastAPI and MongoDB. The ideal candidate is skilled in building and maintaining scalable, high-performance back-end services and APIs, has a strong understanding of modern database design (SQL & NoSQL), and has experience integrating backend services with cloud platforms. Experience or interest in AI/ML projects is a strong plus, as our products often interface with LLMs and real-time AI pipelines.

Key Responsibilities:
• Design, build, and maintain robust backend services using Python and FastAPI.
• Develop and maintain scalable RESTful APIs for internal tools and third-party integrations.
• Work with MongoDB, PostgreSQL, and Redis to manage structured and unstructured data efficiently.
• Collaborate with frontend, DevOps, and AI/ML teams to deliver secure and performant backend infrastructure.
• Implement best practices in code architecture, performance optimization, logging, and monitoring.
• Ensure APIs and systems are production-ready, fault-tolerant, and scalable.
• Handle API versioning, documentation (Swagger/OpenAPI), and error management (see the FastAPI sketch after this listing).
• Optimize queries, indexes, and DB schema for high-performance data access.
• Maintain clean code with an emphasis on object-oriented principles and modular design.
• Troubleshoot production issues and deliver timely fixes and improvements.

Qualifications:
• Overall Experience: 3 to 5 years in backend software development.
• Python: Strong proficiency with object-oriented programming.
• Frameworks: Hands-on experience with FastAPI (preferred) and Django.
• Databases: MongoDB experience with schema design, aggregation pipelines, and indexing; familiarity with SQL databases (PostgreSQL/MySQL); experience with Redis and optionally Supabase.
• API Development: Proficient in building and documenting REST APIs; strong understanding of HTTP, request lifecycles, and API security.
• Testing & Debugging: Strong debugging and troubleshooting skills using logs and tools.
• Performance & Scalability: Experience optimizing backend systems for latency, throughput, and reliability.
• Tools: Git, Docker, Linux commands for development environments.

Must-Have Skills:
• Proficiency in Python and object-oriented programming.
• Strong hands-on experience with FastAPI (or similar async frameworks).
• Knowledge of MongoDB for schema-less data storage and complex queries.
• Experience building and managing REST APIs in production.
• Comfortable working with Redis, PostgreSQL, or other data stores.
• Experience with Dockerized environments and Git workflows.
• Solid grasp of backend architecture, asynchronous programming, and performance tuning.
• Ability to write clean, testable, and maintainable code.

Good-to-Have Skills:
• Experience with asynchronous programming using async/await.
• Integration with third-party APIs (e.g., Firebase, GCP, Azure services).
• Basic understanding of WebSocket and real-time backend patterns.
• Exposure to AI/ML pipelines, model APIs, or vector DBs (e.g., FAISS).
• Basic DevOps exposure: GitHub Actions, Docker Compose, Nginx.
• Familiarity with JWT, OAuth2, and backend security practices.
• Familiarity with CI/CD pipelines and versioning.
• Basic understanding of GraphQL or gRPC is a plus.

Preferred Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• Demonstrated experience delivering production-grade backend services.
• Experience working in agile teams and using tools like Jira.
• Familiarity with Agile/Scrum methodologies and sprint cycles.
• Interest or experience in AI/ML-integrated systems is a plus.

Perks & Benefits:
• Competitive salary with performance-based bonuses.
• Opportunity to work on AI-integrated platforms and intelligent products.
• Access to the latest tools, cloud platforms, and learning resources.
• Support for certifications and tech conferences.
• Flexible working hours and hybrid work options.
• Wellness initiatives and team-building activities.

Job Type: Full-time
Pay: Up to ₹1,500,000.00 per year
Benefits: Health insurance, Paid sick time, Provident Fund
Schedule: Fixed shift
Ability to commute/relocate: Bangalore City, Karnataka: Reliably commute or planning to relocate before starting work (Required)
Experience: Python: 1 year (Required); FastAPI: 1 year (Required)
Location: Bangalore City, Karnataka (Required)
Work Location: In person
Speak with the employer: +91 9003562294
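For the API versioning and OpenAPI documentation responsibilities above, a minimal FastAPI sketch of one common pattern (versioned routers mounted side by side) is shown below. The paths, model fields, and sample data are illustrative assumptions, not details from the posting:

```python
# Minimal FastAPI versioning sketch: /v1 and /v2 routers coexist so older
# clients keep working while /v2 introduces a breaking field rename.
from fastapi import APIRouter, FastAPI
from pydantic import BaseModel

class UserV1(BaseModel):
    id: int
    name: str

class UserV2(BaseModel):
    id: int
    full_name: str            # breaking change isolated behind /v2

v1 = APIRouter(prefix="/v1")
v2 = APIRouter(prefix="/v2")

@v1.get("/users/{user_id}", response_model=UserV1)
async def get_user_v1(user_id: int):
    return UserV1(id=user_id, name="Asha")

@v2.get("/users/{user_id}", response_model=UserV2)
async def get_user_v2(user_id: int):
    return UserV2(id=user_id, full_name="Asha Rao")

app = FastAPI(title="Example API")    # interactive Swagger/OpenAPI docs served at /docs
app.include_router(v1)
app.include_router(v2)
```

Run with `uvicorn main:app --reload`; FastAPI then documents both versions automatically in the generated OpenAPI schema.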

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site

PHP Laravel Developer (3 to 5 Years Experience)
Location: Phase 8B, Mohali
Work Mode: Work from Office
Experience: 3 to 5 years
Salary: As per industry standards

Job Summary:
We are looking for a highly skilled PHP Laravel Developer with strong expertise in both frontend and backend development. You will be responsible for developing and maintaining web applications, working closely with the design and project teams to deliver high-quality solutions.

Key Responsibilities:
• Develop, test, and maintain scalable web applications using the Laravel framework
• Build RESTful APIs and integrate third-party services
• Manage frontend development using HTML, CSS, JavaScript, jQuery, and frameworks like Vue.js or React (preferred)
• Optimize applications for speed, scalability, and security
• Collaborate with UI/UX designers, project managers, and QA teams
• Write clean, reusable, and well-documented code
• Debug and troubleshoot issues across the full stack
• Maintain code versioning using Git

Key Skills & Requirements:
• Strong experience in Laravel, PHP, and MySQL
• Proficient in HTML5, CSS3, JavaScript, Bootstrap
• Familiarity with frontend frameworks (Vue.js, React, or similar)
• Experience with API integration (REST, JSON, XML)
• Good understanding of OOP, MVC architecture, and design patterns
• Experience with version control systems like Git
• Knowledge of deployment on shared/VPS servers
• Excellent problem-solving skills and the ability to work independently

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Senior Test Automation Lead – Playwright (AI/ML Focus)
Location: Hyderabad
Job Type: Full-Time
Experience Required: 8+ years in Software QA/Testing, 3+ years in Test Automation using Playwright, 2+ years in AI/ML project environments

About the Role:
We are seeking a passionate and technically skilled Senior Test Automation Lead with deep experience in Playwright-based frameworks and a solid understanding of AI/ML-driven applications. In this role, you will lead the automation strategy and quality engineering practices for next-generation AI products that integrate large-scale machine learning models, data pipelines, and dynamic, intelligent UIs. You will define, architect, and implement scalable automation solutions across AI-enhanced features such as recommendation engines, conversational UIs, real-time analytics, and predictive workflows, ensuring both functional correctness and intelligent behavior consistency.

Key Responsibilities:

Test Automation Framework Design & Implementation
• Design and implement robust, modular, and extensible Playwright automation frameworks using TypeScript/JavaScript.
• Define automation design patterns and utilities that can handle complex AI-driven UI behaviors (e.g., dynamic content, personalization, chat interfaces).
• Implement abstraction layers for easy test data handling, reusable components, and multi-browser/platform execution.

AI/ML-Specific Testing Strategy
• Partner with data scientists and ML engineers to understand model behaviors, inference workflows, and output formats.
• Develop strategies for testing non-deterministic model outputs (e.g., chat responses, classification labels) using tolerance ranges, confidence intervals, or golden datasets (see the sketch after this listing).
• Design tests to validate ML integration points: REST/gRPC API calls, feature flags, model versioning, and output accuracy.
• Include bias, fairness, and edge-case validations in test suites where applicable (e.g., fairness in recommendation engines or NLP sentiment analysis).

End-to-End Test Coverage
• Lead the implementation of end-to-end automation for:
  o Web interfaces (React, Angular, or other SPA frameworks)
  o Backend services (REST, GraphQL, WebSockets)
  o ML model integration endpoints (real-time inference APIs, batch pipelines)
• Build test utilities for mocking, stubbing, and simulating AI inputs and datasets.

CI/CD & Tooling Integration
• Integrate automation suites into CI/CD pipelines using GitHub Actions, Jenkins, GitLab CI, or similar.
• Configure parallel execution, containerized test environments (e.g., Docker), and test artifact management.
• Establish real-time dashboards and historical reporting using tools like Allure, ReportPortal, TestRail, or custom Grafana integrations.

Quality Engineering & Leadership
• Define KPIs and QA metrics for AI/ML product quality: functional accuracy, model regression rates, test coverage %, time-to-feedback, etc.
• Lead and mentor a team of automation and QA engineers across multiple projects.
• Act as the quality champion across the AI platform by influencing engineering, product, and data science teams on quality ownership and testing best practices.

Agile & Cross-Functional Collaboration
• Work in Agile/Scrum teams; participate in backlog grooming, sprint planning, and retrospectives.
• Collaborate across disciplines: Frontend, Backend, DevOps, MLOps, and Product Management to ensure complete testability.
• Review feature specs, AI/ML model update notes, and data schemas for impact analysis.

Required Skills and Qualifications:

Technical Skills:
• Strong hands-on expertise with Playwright (TypeScript/JavaScript).
• Experience building custom automation frameworks and utilities from scratch.
• Proficiency in testing AI/ML-integrated applications: inference endpoints, personalization engines, chatbots, or predictive dashboards.
• Solid knowledge of HTTP protocols and API testing (Postman, Supertest, RestAssured).
• Familiarity with MLOps and model lifecycle management (e.g., via MLflow, SageMaker, Vertex AI).
• Experience in testing data pipelines (ETL, streaming, batch), synthetic data generation, and test data versioning.

Domain Knowledge:
• Exposure to NLP, CV, recommendation engines, time-series forecasting, or tabular ML models.
• Understanding of key ML metrics (precision, recall, F1-score, AUC), model drift, and concept drift.
• Knowledge of bias/fairness auditing, especially in UI/UX contexts where AI decisions are shown to users.

Leadership & Communication:
• Proven experience leading QA/automation teams (4+ engineers).
• Strong documentation, code review, and stakeholder communication skills.
• Experience collaborating in Agile/SAFe environments with cross-functional teams.

Preferred Qualifications:
• Experience with AI explainability frameworks like LIME, SHAP, or the What-If Tool.
• Familiarity with test data management platforms (e.g., Tonic.ai, Delphix) for ML training/inference data.
• Background in performance and load testing for AI systems using tools like Locust, JMeter, or k6.
• Experience with GraphQL, Kafka, or event-driven architecture testing.
• QA certifications (ISTQB, Certified Selenium Engineer) or cloud certifications (AWS, GCP, Azure).

Education:
• Bachelor's or Master's degree in Computer Science, Software Engineering, or a related technical discipline.
• Bonus for certifications or formal training in Machine Learning, Data Science, or MLOps.

Why Join Us?
• Work on cutting-edge AI platforms shaping the future of [industry/domain].
• Collaborate with world-class AI researchers and engineers.
• Drive the quality of products used by [millions of users / high-impact clients].
• Opportunity to define test automation practices for AI, one of the most exciting frontiers in tech.
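To illustrate the "golden dataset with tolerance" strategy for non-deterministic model outputs mentioned in the responsibilities above, here is a small pytest-style sketch. The get_bot_reply() function and the token-overlap similarity are stand-ins for the real chatbot endpoint and an embedding-based similarity score, and the 0.4 threshold is an arbitrary illustration that a real suite would tune per use case:

```python
# Golden-dataset tolerance test sketch: compare the model's reply to an approved
# golden answer and fail only when similarity drops below a threshold, instead of
# asserting exact (non-deterministic) output.
import pytest

GOLDEN = [
    ("How do I reset my password?", "Use the 'Forgot password' link on the login page."),
]

def get_bot_reply(question: str) -> str:
    # Hypothetical stand-in for the chatbot/LLM endpoint under test.
    return "Click 'Forgot password' on the login page to reset it."

def similarity(a: str, b: str) -> float:
    # Token-overlap (Jaccard) stand-in for an embedding-based similarity score.
    tokens = lambda s: {w.strip(".,!?'\"") for w in s.lower().split()}
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

@pytest.mark.parametrize("question,golden_answer", GOLDEN)
def test_reply_stays_close_to_golden(question, golden_answer):
    reply = get_bot_reply(question)
    # Tolerance threshold rather than exact match, because the output is non-deterministic.
    assert similarity(reply, golden_answer) >= 0.4
```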

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Data Engineer
Experience: 4-9 years
Location: Noida, Chennai & Pune
Skills: Python, PySpark, Snowflake & Redshift

Key Responsibilities

Migration & Modernization
• Lead the migration of data pipelines, models, and workloads from Redshift to Snowflake/Yellowbrick.
• Design and implement landing, staging, and curated data zones to support scalable ingestion and consumption patterns.
• Evaluate and recommend tools and frameworks for migration, including file formats, ingestion tools, and orchestration.
• Design and build robust ETL/ELT pipelines using Python, PySpark, SQL, and orchestration tools (e.g., Airflow, dbt).
• Support both batch and streaming pipelines, with real-time processing via Kafka, Snowpipe, or Spark Structured Streaming.
• Build modular, reusable, and testable pipeline components that handle high volume and ensure data integrity.
• Define and implement data modeling strategies (star, snowflake, normalization/denormalization) for analytics and BI layers.
• Implement strategies for data versioning, late-arriving data, and slowly changing dimensions.
• Implement automated data validation and anomaly detection (using tools like dbt tests, Great Expectations, or custom checks; see the sketch after this listing).
• Build logging and alerting into pipelines to monitor SLA adherence, data freshness, and pipeline health.
• Contribute to data governance initiatives including metadata tracking, data lineage, and access control.

Required Skills & Experience
• 10+ years in data engineering roles with increasing responsibility.
• Proven experience leading data migration or re-platforming projects.
• Strong command of Python, SQL, and PySpark for data pipeline development.
• Hands-on experience with modern data platforms like Snowflake, Redshift, Yellowbrick, or BigQuery.
• Proficient in building streaming pipelines with tools like Kafka, Flink, or Snowpipe.
• Deep understanding of data modeling, partitioning, indexing, and query optimization.
• Expertise with ETL orchestration tools (e.g., Apache Airflow, Prefect, Dagster, or dbt).
• Comfortable working with large datasets and solving performance bottlenecks.
• Experience in designing data validation frameworks and implementing DQ rules.
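As an illustration of the "custom checks" flavor of automated data validation listed above, a minimal PySpark sketch might look like the following. The table path and column names are assumptions; in practice the same rules are often expressed as dbt tests or Great Expectations suites instead:

```python
# Minimal data-quality check sketch: validate a curated table before publishing it
# downstream, and fail the pipeline task (e.g., an Airflow task) on rule violations.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
orders = spark.read.parquet("s3://curated/orders/")   # assumed path and schema

total = orders.count()
null_keys = orders.filter(F.col("order_id").isNull()).count()
dup_keys = total - orders.select("order_id").distinct().count()
latest = orders.agg(F.max("updated_at").alias("latest")).first()["latest"]

failures = []
if null_keys:
    failures.append(f"{null_keys} rows with NULL order_id")
if dup_keys:
    failures.append(f"{dup_keys} duplicate order_id values")

if failures:
    # Raising here stops the orchestrated run instead of silently publishing bad data.
    raise ValueError("Data quality checks failed: " + "; ".join(failures))
print(f"orders: {total} rows, latest updated_at = {latest}")
```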

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Company:
They balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. We guide customers from what's now to what's next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.

About Client:
Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations. Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering, reflecting its strategic commitment to driving innovation and value for clients across industries.

Job Title: Senior Software Engineer (AWS/Java)
Location: Hyderabad, Pune
Experience: 5+ years
Job Type: Contract to hire
Notice Period: Immediate joiners

Detailed JD: Senior Software Engineer (AWS/Java)
We are looking for a talented, experienced senior software engineer with expertise in AWS cloud services, TypeScript and Java development for our engineering team.

Responsibilities:
• Implement cloud applications using AWS services, TypeScript and Java.
• Write clean, maintainable and efficient code while adhering to best practices and coding standards.
• Work closely with product managers and engineers to define and refine requirements.
• Provide technical guidance and mentorship to junior engineers on the team.
• Troubleshoot and resolve complex technical issues and performance bottlenecks.
• Create and maintain technical documentation for code and processes.
• Stay up to date with industry trends and emerging technologies to continuously improve our development practices.

Mandatory Skills:
• 5+ years of software development experience with a focus on AWS cloud development and distributed application development with Java & J2EE.
• 1+ years of experience in AWS development using TypeScript, or willingness to learn TypeScript, because per Principal standards TypeScript is the preferred language for AWS development.
• Hands-on experience deploying applications on AWS cloud infrastructure (e.g., EC2, Lambda, S3, DynamoDB, RDS, API Gateway, EventBridge, SQS, SNS, Fargate, etc.).
• Strong hands-on experience in Java/J2EE, Spring and Spring Boot development, and a good understanding of serverless computing.
• Experience with REST APIs and Java shared libraries.

Good to have:
• AWS Cloud Practitioner, AWS Certified Developer or AWS Certified Solutions Architect is a plus.

Requirements:
• Strong knowledge of Java development and versioning tools like IntelliJ, Git and Maven.
• Installation, configuration and integration of tools for creating the required development environment.
• Experience handling install failures, installing updates and supporting local issues is a plus.
• Understanding of application server technology.
• Strong analytical and problem-solving skills with keen attention to detail.
• Excellent verbal and written communication skills, with the ability to articulate complex technical concepts to various audiences.
• Experience working in agile development environments and familiarity with CI/CD pipelines.
• Consistently raises the bar by going beyond day-to-day performance expectations.

Qualifications:
• Bachelor's degree in engineering or a related field.

Seniority Level: Mid-Senior level
Industry: IT Services and IT Consulting
Employment Type: Contract
Job Functions: Business Development
Skills: Amazon Web Services (AWS), Git, Java, Attention to Detail, TypeScript, Written Communication, Software Development, Jakarta EE, Object-Oriented Programming (OOP), Back-End Web Development

Posted 2 weeks ago

Apply

0.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka

On-site

3 months ago · TESCRA India

DESCRIPTION
Attached above is the detailed JD for your reference:

What you'll do
• System integration of heterogeneous data sources, working on technologies used in the design, development, testing, deployment, and operations of DW & BI solutions
• Create and maintain documentation, architecture designs and data flow diagrams
• Help deliver scalable solutions on the MSBI platforms and Hadoop
• Implement source code versioning, standard methodology tools and processes for ensuring data quality
• Collaborate with business professionals, application developers and technical staff working in an agile process environment
• Assist in activities such as source system analysis, creating and documenting high-level business model design, UAT, project management, etc.

Skills: What you need to succeed
• 3+ years of relevant work experience in SSIS, SSAS, DW, data analysis and business intelligence
• Must have expert knowledge of data warehousing tools – SSIS, SSAS, DB
• Must have expert knowledge of T-SQL, stored procedures and database performance tuning
• Strong in data warehousing, business intelligence and dimensional modelling concepts, with experience in designing, developing and maintaining ETL, database and OLAP schema and public objects (attributes, facts, metrics etc.)
• Good to have experience in developing reports and dashboards using BI reporting tools like Tableau, SSRS, Power BI etc.
• Fast learner, analytical, with the skill to understand multiple businesses and their performance indicators
• Bachelor's degree in Computer Science or equivalent
• Superb communication and presentation skills

QUALIFICATIONS
Must-Have Skills: BI, DWH, Data Warehousing, SSIS, SQL
Minimum Education Level: Bachelors or Equivalent
Years of Experience: 3-10 years

ADDITIONAL INFORMATION
Work Type: FullTime
Location: Bangalore, Karnataka, India
Job ID: Tescra-Ado-889DB6

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

IT | Full-Time | Job ID: DGC00927
Chennai, Tamil Nadu | 5-7 Yrs | ₹10 - ₹12 Yearly

Skills
Core Java, Spring MVC, Spring Boot, Hibernate, JPA, Apache Tomcat, SQL, Eclipse, GIT/SVN, Maven

Roles and Responsibilities
• Should have strong experience in Core Java, Spring MVC, Spring Boot
• Should have a good understanding of OOP design techniques
• Expert knowledge of build tools and dependency management (Maven)
• Experience building web services (REST/SOAP XML)
• Experience with unit and automation testing (JUnit)
• Good understanding of SQL, relational databases and NoSQL databases
• Familiarity with design patterns; should be able to design small- to medium-complexity modules independently
• Experience with Agile or similar development methodologies
• Experience with a versioning system (e.g., CVS/SVN/Git)
• Strong verbal communication, cross-group collaboration skills, and analytical, structured and strategic thinking
• Good at understanding business requirements and user stories
• Good at designing software that best suits business requirements
• Good knowledge of Java technologies
• Quick learner of new technologies

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About the Job

Company Description: www.nuagebiz.tech
At Nuage we develop customized digital solutions for various business needs. Our mission is to help our clients achieve their goals by using the most suitable and innovative technologies for their specific challenges. We are committed to delivering high-quality services and products that meet or exceed our clients' expectations. We are a mix of young and not-so-young techies and innovators specializing as designers, analysts, architects and coders, working to achieve growth by delivering what our customers need.

Job Description:
We are looking for a Senior Python Developer with 6+ years of experience, including hands-on development using Python for at least 4 years. This is a high-impact role for a self-driven professional with an analytical mind, a passion for problem-solving, and strong coding skills. You will be a key part of our analytical product development team, writing high-performance application code and collaborating with frontend engineers and product stakeholders. You must bring a detail-oriented mindset, a collaborative spirit, and an appetite for growth and innovation.

Key Responsibilities:
• Write reusable, testable, and efficient Python code
• Design and implement low-latency, high-availability, and high-performance applications
• Integrate front-end elements into backend logic
• Ensure application security and data protection standards
• Work with multiple data sources and databases in a unified system
• Collaborate with cross-functional teams to ship new features and functionality
• Contribute to code reviews, technical discussions, and knowledge sharing

Qualifications:
• Bachelor's degree in Computer Science, Software Engineering, or a related field
• 6+ years of experience in IT, with a minimum of 4 years working with Python
• Strong grasp of the Flask and FastAPI frameworks
• Experience with event-driven programming in Python
• Knowledge of user authentication and authorization across multiple systems
• Understanding of scalable architecture and design principles
• Familiarity with code versioning tools like Git, Mercurial, or SVN
• Ability to optimize outputs for different delivery platforms (desktop, mobile)
• Excellent time management, communication, and multitasking skills
• A collaborative mindset and strong interpersonal skills

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities
• Design and build advanced applications for the iOS platform.
• Collaborate with cross-functional teams to define, design, and ship new features.
• Unit-test code for robustness, including edge cases, usability, and general reliability.
• Work on bug fixing and improving application performance.
• Continuously discover, evaluate, and implement new technologies to maximize development efficiency.

Requirements:
• Proven experience as an iOS Developer with a strong portfolio of iOS applications.
• In-depth knowledge of the Swift programming language and the iOS SDK.
• In-depth knowledge of SwiftUI is a must.
• Strong understanding of architecture patterns (MVC, MVVM, etc.) and best practices.
• Experience with performance and memory tuning tools.
• Familiarity with RESTful APIs to connect iOS applications to back-end services.
• Solid understanding of the full mobile development life cycle.
• Proficient understanding of code versioning tools such as Git.
• Strong problem-solving skills and a knack for writing clean, readable, and maintainable code.
• Up-to-date with the latest industry trends and technologies.

Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 8+ years of professional experience in iOS application development.
• Proven track record of delivering high-quality, scalable, and innovative mobile applications.

(ref:hirist.tech)

Posted 2 weeks ago

Apply

0 years

0 Lacs

Delhi Cantonment, Delhi, India

On-site

Job Description of Flutter Developer

Roles and Responsibilities:
• Work with product owners and engineering managers to understand the product roadmap.
• Contribute to designing technical specification artefacts, documentation and diagrams (HLD, LLD, TRD), and accordingly provide technical and functional recommendations.
• Design and build advanced mobile features and custom UI.
• Hands-on coding: code the hairiest, most complicated paths/components.
• Craft APIs, RPCs and streaming topologies which are simple and efficient.
• Ensure responsiveness of applications.
• Work alongside graphic designers for web design features.
• Collaborate with cross-functional teams to define, design, and ship new features.
• Optimize the app for cross-platform speed, memory, network and battery.
• Keep the app stable and secure at all times.
• Continuously discover, evaluate, and implement new technologies and processes to maximize development efficiency.
• Comprehensively test code for robustness, including edge cases, usability, and general reliability.
• Follow good coding practices, agile engineering processes, the DevSecOps/SRE toolchain, and comply with existing quality standards.
• Perform system failure analysis and provide corrective actions.
• Recommend new technologies to improve system performance and reliability.
• Ensure feature KPIs/metrics and release objectives are met by delivering high-quality products.

Skills
• Experience with Flutter and Dart; should have developed at least 1 application from scratch.
• Good understanding of at least one programming language like Java, Kotlin, C#, Swift.
• Knowledge of OOP and basic concepts like factory constructors and spread operators.
• Good understanding of asynchronous request handling and partial page updates.
• Knowledge of modern authorisation mechanisms and design patterns.
• Experience in test-driven development.
• Proficient understanding of code versioning tools, such as Git, Bitbucket etc.
• Experience with deployment of applications (Play Store and App Store).
• Experience programming in Android and iOS will be a big plus.
• Proficient understanding of cross-platform compatibility.
• Experience with web sockets will be a big plus.

(ref:hirist.tech)

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Microsoft’s Cloud business is expanding, and the Cloud Supply Chain (CSCP) organization is responsible for enabling the hardware infrastructure underlying this growth including AI! CSCP’s vision is to empower customers to achieve more by delivering Cloud and AI capabilities at scale. Our mission is to deliver the world's computer with an industry-leading supply chain. The CSCP organization is responsible for traditional supply chain functions such as plan, source, make, deliver, but also manages supportability (spares), sustainability, and decommissioning of datacenter assets worldwide. We deliver the core infrastructure and foundational technologies for Microsoft's over 200 online businesses including Bing, MSN, Office 365, Xbox Live, OneDrive and the Microsoft Azure platform for external customers. Our infrastructure is supported by more than 300 datacenters around the world that enable services for more than 1 billion customers in over 90 countries. Microsoft Cloud Planning (MCP) is the central planning function within CSCP focused on forecasting, demand planning, and supply planning for all Microsoft Cloud services and associated hardware, directly impacting the success of Microsoft's cloud business. Responsibilities Researching and developing production-grade models (forecasting, anomaly detection, optimization, clustering, etc.) for our global cloud business by using statistical and machine learning techniques. Manage large volumes of data, and create new and improved solutions for data collection, management, analyses, and data science model development. Drive the onboarding of new data and the refinement of existing data sources through feature engineering and feature selection. Apply statistical concepts and cutting-edge machine learning techniques to analyze cloud demand and optimize our data science model code for distributed computing platforms and task automation. Work closely with other data scientists and data engineers to deploy models that drive cloud infrastructure capacity planning. Present analytical findings and business insights to project managers, stakeholders, and senior leadership and keep abreast of new statistical / machine learning techniques and implement them as appropriate to improve predictive performance. Oversees and directs the plan or forecast across the company for demand planning. Evangelizes the demand plan with other leaders. Drives clarity and understanding of what is required to achieve the plan (e.g., promotions, sales resources, collaborative planning, forecasting, and replenishment [CPFR], budget, engineering changes) and assesses plans to mitigate potential risks and issues. Oversees the analysis of data and leads the team in identifying trends, patterns, correlations, and insights to develop new forecasting models and improve existing models. Oversees development of short and long term (e.g., weekly, monthly, quarterly) demand forecasts and develops and publishes key forecast accuracy metrics. Analyzes data to identify potential sources of forecasting error. Serves as an expert resource and leader of demand planning across the company and ensures that business drivers are incorporated into the plan (e.g., forecast, budget). Leads collaboration among team and leverages data to identify pockets of opportunity to apply state-of-the-art algorithms to improve a solution to a business problem. Consistently leverages knowledge of techniques to optimize analysis using algorithms. 
Modifies statistical analysis tools for evaluating machine learning models. Solves deep and challenging problems in circumstances such as when model predictions are not correct, when models do not match the training data or the design outcomes, when the data is not clean, when it is unclear which analyses to run, and when the process is ambiguous. Provides coaching to team members on business context, interpretation, and the implications of findings. Interprets findings and their implications for multiple businesses, and champions methodological rigor by calling attention to the limitations of knowledge wherever biases in data, methods, and analysis exist. Generates and leverages insights that inform future studies and reframe the research agenda. Informs current business decisions by implementing and adapting supply-chain strategies through complex business intelligence. Connects across functional teams and the broader organization outside of Demand Planning to advocate for continuous improvement and maintain best practices. Leads broad governance and rhythm-of-the-business processes that ensure cross-group collaboration, discussion of key issues, and an opportunity to build proposed solutions to address current or future business needs.

Qualifications

Required:
• M.Sc. in Statistics, Applied Mathematics, Applied Economics, Computer Science or Engineering, Data Science, Operations Research or a similar applied quantitative field.
• 4-8 years of industry experience in developing production-grade statistical and machine learning code in a collaborative team environment.
• Prior experience in machine learning using R or Python (scikit / numpy / pandas / statsmodels).
• Prior experience in time series forecasting (a minimal sketch follows this listing).
• Prior experience with typical data management systems and tools such as SQL.
• Knowledge and ability to work within a large-scale computing or big data context, and hands-on experience with Hadoop, Spark, Databricks or similar.
• Excellent analytical skills; ability to understand business needs and translate them into technical solutions, including analysis specifications and models.
• Creative thinking skills with an emphasis on developing innovative methods to solve hard problems under ambiguity with no obvious solutions.
• Good interpersonal and communication (verbal and written) skills, including the ability to write concise and accurate technical documentation and communicate technical ideas to non-technical audiences.

Preferred:
• PhD in Statistics, Applied Mathematics, Applied Economics, Computer Science or Engineering, Data Science, Operations Research or a similar applied quantitative field.
• Experience in machine learning using R or Python (scikit / numpy / pandas / statsmodels) with skill level at or near fluency.
• Experience with deep learning models (e.g., TensorFlow, PyTorch, CNTK) and solid knowledge of theory and practice.
• Practical and professional experience contributing to and maintaining a large code base with code versioning systems such as Git.
• Knowledge of supply chain models, operations research techniques, optimization modelling and solvers.

Microsoft is an equal opportunity employer.
Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
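To give a concrete but purely illustrative flavour of the time-series forecasting work this role describes, here is a minimal sketch using pandas and statsmodels, both named in the qualifications above. The synthetic demand data, the Holt-Winters model choice, and the accuracy metric are assumptions for demonstration only, not Microsoft's actual methodology.

```python
# Minimal, illustrative demand-forecasting sketch (assumed data and model choice).
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly demand with trend and yearly seasonality (placeholder for real cloud demand data).
rng = np.random.default_rng(42)
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
demand = pd.Series(
    100 + 2 * np.arange(48) + 15 * np.sin(2 * np.pi * np.arange(48) / 12) + rng.normal(0, 5, 48),
    index=idx,
)

train, test = demand[:-6], demand[-6:]

# Holt-Winters (additive trend and seasonality) as one simple baseline forecaster.
model = ExponentialSmoothing(train, trend="add", seasonal="add", seasonal_periods=12).fit()
forecast = model.forecast(6)

# One common forecast-accuracy metric: mean absolute percentage error (MAPE).
mape = (abs(forecast - test) / test).mean() * 100
print(forecast.round(1))
print(f"MAPE over the 6-month holdout: {mape:.1f}%")
```

A real pipeline would compare several candidate models and publish accuracy metrics like the MAPE above on a recurring cadence, as the responsibilities section suggests.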

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary
• Strong analytical and problem-solving skills
• Extensive coding experience in Java and Advanced Java programming
• Good understanding of the software development lifecycle using Agile/Waterfall models
• Good knowledge of object-oriented analysis and design
• Experience in source control, versioning, branching, etc.
• Good understanding of fundamental design principles and coding standards
• Extensive experience in automation testing and tooling
• 4 to 8 years of experience in application development using Java, J2EE, and Microservices

Key Responsibilities
Business
• Understand the bank's priorities on strategic initiatives and on new programs planned further ahead.
Processes
• Adhere to ADO principles and guidelines on all program delivery.
• Comply with ICS guidelines, security, and data protection requirements.
• Be compliant with the SDF/TDA/ADO process and drive the bank towards automating process areas and removing redundancies.
Governance
• Must be aware of the Group's regulatory framework and is expected to adhere to it based on the role.
• Must understand the oversight and controls related to the business unit and job function, and deliver accordingly.
Regulatory & Business Conduct
• Display exemplary conduct and live by the Group's Values and Code of Conduct.
• Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines, and the Group Code of Conduct.
• Effectively and collaboratively identify, escalate, mitigate, and resolve risk, conduct, and compliance matters.
Key Stakeholders
• CEE Hive ITO, CEE Engineering Team, Application Delivery, PSS, Testing
Other Responsibilities
• Manage and handle all CCIB CLDM objectives.

Qualifications
Technical Competence
• Good knowledge of design patterns and principles, and of Microservices architecture.
• Strong hands-on experience with the CI/CD pattern and good knowledge of related tools such as Git, ADO, Jenkins, OpenShift, Kubernetes, and Docker, as well as automation test tools such as JMeter and SoapUI.
• Good knowledge of API building (Web Services, SOAP/REST).
• Good knowledge of multi-threading and multi-processing implementations.
• Good knowledge of dependency injection (e.g., Spring DI/Blueprints) and JSON libraries (e.g., Jackson/GSON).
• Good knowledge of the Linux operating system (preferably RHEL).
• Expertise in RDBMS solutions (Oracle, PostgreSQL) and NoSQL offerings (Cassandra, MongoDB, etc.).
• Strong programming and hands-on skills in Java.
• Strong programming and hands-on skills in Python.
• Strong experience with open-source frameworks such as Spring, Hibernate, Transaction Management, and Apache libraries (Camel/ActiveMQ/Commons).
• Good understanding of code quality tools such as SonarQube, AppScan, and AQUA.
• Strong experience in unit testing and code coverage using JUnit/Mockito.
Good to Have
• Experience in application development for Client Due Diligence (CDD), CRA, On-boarding, FATCA & CRS.
• Good knowledge of cloud-native application development and of cloud computing services.
• CDD process awareness, including AML, KYC, and Screening; ability to enhance and improve CDD-related processes.
Role Specific Technical Competencies
• Java, J2EE, Spring Boot, Microservices
• Python, HiveQL
• OCP, Kubernetes
• PL/SQL Programming, RDBMS
• DevOps Tools
• React JS

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact.
For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge, and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents, and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.

Together we:
• Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
• Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
• Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What We Offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial, and social wellbeing.
• Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
• Time off including annual leave, parental/maternity leave (20 weeks), sabbatical (12 months maximum), and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to a minimum of 30 days.
• Flexible working options based around home and office locations, with flexible working patterns.
• Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders, and a range of self-help toolkits.
• A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual, and digital learning.
• Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions, and geographies, where everyone feels respected and can realise their full potential.
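As a small, purely illustrative sketch of the multi-threading competence listed in the qualifications above (written in Python, which is also in the role's stack), the following fans out independent REST calls with a thread pool. The endpoint URL and response shape are hypothetical, not Standard Chartered APIs.

```python
# Illustrative only: fan out independent HTTP calls with a thread pool.
# The endpoint and payload shape are hypothetical.
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

BASE_URL = "https://api.example.com/customers"  # hypothetical endpoint

def fetch_customer(customer_id: int) -> dict:
    """Fetch a single (hypothetical) customer record over REST."""
    response = requests.get(f"{BASE_URL}/{customer_id}", timeout=10)
    response.raise_for_status()
    return response.json()

def fetch_many(customer_ids: list[int], max_workers: int = 8) -> dict[int, dict]:
    """I/O-bound work such as HTTP calls benefits from threads despite the GIL."""
    results: dict[int, dict] = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fetch_customer, cid): cid for cid in customer_ids}
        for future in as_completed(futures):
            cid = futures[future]
            try:
                results[cid] = future.result()
            except requests.RequestException as exc:
                print(f"customer {cid} failed: {exc}")
    return results

if __name__ == "__main__":
    print(fetch_many([101, 102, 103]))
```

For CPU-bound work, the same pattern would typically switch to a process pool; the choice between the two is exactly the multi-threading versus multi-processing distinction the posting mentions.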

Posted 2 weeks ago

Apply

0 years

0 Lacs

Greater Hyderabad Area

On-site

Area(s) of Responsibility
• Strong proficiency in JavaScript, including DOM manipulation and the JavaScript object model
• Thorough understanding of React.js and its core principles
• Experience with HTML5/CSS3
• Experience with popular React.js workflows (such as Flux or Redux)
• Familiarity with newer specifications of ECMAScript
• Experience with data structure libraries (e.g., Immutable.js)
• Knowledge of isomorphic React is a plus
• Familiarity with RESTful APIs
• Knowledge of modern authorization mechanisms, such as JSON Web Tokens
• Familiarity with modern front-end build pipelines and tools
• Experience with common front-end development tools such as Babel, Webpack, NPM, etc.
• Familiarity with code versioning tools such as Git, SVN, and Mercurial

Posted 2 weeks ago

Apply

1.0 - 3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Donaldson is committed to solving the world’s most complex filtration challenges. Together, we make cool things. As an established technology and innovation leader, we are continuously evolving to meet the filtration needs of our changing world. Join a culture of collaboration and innovation that matters and a chance to learn, effect change, and make meaningful contributions at work and in communities.

We are seeking a skilled and motivated Data Engineer II to join the Corporate Technology Data Engineering Team. This role is important for developing and sustaining our data infrastructure, which supports a wide range of R&D, sensor-based, and modeling technologies. The Data Engineer II will design and maintain pipelines that enable the use of complex datasets. This position directly empowers faster decision making by building trustworthy data flows and access for engineers and scientists.

Primary Role Responsibilities:
• Develop and maintain data ingestion and transformation pipelines across on-premise and cloud platforms.
• Develop scalable ETL/ELT pipelines that integrate data from a variety of sources (e.g., form-based entries, SQL databases, Snowflake, SharePoint).
• Collaborate with data scientists, data analysts, simulation engineers, and IT personnel to deliver data engineering and predictive data analytics projects.
• Implement data quality checks, logging, and monitoring to ensure reliable operations.
• Follow and maintain data versioning, schema evolution, and governance controls and guidelines.
• Help administer Snowflake environments for cloud analytics.
• Work with more senior staff to improve solution architectures and automation.
• Stay updated with the latest data engineering technologies and trends.
• Participate in code reviews and knowledge-sharing sessions.
• Participate in and plan new data projects that impact business and technical domains.

Required Qualifications & Relevant Experience:
• Bachelor’s or master’s degree in computer science, data engineering, or a related field.
• 1-3 years of experience in data engineering, ETL/ELT development, and/or backend software engineering.
• Demonstrated expertise in Python and SQL.
• Demonstrated experience working with data lakes and/or data warehouses (e.g., Snowflake, Databricks, or similar).
• Familiarity with source control and development practices (e.g., Git, Azure DevOps).
• Strong problem-solving skills and eagerness to work with cross-functional, globalized teams.

Preferred Qualifications (in addition to the required qualifications):
• Working experience and knowledge of scientific and R&D workflows, including simulation data and LIMS systems.
• Demonstrated ability to balance operational support and longer-term project contributions.
• Experience with Java.
• Strong communication and presentation skills.
• Motivated and self-driven learner.

Employment opportunities for positions in the United States may require use of information which is subject to the export control regulations of the United States. Hiring decisions for such positions are required by law to be made in compliance with these regulations. Applicants for employment opportunities in other countries must be able to meet the comparable export control requirements of that country and of the United States. Donaldson Company has been made aware that there are several recruiting scams that are targeting job seekers. These scams have attempted to solicit money for job applications and/or collect confidential information; Donaldson will never solicit money during the application or recruiting process.
Donaldson only accepts online applications through our Careers | Donaldson Company, Inc. website and any communication from a Donaldson recruiter would be sent using a donaldson.com email address. If you have any questions about the legitimacy of an employment opportunity, please reach out to talentacquisition@donaldson.com to verify that the communication is from Donaldson. Our policy is to provide equal employment opportunities to all qualified persons without regard to race, gender, color, disability, national origin, age, religion, union affiliation, sexual orientation, veteran status, citizenship, gender identity and/or expression, or other status protected by law.
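As a hedged illustration of the ETL/ELT and data-quality responsibilities in the posting above, here is a minimal pandas-only sketch. The source file, column names, physical-range check, and output format are assumptions for demonstration, not Donaldson's actual pipeline.

```python
# Minimal, illustrative ETL step with simple data-quality checks (pandas only).
# Source file, column names, and thresholds are assumptions for demonstration.
import pandas as pd

REQUIRED_COLUMNS = {"sensor_id", "timestamp", "reading"}

def extract(path: str) -> pd.DataFrame:
    """Read raw sensor readings from a CSV extract."""
    return pd.read_csv(path, parse_dates=["timestamp"])

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Fail fast on structural problems; drop obviously bad rows."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"missing required columns: {sorted(missing)}")
    before = len(df)
    df = df.dropna(subset=["sensor_id", "timestamp"])
    df = df[df["reading"].between(-40, 150)]  # plausible physical range (assumed)
    print(f"validation dropped {before - len(df)} of {before} rows")
    return df

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Aggregate to hourly means per sensor for downstream analytics."""
    return (
        df.set_index("timestamp")
          .groupby("sensor_id")["reading"]
          .resample("1h")
          .mean()
          .reset_index()
    )

def load(df: pd.DataFrame, path: str) -> None:
    """Write the curated dataset; a real pipeline might load into Snowflake instead.
    Requires a parquet engine such as pyarrow."""
    df.to_parquet(path, index=False)

if __name__ == "__main__":
    load(transform(validate(extract("raw_readings.csv"))), "hourly_readings.parquet")
```

In production the same extract/validate/transform/load stages would typically be orchestrated, logged, and monitored rather than chained in a script, but the shape of the work is the same.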

Posted 2 weeks ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Band: C1/C2

The feature engineers will be focused on:
• Designing and implementing production-grade features from enterprise-scale data
• Working closely with product teams and data scientists to translate hypotheses into reusable features
• Building, testing, and versioning features with appropriate lineage and documentation (see the sketch after this list)
• Supporting reuse across teams, with an emphasis on stability and performance
• Contributing to the evolution of the feature store and feature library architecture
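As a hedged illustration of what building, versioning, and tracking lineage for a feature can look like in practice, here is a minimal Python sketch. The feature definition, the source-hash versioning scheme, and the column names are assumptions, not this team's actual feature store.

```python
# Minimal feature-versioning sketch: compute a feature, tag it with a version
# derived from its defining logic, and keep simple lineage metadata.
# All names and the hashing scheme are illustrative assumptions.
import hashlib
import inspect
import json
from datetime import datetime, timezone

import pandas as pd

def rolling_spend_30d(transactions: pd.DataFrame) -> pd.Series:
    """Feature: total customer spend over a trailing 30-day window."""
    return (
        transactions.sort_values("event_time")
        .set_index("event_time")
        .groupby("customer_id")["amount"]
        .transform(lambda s: s.rolling("30D").sum())
    )

def feature_version(func) -> str:
    """Version = short hash of the feature's source code, so logic changes bump the version."""
    source = inspect.getsource(func)
    return hashlib.sha256(source.encode()).hexdigest()[:12]

def build_feature(transactions: pd.DataFrame) -> tuple[pd.Series, dict]:
    values = rolling_spend_30d(transactions)
    lineage = {
        "feature_name": "rolling_spend_30d",
        "version": feature_version(rolling_spend_30d),
        "inputs": ["transactions.customer_id", "transactions.amount", "transactions.event_time"],
        "built_at": datetime.now(timezone.utc).isoformat(),
        "row_count": int(values.notna().sum()),
    }
    return values, lineage

if __name__ == "__main__":
    df = pd.DataFrame({
        "customer_id": [1, 1, 2],
        "event_time": pd.to_datetime(["2024-01-01", "2024-01-10", "2024-01-05"]),
        "amount": [100.0, 50.0, 75.0],
    })
    feature, meta = build_feature(df)
    print(json.dumps(meta, indent=2))
```

A production feature store adds registration, backfills, and online/offline serving, but the core idea of pairing feature values with versioned lineage metadata is the same.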

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are looking for a skilled and passionate Frontend Engineer with strong Next.js experience to join our development team. You will be responsible for building fast, scalable, and user-friendly web applications while working closely with designers, backend engineers, and product managers. This is a great opportunity to work on modern web technologies and contribute to high-impact projects.

Job Description:

Roles & Responsibilities
• Develop, test, and maintain responsive web applications using Next.js, React, and modern JavaScript (ES6+).
• Translate Figma or other design files into pixel-perfect, accessible, and high-performance UIs.
• Collaborate with cross-functional teams to define, design, and ship new features.
• Ensure the technical feasibility of UI/UX designs.
• Optimize applications for maximum speed and scalability.
• Write clean, maintainable, and well-documented code.
• Integrate the frontend with APIs and third-party services.
• Conduct code reviews, identify performance bottlenecks, and suggest improvements.
• Stay up to date with the latest frontend trends and best practices.
• Participate in Agile ceremonies such as sprint planning, stand-ups, and retrospectives.

Qualifications & Skills
• Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience).
• 3 to 5 years of experience as a frontend developer.
• Strong command of HTML5, CSS3 (including SCSS/Tailwind), JavaScript, and TypeScript.
• Proven experience with the React.js and Next.js frameworks.
• Solid understanding of server-side rendering (SSR) and static site generation (SSG) in Next.js.
• Experience with API integration (REST and GraphQL).
• Familiarity with state management libraries like Redux, Zustand, or the Context API.
• Understanding of responsive design and cross-browser compatibility.
• Proficient with Git, CI/CD workflows, and code versioning tools.
• Experience with frontend testing frameworks (e.g., Jest, React Testing Library) is a plus.
• Knowledge of SEO best practices in a Next.js environment is desirable.
• Good communication skills and the ability to work in a collaborative team environment.

Nice to Have
• Experience with headless CMSs (e.g., Contentful, Strapi, Sanity).
• Familiarity with analytics tools (e.g., Google Analytics, Segment).
• Exposure to backend development or DevOps is a plus.
• Experience working in Agile/Scrum environments.

Location: Pune
Brand: Merkle
Time Type: Full time
Contract Type: Permanent

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
Leadership and Strategy
• Define and execute the data science roadmap aligned with business goals
• Lead a team of data scientists and full stack developers, fostering a high-performance, collaborative culture
• Partner with product, engineering, and business stakeholders to identify opportunities for data-driven innovation
Technical Execution
• Architect and oversee the development of full stack solutions integrating machine learning models into production systems
• Guide the design and implementation of data pipelines, APIs, and dashboards using modern frameworks (e.g., Python, React, Node.js)
• Ensure best practices in model development, versioning, testing, and deployment (MLOps)
Project and People Management
• Manage project timelines, resource allocation, and delivery milestones
• Mentor team members on both data science methodologies and software engineering principles
• Drive continuous improvement in team processes, tools, and technical capabilities
• Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
• Master’s or PhD in Computer Science, Data Science, Engineering, or a related field
• 8+ years of experience in data science and software development, with 3+ years in a leadership role
• Proven experience with full stack development (React, Node.js, Python, REST APIs)
• Experience deploying models into production environments and managing their lifecycle (CI/CD, Docker, Kubernetes)
• Solid background in machine learning, statistical modeling, and data engineering

Preferred Qualifications
• Experience with big data technologies (Spark, Hadoop)
• Familiarity with cloud platforms (AWS, Azure, GCP)
• Exposure to healthcare, logistics, or consumer analytics domains

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
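To make the "deploying models into production" requirement above more concrete, here is a minimal, hedged model-serving sketch in Python using FastAPI and a joblib-saved scikit-learn estimator. The model file, request schema, and framework choice are assumptions for illustration, not Optum's actual stack; a containerized version of something like this is what typically sits behind the CI/CD, Docker, and Kubernetes tooling the posting mentions.

```python
# Minimal, illustrative model-serving endpoint (FastAPI + scikit-learn via joblib).
# The model file, feature schema, and framework choice are assumptions.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-model-service", version="1.0.0")
model = joblib.load("model.joblib")  # assumed: a fitted scikit-learn estimator

class PredictRequest(BaseModel):
    features: list[float]  # assumed flat numeric feature vector

class PredictResponse(BaseModel):
    prediction: float
    model_version: str

@app.get("/health")
def health() -> dict:
    """Liveness probe for container orchestration (e.g., Kubernetes)."""
    return {"status": "ok"}

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    """Score a single observation with the loaded model."""
    x = np.asarray(req.features, dtype=float).reshape(1, -1)
    y = float(model.predict(x)[0])
    return PredictResponse(prediction=y, model_version=app.version)

# Run locally with: uvicorn service:app --reload  (assuming this file is saved as service.py)
```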

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Freelance | Remote Opportunity

We are seeking an experienced and highly motivated SAP BTP Developer (CAP) to lead and support the end-to-end implementation of the SAP Emarsys project, with a focus on integration with SAP S/4HANA Private Cloud. The ideal candidate will possess a strong understanding of SAP BTP, pro-code SAP Build Code/CAP, Node.js, Kyma, and low-code development.

What You'll Do:
• Design and implement side-by-side extensions for SAP Sales and Service Cloud V2 using SAP BTP
• Develop low-code apps using SAP Build Apps for web and mobile use cases
• Build pro-code extensions using SAP Build Code / CAP (Node.js or Java) and Kyma Runtime (Kubernetes)
• Integrate external systems and services via OData v4 and REST APIs (see the sketch after this posting)
• Extend core business processes such as Lead, Opportunity, Account, and Ticket management using scalable services
• Secure applications using OAuth 2.0 with SAP IAS (Identity Authentication Service)
• Collaborate with SAP functional teams to identify use cases for extensions
• Contribute to best practices for versioning, CI/CD, deployment, and testing of BTP applications

Nice to Have:
Low-Code (SAP Build Apps)
• Experience building mobile or web apps on SAP Build Apps (or similar low-code platforms)
• Knowledge of app logic design, API integration, visual interface building, and deployment on SAP BTP
Pro-Code (SAP Build Code / Kyma)
• Proficiency in the SAP Cloud Application Programming (CAP) model using Node.js or Java
• Experience with SAP BTP Cloud Foundry and Kyma/Kubernetes runtimes
• Strong understanding of OData v4 and REST APIs, both consumption and exposure
• Hands-on experience with authentication and authorization using OAuth 2.0 / IAS

What We Offer:
• Fully Remote Work – work from anywhere!
• Commitment: up to 4-5 hours per day
• Duration: 6-9 months with strong potential for extension based on performance and project needs
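As a small, hedged illustration of the OData v4 consumption mentioned above, here is a sketch in Python (kept in Python only for consistency with the other examples on this page; the role's actual runtimes are Node.js/Java CAP). The service URL, entity set, fields, and token handling are hypothetical.

```python
# Illustrative OData v4 consumption over HTTPS.
# URL, entity set, field names, and token are hypothetical, not a real SAP tenant.
import requests

SERVICE_URL = "https://my-tenant.example.com/odata/v4/SalesService"  # hypothetical
ACCESS_TOKEN = "<oauth2-access-token>"  # obtained separately, e.g. via an OAuth 2.0 client-credentials flow

def fetch_open_opportunities(top: int = 10) -> list[dict]:
    """Query an (assumed) Opportunities entity set using standard OData v4 query options."""
    response = requests.get(
        f"{SERVICE_URL}/Opportunities",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}", "Accept": "application/json"},
        params={
            "$select": "ID,name,expectedRevenue",
            "$filter": "status eq 'OPEN'",
            "$orderby": "expectedRevenue desc",
            "$top": str(top),
        },
        timeout=15,
    )
    response.raise_for_status()
    return response.json().get("value", [])  # OData v4 wraps collections in a "value" array

if __name__ == "__main__":
    for opp in fetch_open_opportunities():
        print(opp)
```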

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies