720 Rollback Jobs - Page 8

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: VP - Digital Expert Support Lead
Experience: 12+ years
Location: Pune

Position Overview
The Digital Expert Support Lead is a senior leadership role responsible for the resilience, scalability, and enterprise-grade supportability of AI-powered expert systems deployed across key domains such as Wholesale Banking, Customer Onboarding, Payments, and Cash Management. The role requires technical depth, process rigor, stakeholder fluency, and the ability to lead cross-functional squads that keep GenAI and digital expert agents performing seamlessly in production. The candidate will work closely with Engineering, Product, AI/ML, SRE, DevOps, and Compliance teams to drive operational excellence and shape the next generation of support standards for AI-driven enterprise systems.

Role-Level Expectations
- Functionally accountable for all post-deployment support and performance assurance of digital expert systems.
- Operates at L3+ support level, enabling L1/L2 teams through proactive observability, automation, and runbook design.
- Leads stability engineering squads, AI support specialists, and DevOps collaborators across multiple business units.
- Acts as the bridge between operations and engineering, ensuring technical fixes feed into the product backlog effectively.
- Supports continuous improvement through incident intelligence, root cause reporting, and architecture hardening.
- Sets the support governance framework (SLAs/OLAs, monitoring KPIs, downtime classification, recovery playbooks).

Position Responsibilities

Operational Leadership & Stability Engineering
- Own the production health and lifecycle support of all digital expert systems across onboarding, payments, and cash management.
- Build and govern the AI Support Control Center to track usage patterns, failure alerts, and escalation workflows.
- Define and enforce SLAs/OLAs for LLMs, GenAI endpoints, NLP components, and associated microservices.
- Establish and maintain observability stacks (Grafana, ELK, Prometheus, Datadog) integrated with model behavior.
- Lead major incident response and drive cross-functional war rooms for critical recovery.
- Ensure AI pipeline resilience through fallback logic, circuit breakers, and context caching (a sketch of this pattern follows the listing).
- Review and fine-tune inference flows, timeout parameters, latency thresholds, and token usage limits.

Engineering Collaboration & Enhancements
- Drive code-level hotfixes or patches in coordination with Dev, QA, and Cloud Ops.
- Implement automation scripts for diagnosis, log capture, reprocessing, and health validation.
- Maintain well-structured GitOps pipelines for support-related patches, rollback plans, and enhancement sprints.
- Coordinate enhancement requests based on operational analytics and feedback loops.
- Champion enterprise integration and alignment with Core Banking, ERP, H2H, and transaction processing systems.

Governance, Planning & People Leadership
- Build and mentor a high-caliber AI Support Squad of support engineers, SREs, and automation leads.
- Define and publish support KPIs, operational dashboards, and quarterly stability scorecards.
- Present production health reports to business, engineering, and executive leadership.
- Define runbooks, response playbooks, knowledge base entries, and onboarding plans for newer AI support use cases.
- Manage relationships with AI platform vendors, cloud ops partners, and application owners.

Must-Have Skills & Experience
- 12+ years of software engineering, platform reliability, or AI systems management experience.
- Proven track record of leading support and platform operations for AI/ML/GenAI-powered systems.
- Strong experience with cloud-native platforms (Azure/AWS), Kubernetes, and containerized observability.
- Deep expertise in Python and/or Java for production debugging and script/tooling development.
- Proficiency in monitoring, logging, tracing, and alerting using enterprise tools (Grafana, ELK, Datadog).
- Familiarity with token economics, prompt tuning, inference throttling, and GenAI usage policies.
- Experience with distributed systems, banking APIs, and integration with Core/ERP systems.
- Strong understanding of incident management frameworks (ITIL) and the ability to drive postmortem discipline.
- Excellent stakeholder management, cross-functional coordination, and communication skills.
- Demonstrated ability to mentor senior ICs and influence product and platform priorities.

Nice-to-Haves
- Exposure to enterprise AI platforms such as OpenAI, Azure OpenAI, Anthropic, or Cohere.
- Experience supporting multi-tenant AI applications with business-driven SLAs.
- Hands-on experience integrating with compliance and risk monitoring platforms.
- Familiarity with automated root cause inference or anomaly detection tooling.
- Past participation in enterprise architecture councils or platform reliability forums.
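The resilience bullet above (fallback logic, circuit breakers, context caching) names a standard pattern. A minimal Python sketch of it, assuming a caller-supplied call_genai_endpoint function and illustrative thresholds; a real implementation would add per-endpoint state, metrics, and TTL-bounded caching:

```python
import time

class CircuitBreaker:
    """Trips after `max_failures` consecutive errors; probes again after `reset_after` seconds."""
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_after:
            self.opened_at = None  # half-open: let one probe request through
            self.failures = 0
            return True
        return False

    def record(self, ok):
        if ok:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()

breaker = CircuitBreaker()
cache = {}  # context cache: last good answer per prompt

def ask_expert(prompt, call_genai_endpoint, timeout=10.0):
    """Call the GenAI endpoint behind the breaker; fall back to cached context on failure."""
    if breaker.allow():
        try:
            answer = call_genai_endpoint(prompt, timeout=timeout)
            breaker.record(ok=True)
            cache[prompt] = answer
            return answer
        except Exception:
            breaker.record(ok=False)
    return cache.get(prompt, "Service degraded; please retry shortly.")
```

The half-open probe keeps a single request flowing after the cool-down, so recovery is detected without a thundering herd against a still-struggling endpoint.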

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Greater Vadodara Area

On-site

About Navaera
Navaera Software Services Pvt. Ltd. (NSS), established in 2014, is the technology and engineering hub of Navaera Worldwide, LLC. With a focus on innovation, NSS delivers mission-critical software development and production-grade support to global enterprise clients. We specialize in scalable software systems, platform support, and continuous integration pipelines designed to drive digital transformation across industries.

Role Overview
We are seeking a Senior Client Service Technical Support Engineer with deep experience in enterprise application support, production systems triage, and technical issue resolution. This is a hands-on engineering role: you'll operate at the intersection of systems architecture, data integrity, incident response, and customer satisfaction. You will own escalations across multiple technology stacks and deliver solutions that are both technically sound and aligned with critical business KPIs. The ideal candidate is technically fluent in Java-based architectures, relational databases, and enterprise-grade monitoring and support automation frameworks, with strong communication skills for interacting with developers, business users, and technical leadership.

Key Responsibilities

System Support & Incident Management
- Serve as the escalation point for L2/L3 support across client-facing platforms; perform advanced diagnostics, code-level triage, and real-time issue remediation.
- Work with Java/J2EE-based applications and microservices in a distributed cloud-native environment; analyze application logs, JVM threads, and memory utilization.
- Leverage observability tools (e.g., ELK Stack, Prometheus/Grafana, AppDynamics, New Relic) for proactive monitoring and troubleshooting.
- Manage incident lifecycles using ITIL principles; coordinate with engineering and DevOps teams for RCA, resolution, and post-mortem reporting.

Business & Technical Bridge
- Translate high-level business requirements into actionable support workflows and system change requests.
- Convert complex error scenarios into communication that non-technical stakeholders can understand.
- Identify recurring issues and implement knowledge-based automation or runbooks to reduce mean time to resolution (MTTR).

System Configuration & Change Management
- Modify business configurations and service logic within enterprise applications using system admin consoles, config files, or APIs.
- Collaborate with solution architects to assess system impacts, data flow implications, and service integrations.
- Support CI/CD pipelines by validating deployments and performing rollback procedures when required.

Data Analytics & Reporting
- Use SQL and database profiling tools (e.g., pg_stat_statements, EXPLAIN ANALYZE) to analyze system behavior and recommend query optimization or indexing strategies (a sketch follows this listing).
- Build data models for operational analytics to support executive dashboards and SLA reporting.
- Perform data integrity checks, data migration audits, and reconciliation between upstream/downstream systems.

System Enhancements & UAT
- Participate in agile ceremonies (backlog grooming, sprint reviews) to provide production insight during feature design.
- Develop detailed test plans and lead User Acceptance Testing (UAT) across environments before feature rollouts.
- Validate system integration points, API responses, and authentication mechanisms in test and staging environments.

Technical Skill Requirements

Core Technologies
- Languages: Java (support/debug level), shell scripting, SQL
- Databases: PostgreSQL, MySQL, or equivalent RDBMS
- Operating Systems: Linux (Ubuntu/CentOS), Windows Server
- Monitoring/Logging: ELK Stack, Grafana, Prometheus, AppDynamics, Splunk
- Version Control & DevOps: GitHub/GitLab, Jenkins, Docker (basic), Jira/Confluence

Tools & Frameworks
- ITSM tools (e.g., ServiceNow, Freshservice)
- ETL/data sync tools (e.g., Apache NiFi, custom scripts)
- API debugging tools (Postman, Swagger)
- CI/CD awareness for deployment validation and rollback handling

Qualifications & Experience
- 3+ years of technical support experience in a high-availability, enterprise-grade software environment
- Strong understanding of distributed systems and service-oriented architecture (SOA)
- Proven ability to perform root cause analysis and deliver technical resolutions under pressure
- Advanced knowledge of SQL and relational data models
- Hands-on experience supporting Java-based applications
- Bachelor's degree in Computer Science, Engineering, or an equivalent technical discipline
- US B1 visa (mandatory for travel/onsite client interactions)
- 1+ years of experience in data analytics (Power BI, Tableau, or custom SQL dashboards)
- 1+ years of project coordination or agile delivery exposure
- Working knowledge of scripting (Python, Bash) for automation or data analysis

Soft Skills & Attributes
- Exceptional communication skills: able to articulate root cause and resolution to both technical and non-technical audiences
- Strong analytical mindset with a continuous-improvement attitude
- Self-starter able to work independently in a fast-paced, SLA-driven environment
- Proven track record of balancing multiple priorities and delivering high-quality results under pressure

(ref: hirist.tech)
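The query-analysis responsibility above can be scripted rather than run by hand. A minimal sketch, assuming PostgreSQL with the pg_stat_statements extension enabled, the psycopg2 driver, and illustrative connection details, table, and column names (on PostgreSQL 12 or older the column is total_time rather than total_exec_time):

```python
import psycopg2

# Hypothetical connection details; replace with your environment's values.
conn = psycopg2.connect(host="localhost", dbname="appdb", user="support", password="secret")

def explain(query):
    """Print PostgreSQL's execution plan with runtime stats.
    Note: EXPLAIN ANALYZE actually executes the query, so avoid destructive statements."""
    with conn.cursor() as cur:
        cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + query)
        for (line,) in cur.fetchall():
            print(line)

def slowest_queries(limit=5):
    """Top statements by cumulative execution time, via pg_stat_statements."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT query, calls, total_exec_time "
            "FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT %s",
            (limit,),
        )
        return cur.fetchall()

explain("SELECT * FROM payments WHERE customer_id = 42")  # illustrative table/column
for query, calls, total_ms in slowest_queries():
    print(f"{total_ms:10.1f} ms  {calls:6d} calls  {query[:60]}")
```

Starting from cumulative time rather than a single slow trace is what makes indexing recommendations defensible: it targets the statements that actually dominate load.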

Posted 3 weeks ago

Apply

0 years

0 Lacs

Secunderābād, Telangana, India

On-site

About Us
SBI Card is a leading pure-play credit card issuer in India, offering a wide range of credit cards to cater to diverse customer needs. We are constantly innovating to meet the evolving financial needs of our customers, empowering them with digital currency for a seamless payment experience and rewarding benefits. At SBI Card, the motto 'Make Life Simple' inspires every initiative, ensuring that customer convenience is at the forefront of all that we do. We are committed to building an environment where people can thrive and create a better future for everyone. SBI Card is proud to be an equal opportunity and inclusive employer and welcomes employees without discrimination on the grounds of race, color, gender, religion, creed, disability, sexual orientation, gender identity, marital status, caste, etc. SBI Card is committed to fostering an inclusive and diverse workplace where all employees are treated equally with dignity and respect, which makes it a promising place to work. Join us to shape the future of digital payment in India and unlock your full potential.

What's In It For You
- SBI Card truly lives by the work-life balance philosophy; we offer a robust wellness and wellbeing program to support the mental and physical health of our employees
- Admirable work deserves to be rewarded: a well-curated bouquet of rewards and recognition programs for employees
- Dynamic, inclusive, and diverse team culture
- Gender-neutral policy
- Inclusive health benefits for all: medical insurance, personal accident, group term life insurance, annual health checkup, dental and OPD benefits
- Commitment to the overall development of employees through a comprehensive learning and development framework

Role Purpose
Responsible for the management of all collections processes for the allocated portfolio in the assigned CD/Area, based on targets set for resolution, normalization, rollback/absolute recovery, and ROR.

Role Accountability
- Conduct timely allocation of the portfolio to aligned vendors/NFTEs and conduct ongoing reviews to drive performance on business targets through an extended team of field executives and callers
- Formulate tactical short-term incentive plans for NFTEs to increase productivity and drive DRR
- Ensure the critical segments defined by the business are reviewed and performance is driven on them
- Ensure judicious use of hardship tools and adherence to settlement waivers, both on rate and value
- Conduct ongoing field visits on critical accounts and ensure proper documentation in the Collect24 system of all field visits and telephone calls to customers
- Raise red flags in a timely manner based on deterioration in portfolio health indicators/frauds, and raise timely alarms on critical incidents as per compliance guidelines
- Ensure all guidelines mentioned in the SVCL are adhered to and that process hygiene is maintained at aligned agencies
- Ensure 100% data security, using secured data transfer modes and data purging as per policy
- Ensure all customer complaints received are closed within the time frame
- Conduct thorough due diligence while onboarding/offboarding/renewing a vendor, and ensure all necessary formalities are completed prior to allocation
- Ensure agencies raise invoices in a timely manner
- Monitor NFTE ACR and CAPE as per the collection strategy

Measures of Success
- Portfolio coverage
- Resolution rate
- Normalization/rollback rate
- Settlement waiver rate
- Absolute recovery (rupees collected)
- NFTE CAPE
- DRA certification of NFTEs
- Absolute customer complaints
- Absolute audit observations
- Process adherence as per MOU

Technical Skills / Experience / Certifications
Credit card knowledge along with a good understanding of collection processes

Competencies Critical to the Role
Analytical ability, stakeholder management, problem solving, result orientation, process orientation

Qualification
Post-graduate/graduate in any discipline

Preferred Industry
FSI

Posted 3 weeks ago

Apply

0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site

About Us
SBI Card is a leading pure-play credit card issuer in India, offering a wide range of credit cards to cater to diverse customer needs. We are constantly innovating to meet the evolving financial needs of our customers, empowering them with digital currency for a seamless payment experience and rewarding benefits. At SBI Card, the motto 'Make Life Simple' inspires every initiative, ensuring that customer convenience is at the forefront of all that we do. We are committed to building an environment where people can thrive and create a better future for everyone. SBI Card is proud to be an equal opportunity and inclusive employer and welcomes employees without discrimination on the grounds of race, color, gender, religion, creed, disability, sexual orientation, gender identity, marital status, caste, etc. SBI Card is committed to fostering an inclusive and diverse workplace where all employees are treated equally with dignity and respect, which makes it a promising place to work. Join us to shape the future of digital payment in India and unlock your full potential.

What's In It For You
- SBI Card truly lives by the work-life balance philosophy; we offer a robust wellness and wellbeing program to support the mental and physical health of our employees
- Admirable work deserves to be rewarded: a well-curated bouquet of rewards and recognition programs for employees
- Dynamic, inclusive, and diverse team culture
- Gender-neutral policy
- Inclusive health benefits for all: medical insurance, personal accident, group term life insurance, annual health checkup, dental and OPD benefits
- Commitment to the overall development of employees through a comprehensive learning and development framework

Role Purpose
Responsible for delivering on business metrics of portfolio resolution, normalization, rollback, and extraction/absolute recovery and ROR as per the business operating plan, through a team of agency managers and collection vendors.

Role Accountability
- Devise the vendor allocation strategy for the CD/region and ensure appropriate capacity addition based on future business inflows, in line with ACR guidelines
- Ensure adequate legal interventions on the portfolio
- Ensure the critical segments defined by the business are reviewed and performance is driven on them
- Conduct regular performance reviews with vendors and area collection managers on all critical metrics to track portfolio health and performance trends
- Ensure judicious use of hardship tools and adherence to settlement waivers, both on rate and value
- Conduct ongoing field visits on critical accounts and ensure proper documentation in the Collect24 system of all field visits and telephone calls to customers
- Raise red flags in a timely manner based on deterioration in portfolio health indicators/frauds, and raise timely alarms on critical incidents as per compliance guidelines
- Reinforce compliance standards with area collection managers and vendors to drive adherence to the code of conduct
- Ensure all guidelines mentioned in the SVCL are adhered to and that process hygiene is maintained at aligned agencies
- Ensure all customer complaints received are closed within the time frame
- Conduct thorough due diligence while onboarding/offboarding/renewing a vendor, and ensure all necessary formalities are completed prior to allocation
- Ensure monthly cost provisions are reported as per timelines
- Identify upcoming markets in accordance with the sales growth plan and evaluate setting up/expanding operations based on volumes
- For Banca delinquencies, collaborate with partner bank branches in the respective locations to track customers

Measures of Success
- Portfolio coverage
- Resolution rate
- Normalization/rollback rate
- Settlement waiver rate
- Absolute recovery
- Extraction rate
- ACM CAPE
- ROR
- Regulatory customer complaint %
- Vendor SVCL audit adherence
- Process adherence as per MOU

Technical Skills / Experience / Certifications
Credit card knowledge along with a good understanding of collection processes

Competencies Critical to the Role
Analytical ability, stakeholder management, problem solving, result orientation, process orientation

Qualification
Post-graduate/graduate in any discipline

Preferred Industry
FSI

Posted 3 weeks ago

Apply

2.0 - 31.0 years

3 - 9 Lacs

Gurgaon/Gurugram

On-site

Job Title: Manager - Collections (Telecalling Channel Management)
Department: Collections
Industry: Credit Cards / Financial Services

About SBI Card
SBI Card is a leading pure-play credit card issuer in India, committed to empowering customers with innovative and rewarding digital payment solutions. With our core philosophy of "Make Life Simple," we aim to enhance every customer interaction while fostering a vibrant, inclusive, and performance-driven workplace. We are proud to be an equal opportunity employer and strive to maintain a culture that celebrates diversity, inclusivity, and respect for all.

What's In It For You
- Flexible work-life balance supported by wellness and mental health programs
- Comprehensive rewards and recognition framework
- Inclusive team culture with gender-neutral policies
- Health coverage that includes medical, accidental, dental, OPD, and more
- Learning and development frameworks for continuous career growth

Role Purpose
To manage and drive the performance of telecalling channel partners handling collections across the assigned portfolio, ensuring operational efficiency, regulatory compliance, and achievement of key collection metrics.

Key Responsibilities

Channel Partner & Portfolio Management
- Execute and monitor the collection strategy for the assigned site.
- Manage vendor capacity planning, resource optimization, and process effectiveness.
- Drive performance through structured reviews, portfolio segmentation, and strategic dialer management.
- Oversee agent-level performance, including daily call audits, compliance checks, and productivity metrics.

Operations & Compliance
- Ensure system uptime for CRM/dialer and telecom infrastructure in coordination with internal teams.
- Conduct regular call quality reviews, sample evaluations, and agent feedback sessions.
- Track agent behavior trends (absenteeism, login issues, quality deviations) and drive remedial actions.

Risk & Legal Recovery Support
- Identify accounts for legal recovery channels such as arbitration, Lok Adalat, etc.
- Monitor field referrals based on segmentation strategy and historical performance.
- Ensure training and certification of telecalling staff per compliance guidelines.

Process Excellence & Reporting
- Maintain oversight of agency payouts, SLA compliance, and billing accuracy.
- Perform regular spot audits to ensure data security and adherence to internal controls.
- Analyze daily performance action codes and derive insights for corrective action.

Measures of Success
- Resolution rate / normalization rate / rollback rate
- KP target achievement
- Money collected / PLI penetration / tele retention rate
- NFTE productivity and training coverage
- Customer complaints volume
- Vendor SLA adherence
- Audit observations (zero non-compliance)
- MOU and process adherence

Technical Skills / Experience Required
- Strong knowledge of credit card products and collections operations
- Experience in managing large, distributed telecalling vendor teams
- Hands-on understanding of dialer strategies and contact center tools

Competencies Critical to the Role
- Stakeholder management
- Result orientation
- Process and compliance orientation
- Analytical thinking and problem solving

Qualifications
Graduate or postgraduate in any discipline

Preferred Industry Experience
Credit Card / Consumer Lending / Financial Services

Posted 3 weeks ago

Apply

11.0 - 14.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

JOB DESCRIPTION

Roles & Responsibilities
Key responsibilities of the AI Research Scientist include:
- Develop the overarching technical vision for AI systems that support both current and future business needs.
- Architect end-to-end AI applications, ensuring integration with legacy systems, enterprise data platforms, and microservices.
- Work closely with business analysts and domain experts to translate business objectives into technical requirements and AI-driven solutions and applications.
- Partner with product management to design agile project roadmaps, aligning technical strategy with market needs.
- Coordinate with data engineering teams to ensure smooth data flows, quality, and governance across data sources.
- Lead the design of reference architectures, roadmaps, and best practices for AI applications.
- Evaluate emerging technologies and methodologies, recommending proven innovations that can be integrated into the organizational strategy.
- Identify and define system components such as data ingestion pipelines, model training environments, continuous integration/continuous deployment (CI/CD) frameworks, and monitoring systems.
- Utilize containerization (Docker, Kubernetes) and cloud services to streamline the deployment and scaling of AI systems.
- Implement robust versioning, rollback, and monitoring mechanisms that ensure system stability, reliability, and performance.
- Ensure the architecture supports scalability, reliability, maintainability, and security best practices.
- Project management: oversee the planning, execution, and delivery of AI and ML applications within budget and timeline constraints, including defining project goals, allocating resources, and managing risks.
- Oversee the lifecycle of AI application development, from conceptualization and design through development, testing, deployment, and post-production optimization.
- Enforce security best practices during each phase of development, with a focus on data privacy, user security, and risk mitigation.
- Provide mentorship to engineering teams and foster a culture of continuous learning; lead technical knowledge-sharing sessions and workshops to keep teams up to date on the latest advances in generative AI and architectural best practices.

Mandatory Technical & Functional Skills
- Strong background in building agents with LangGraph, AutoGen, and CrewAI.
- Proficiency in Python, with robust knowledge of machine learning libraries and frameworks such as TensorFlow, PyTorch, and Keras.
- Proven experience with cloud computing platforms (AWS, Azure, Google Cloud Platform) for building and deploying scalable AI solutions.
- Hands-on skills with containerization (Docker) and orchestration frameworks (Kubernetes), including related DevOps tools like Jenkins and GitLab CI/CD.
- Experience using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation to automate cloud deployments.
- Proficiency in SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra) for managing structured and unstructured data.
- Expertise in designing distributed systems, RESTful APIs, GraphQL integrations, and microservices architecture.
- Knowledge of event-driven architectures and message brokers (e.g., RabbitMQ, Apache Kafka) to support robust inter-system communications.

Preferred Technical & Functional Skills
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack) to ensure system reliability and operational performance (a monitoring sketch follows this listing).
- Familiarity with cutting-edge libraries such as Hugging Face Transformers, OpenAI's API integrations, and other domain-specific tools.
- Large-scale deployment of ML projects, with a good understanding of DevOps/MLOps/LLMOps.
- Training and fine-tuning of large language models or SLMs (PaLM 2, GPT-4, LLaMA, etc.).

Key Behavioral Attributes
- Ability to mentor junior developers
- Ability to own project deliverables and contribute to risk mitigation
- Understanding of business objectives and functions to support data needs

QUALIFICATIONS
This role is for you if you have:
- Educational qualifications: Bachelor's/Master's degree in Computer Science
- Certifications in cloud technologies (AWS, Azure, GCP); TOGAF certification is good to have
- Work experience: 11 to 14 years

#KGS
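As the preferred skills above note, Prometheus dashboards presuppose instrumented serving code. A minimal sketch with the prometheus_client library; the metric names, port, and stand-in model call are illustrative:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Illustrative metric names; real ones would follow your naming conventions.
REQUESTS = Counter("inference_requests_total", "Inference requests", ["model", "status"])
LATENCY = Histogram("inference_latency_seconds", "Inference latency", ["model"])

def predict(features, model="demo-model"):
    """Placeholder inference call, instrumented with a counter and a latency histogram."""
    start = time.perf_counter()
    try:
        result = sum(features) / len(features)  # stand-in for a real model call
        REQUESTS.labels(model=model, status="ok").inc()
        return result
    except Exception:
        REQUESTS.labels(model=model, status="error").inc()
        raise
    finally:
        LATENCY.labels(model=model).observe(time.perf_counter() - start)

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://localhost:8000/metrics
    while True:
        predict([random.random() for _ in range(8)])
        time.sleep(1)
```

Labeling by model and status is the design choice that matters: it lets one dashboard break error rate and latency out per deployed model version.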

Posted 3 weeks ago

Apply

11.0 - 14.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

JOB DESCRIPTION

Roles & Responsibilities
Key responsibilities of the AI Architect include:
- Develop the overarching technical vision for AI systems that support both current and future business needs.
- Architect end-to-end AI applications, ensuring integration with legacy systems, enterprise data platforms, and microservices.
- Work closely with business analysts and domain experts to translate business objectives into technical requirements and AI-driven solutions and applications.
- Partner with product management to design agile project roadmaps, aligning technical strategy with market needs.
- Coordinate with data engineering teams to ensure smooth data flows, quality, and governance across data sources.
- Lead the design of reference architectures, roadmaps, and best practices for AI applications.
- Evaluate emerging technologies and methodologies, recommending proven innovations that can be integrated into the organizational strategy.
- Identify and define system components such as data ingestion pipelines, model training environments, continuous integration/continuous deployment (CI/CD) frameworks, and monitoring systems.
- Utilize containerization (Docker, Kubernetes) and cloud services to streamline the deployment and scaling of AI systems.
- Implement robust versioning, rollback, and monitoring mechanisms that ensure system stability, reliability, and performance.
- Ensure the architecture supports scalability, reliability, maintainability, and security best practices.
- Project management: oversee the planning, execution, and delivery of AI and ML applications within budget and timeline constraints, including defining project goals, allocating resources, and managing risks.
- Oversee the lifecycle of AI application development, from conceptualization and design through development, testing, deployment, and post-production optimization.
- Enforce security best practices during each phase of development, with a focus on data privacy, user security, and risk mitigation.
- Provide mentorship to engineering teams and foster a culture of continuous learning; lead technical knowledge-sharing sessions and workshops to keep teams up to date on the latest advances in generative AI and architectural best practices.

Mandatory Technical & Functional Skills
- Strong background in building agents with LangGraph, AutoGen, and CrewAI.
- Proficiency in Python, with robust knowledge of machine learning libraries and frameworks such as TensorFlow, PyTorch, and Keras.
- Proven experience with cloud computing platforms (AWS, Azure, Google Cloud Platform) for building and deploying scalable AI solutions.
- Hands-on skills with containerization (Docker) and orchestration frameworks (Kubernetes), including related DevOps tools like Jenkins and GitLab CI/CD.
- Experience using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation to automate cloud deployments.
- Proficiency in SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra) for managing structured and unstructured data.
- Expertise in designing distributed systems, RESTful APIs, GraphQL integrations, and microservices architecture.
- Knowledge of event-driven architectures and message brokers (e.g., RabbitMQ, Apache Kafka) to support robust inter-system communications (a sketch follows this listing).

Preferred Technical & Functional Skills
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack) to ensure system reliability and operational performance.
- Familiarity with cutting-edge libraries such as Hugging Face Transformers, OpenAI's API integrations, and other domain-specific tools.
- Large-scale deployment of ML projects, with a good understanding of DevOps/MLOps/LLMOps.
- Training and fine-tuning of large language models or SLMs (PaLM 2, GPT-4, LLaMA, etc.).

Key Behavioral Attributes
- Ability to mentor junior developers
- Ability to own project deliverables and contribute to risk mitigation
- Understanding of business objectives and functions to support data needs

QUALIFICATIONS
This role is for you if you have:
- Educational qualifications: Bachelor's/Master's degree in Computer Science
- Certifications in cloud technologies (AWS, Azure, GCP); TOGAF certification is good to have
- Work experience: 11 to 14 years

#KGS
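For the event-driven bullet above, a minimal kafka-python sketch; the broker address, topic name, and event payload are illustrative. Publishing a deployment event decouples the producer from however many downstream services subscribe:

```python
import json

from kafka import KafkaConsumer, KafkaProducer

# Hypothetical broker address and topic name.
BROKER = "localhost:9092"
TOPIC = "model-events"

producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
# Emit an event when a new model version is deployed.
producer.send(TOPIC, {"event": "model_deployed", "model": "risk-scorer", "version": "1.4.2"})
producer.flush()

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:  # blocks; each subscriber reacts independently
    print(message.value)
    break
```

The producer never knows who consumes the event; monitoring, cache invalidation, and audit services can each subscribe without any change to the publisher.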

Posted 3 weeks ago

Apply

11.0 - 14.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

JOB DESCRIPTION

Roles & Responsibilities
Key responsibilities of the AI Product Manager include:
- Develop the overarching technical vision for AI systems that support both current and future business needs.
- Architect end-to-end AI applications, ensuring integration with legacy systems, enterprise data platforms, and microservices.
- Work closely with business analysts and domain experts to translate business objectives into technical requirements and AI-driven solutions and applications.
- Partner with product management to design agile project roadmaps, aligning technical strategy with market needs.
- Coordinate with data engineering teams to ensure smooth data flows, quality, and governance across data sources.
- Lead the design of reference architectures, roadmaps, and best practices for AI applications.
- Evaluate emerging technologies and methodologies, recommending proven innovations that can be integrated into the organizational strategy.
- Identify and define system components such as data ingestion pipelines, model training environments, continuous integration/continuous deployment (CI/CD) frameworks, and monitoring systems.
- Utilize containerization (Docker, Kubernetes) and cloud services to streamline the deployment and scaling of AI systems.
- Implement robust versioning, rollback, and monitoring mechanisms that ensure system stability, reliability, and performance (a rollback sketch follows this listing).
- Ensure the architecture supports scalability, reliability, maintainability, and security best practices.
- Project management: oversee the planning, execution, and delivery of AI and ML applications within budget and timeline constraints, including defining project goals, allocating resources, and managing risks.
- Oversee the lifecycle of AI application development, from conceptualization and design through development, testing, deployment, and post-production optimization.
- Enforce security best practices during each phase of development, with a focus on data privacy, user security, and risk mitigation.
- Provide mentorship to engineering teams and foster a culture of continuous learning; lead technical knowledge-sharing sessions and workshops to keep teams up to date on the latest advances in generative AI and architectural best practices.

Mandatory Technical & Functional Skills
- Strong background in building agents with LangGraph, AutoGen, and CrewAI.
- Proficiency in Python, with robust knowledge of machine learning libraries and frameworks such as TensorFlow, PyTorch, and Keras.
- Proven experience with cloud computing platforms (AWS, Azure, Google Cloud Platform) for building and deploying scalable AI solutions.
- Hands-on skills with containerization (Docker) and orchestration frameworks (Kubernetes), including related DevOps tools like Jenkins and GitLab CI/CD.
- Experience using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation to automate cloud deployments.
- Proficiency in SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra) for managing structured and unstructured data.
- Expertise in designing distributed systems, RESTful APIs, GraphQL integrations, and microservices architecture.
- Knowledge of event-driven architectures and message brokers (e.g., RabbitMQ, Apache Kafka) to support robust inter-system communications.

Preferred Technical & Functional Skills
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack) to ensure system reliability and operational performance.
- Familiarity with cutting-edge libraries such as Hugging Face Transformers, OpenAI's API integrations, and other domain-specific tools.
- Large-scale deployment of ML projects, with a good understanding of DevOps/MLOps/LLMOps.
- Training and fine-tuning of large language models or SLMs (PaLM 2, GPT-4, LLaMA, etc.).

Key Behavioral Attributes
- Ability to mentor junior developers
- Ability to own project deliverables and contribute to risk mitigation
- Understanding of business objectives and functions to support data needs

QUALIFICATIONS
This role is for you if you have:
- Educational qualifications: Bachelor's/Master's degree in Computer Science
- Certifications in cloud technologies (AWS, Azure, GCP); TOGAF certification is good to have
- Work experience: 11 to 14 years

#KGS
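The versioning/rollback bullet above can lean on Kubernetes' built-in rollout history. A minimal sketch that shells out to kubectl; the deployment name, namespace, container name ("app"), and image tag are illustrative assumptions:

```python
import subprocess

# Hypothetical deployment name and namespace.
DEPLOYMENT = "genai-gateway"
NAMESPACE = "ai-prod"

def run(*args):
    """Run a kubectl command, returning its stdout and raising on failure."""
    return subprocess.run(
        ["kubectl", "-n", NAMESPACE, *args],
        check=True, capture_output=True, text=True,
    ).stdout

def deploy(image):
    """Roll out a new image, then roll back automatically if it never becomes ready."""
    # "app" is the container name inside the pod spec in this sketch.
    run("set", "image", f"deployment/{DEPLOYMENT}", f"app={image}")
    try:
        run("rollout", "status", f"deployment/{DEPLOYMENT}", "--timeout=120s")
        print("rollout succeeded:", image)
    except subprocess.CalledProcessError:
        run("rollout", "undo", f"deployment/{DEPLOYMENT}")  # back to previous revision
        print("rollout failed; rolled back")

deploy("registry.example.com/genai-gateway:1.4.2")  # illustrative image tag
```

Because Kubernetes keeps prior ReplicaSets, `rollout undo` restores the last working revision without any separate artifact bookkeeping in the script itself.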

Posted 3 weeks ago

Apply

11.0 - 14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

JOB DESCRIPTION

Roles & Responsibilities
Key responsibilities of the AI Architect include:
- Develop the overarching technical vision for AI systems that support both current and future business needs.
- Architect end-to-end AI applications, ensuring integration with legacy systems, enterprise data platforms, and microservices.
- Work closely with business analysts and domain experts to translate business objectives into technical requirements and AI-driven solutions and applications.
- Partner with product management to design agile project roadmaps, aligning technical strategy with market needs.
- Coordinate with data engineering teams to ensure smooth data flows, quality, and governance across data sources.
- Lead the design of reference architectures, roadmaps, and best practices for AI applications.
- Evaluate emerging technologies and methodologies, recommending proven innovations that can be integrated into the organizational strategy.
- Identify and define system components such as data ingestion pipelines, model training environments, continuous integration/continuous deployment (CI/CD) frameworks, and monitoring systems.
- Utilize containerization (Docker, Kubernetes) and cloud services to streamline the deployment and scaling of AI systems.
- Implement robust versioning, rollback, and monitoring mechanisms that ensure system stability, reliability, and performance.
- Ensure the architecture supports scalability, reliability, maintainability, and security best practices.
- Project management: oversee the planning, execution, and delivery of AI and ML applications within budget and timeline constraints, including defining project goals, allocating resources, and managing risks.
- Oversee the lifecycle of AI application development, from conceptualization and design through development, testing, deployment, and post-production optimization.
- Enforce security best practices during each phase of development, with a focus on data privacy, user security, and risk mitigation.
- Provide mentorship to engineering teams and foster a culture of continuous learning; lead technical knowledge-sharing sessions and workshops to keep teams up to date on the latest advances in generative AI and architectural best practices.

Mandatory Technical & Functional Skills
- Strong background in building agents with LangGraph, AutoGen, and CrewAI.
- Proficiency in Python, with robust knowledge of machine learning libraries and frameworks such as TensorFlow, PyTorch, and Keras.
- Proven experience with cloud computing platforms (AWS, Azure, Google Cloud Platform) for building and deploying scalable AI solutions.
- Hands-on skills with containerization (Docker) and orchestration frameworks (Kubernetes), including related DevOps tools like Jenkins and GitLab CI/CD.
- Experience using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation to automate cloud deployments.
- Proficiency in SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra) for managing structured and unstructured data (a sketch follows this listing).
- Expertise in designing distributed systems, RESTful APIs, GraphQL integrations, and microservices architecture.
- Knowledge of event-driven architectures and message brokers (e.g., RabbitMQ, Apache Kafka) to support robust inter-system communications.

Preferred Technical & Functional Skills
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack) to ensure system reliability and operational performance.
- Familiarity with cutting-edge libraries such as Hugging Face Transformers, OpenAI's API integrations, and other domain-specific tools.
- Large-scale deployment of ML projects, with a good understanding of DevOps/MLOps/LLMOps.
- Training and fine-tuning of large language models or SLMs (PaLM 2, GPT-4, LLaMA, etc.).

Key Behavioral Attributes
- Ability to mentor junior developers
- Ability to own project deliverables and contribute to risk mitigation
- Understanding of business objectives and functions to support data needs

QUALIFICATIONS
This role is for you if you have:
- Educational qualifications: Bachelor's/Master's degree in Computer Science
- Certifications in cloud technologies (AWS, Azure, GCP); TOGAF certification is good to have
- Work experience: 11 to 14 years

#KGS
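For the NoSQL bullet above, a minimal pymongo sketch of storing training-run metadata in MongoDB; the connection string, database/collection layout, and field names are illustrative:

```python
from datetime import datetime, timezone

from pymongo import ASCENDING, MongoClient

# Hypothetical connection string and collection layout.
client = MongoClient("mongodb://localhost:27017")
runs = client["mlops"]["training_runs"]

# Index the fields that dashboards query on.
runs.create_index([("model", ASCENDING), ("started_at", ASCENDING)])

runs.insert_one({
    "model": "churn-classifier",          # illustrative names and values
    "version": "2.1.0",
    "started_at": datetime.now(timezone.utc),
    "metrics": {"auc": 0.91, "f1": 0.84},
    "artifacts": {"weights": "s3://bucket/churn/2.1.0/model.pt"},
})

# Latest run for a model; schemaless documents make mixed metric sets easy to store.
latest = runs.find_one({"model": "churn-classifier"}, sort=[("started_at", -1)])
print(latest["version"], latest["metrics"])
```

The document model suits run metadata well: different experiments can log different metric sets without schema migrations, while the compound index keeps per-model queries fast.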

Posted 3 weeks ago

Apply

11.0 - 14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

JOB DESCRIPTION

Roles & responsibilities
Here are some of the key responsibilities of the AI Azure Architect:
Develop the overarching technical vision for AI systems that support both current and future business needs.
Architect end-to-end AI applications, ensuring integration with legacy systems, enterprise data platforms, and microservices.
Work closely with business analysts and domain experts to translate business objectives into technical requirements and AI-driven solutions and applications.
Partner with product management to design agile project roadmaps, aligning technical strategy with market needs.
Coordinate with data engineering teams to ensure smooth data flows, quality, and governance across data sources.
Lead the design of reference architectures, roadmaps, and best practices for AI applications.
Evaluate emerging technologies and methodologies, recommending proven innovations that can be integrated into the organizational strategy.
Identify and define system components such as data ingestion pipelines, model training environments, continuous integration/continuous deployment (CI/CD) frameworks, and monitoring systems.
Utilize containerization (Docker, Kubernetes) and cloud services to streamline the deployment and scaling of AI systems.
Implement robust versioning, rollback, and monitoring mechanisms that ensure system stability, reliability, and performance (a brief illustrative sketch follows this listing).
Ensure the architecture supports scalability, reliability, maintainability, and security best practices.
Project Management: Oversee the planning, execution, and delivery of AI and ML applications, ensuring they are completed within budget and timeline constraints. This includes defining project goals, allocating resources, and managing risks.
Oversee the lifecycle of AI application development, from conceptualization and design to development, testing, deployment, and post-production optimization.
Enforce security best practices during each phase of development, with a focus on data privacy, user security, and risk mitigation.
Provide mentorship to engineering teams and foster a culture of continuous learning.
Lead technical knowledge-sharing sessions and workshops to keep teams up to date on the latest advances in generative AI and architectural best practices.

Mandatory technical & functional skills
A strong background in building or developing agents using LangGraph, AutoGen, and CrewAI.
Proficiency in Python, with robust knowledge of machine learning libraries and frameworks such as TensorFlow, PyTorch, and Keras.
Proven experience with cloud computing platforms (AWS, Azure, Google Cloud Platform) for building and deploying scalable AI solutions.
Hands-on skills with containerization (Docker) and orchestration frameworks (Kubernetes), including related DevOps tools such as Jenkins and GitLab CI/CD.
Experience using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation to automate cloud deployments.
Proficient in SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra) to manage structured and unstructured data.
Expertise in designing distributed systems, RESTful APIs, GraphQL integrations, and microservices architecture.
Knowledge of event-driven architectures and message brokers (e.g., RabbitMQ, Apache Kafka) to support robust inter-system communications.

Preferred technical & functional skills
Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack) to ensure system reliability and operational performance.
Familiarity with cutting-edge libraries such as Hugging Face Transformers, OpenAI’s API integrations, and other domain-specific tools.
Large-scale deployment of ML projects, with a good understanding of DevOps/MLOps/LLMOps.
Training and fine-tuning of Large Language Models or SLMs (PaLM 2, GPT-4, LLaMA, etc.).

Key behavioral attributes/requirements
Ability to mentor junior developers.
Ability to own project deliverables and contribute towards risk mitigation.
Understand business objectives and functions to support data needs.

QUALIFICATIONS
This role is for you if you have the below educational qualifications:
Bachelor’s/Master’s degree in Computer Science.
Certifications in cloud technologies (AWS, Azure, GCP); TOGAF certification is good to have.
Work experience: 11 to 14 years. #KGS
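For readers unfamiliar with the "versioning, rollback, and monitoring mechanisms" this listing asks for, here is a minimal sketch of a model rollback using MLflow's model registry. The model name and version numbers are invented for illustration, and the stage-transition API shown is only one of several ways MLflow supports this.

    # Minimal sketch: archive a misbehaving model version and re-promote the
    # previous one. "credit-scorer" and the version numbers are hypothetical.
    from mlflow.tracking import MlflowClient

    client = MlflowClient()  # assumes MLFLOW_TRACKING_URI is configured

    def rollback_model(name: str, bad_version: int) -> None:
        """Archive a bad version and move the prior version back to Production."""
        client.transition_model_version_stage(name, str(bad_version), stage="Archived")
        previous = str(bad_version - 1)
        client.transition_model_version_stage(name, previous, stage="Production")
        print(f"{name}: v{bad_version} archived, v{previous} back in Production")

    rollback_model("credit-scorer", bad_version=4)

In practice the "previous version" would come from deployment records rather than simple arithmetic, but the shape of the operation is the same.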

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Delhi, India

Remote

Job Title: Lead Engineer – 4G/5G Packet Core & IMS
Location: Delhi / Gurgaon
Experience: 2+ Years
Job Type: Full-Time | On-site (Delhi/Gurgaon)

Role Summary
We are seeking skilled and versatile professionals to join our Services team for the deployment, configuration, testing, integration, and support of 4G/5G Packet Core and IMS product stacks. The role involves working in cloud-native environments and collaborating closely with product engineering and customer teams to deliver standards-compliant telecom solutions.

Key Responsibilities
1. Configuration & Deployment
• Deploy and configure 4G EPC, 5G Core (NSA/SA), and IMS components in OpenStack, Kubernetes, or OpenShift environments.
• Automate deployments using tools like Ansible, Helm, Terraform, and CI/CD pipelines.
• Ensure deployments are secure, scalable, and resilient across various environments.
2. Testing & Validation
• Develop and execute test plans for functional, performance, and interoperability testing.
• Use telecom tools like Spirent for validation and KPI tracking.
• Perform regression and acceptance testing in compliance with 3GPP standards.
3. Integration
• Integrate network elements such as MME, SGW, PGW, AMF, SMF, UPF, HSS, PCRF, and CSCF.
• Ensure seamless interworking of legacy and cloud-native network functions.
4. Release Management
• Manage version control, release cycles, rollout strategies, and rollback plans (a Helm rollback sketch follows this listing).
• Coordinate with cross-functional teams for successful releases.
• Maintain detailed release documentation and logs.
5. Technical Documentation
• Prepare and maintain:
o High-Level and Low-Level Design documents
o Method of Procedure (MoP) & SOPs
o Deployment guides, technical manuals, and release notes
• Ensure documentation is version-controlled and accessible.
6. Field Support & Troubleshooting
• Provide on-site and remote deployment support.
• Troubleshoot live network issues and conduct root cause analysis.
• Ensure service continuity through close collaboration with internal and customer teams.
7. Collaboration with Product Development
• Work with development teams to identify, reproduce, and resolve bugs.
• Provide logs, diagnostics, and test scenarios to support debugging.
• Participate in sprint planning, reviews, and feedback sessions.

Required Skills and Qualifications
Must-Have
• Bachelor’s/Master’s degree in Telecommunications, Computer Science, or a related field.
• Deep understanding of 4G/5G Packet Core and IMS architecture and protocols (GTP, SIP, Diameter, HTTP/2).
• Experience with cloud-native platforms like OpenStack, Kubernetes, and OpenShift.
• Strong scripting skills using Python, Bash, Ansible, and Helm.
• Excellent communication skills and ability to create technical documentation (Confluence, Markdown, LaTeX, MS Word).
• Strong troubleshooting and customer-facing abilities.

Unique Opportunity
This role is part of a strategic initiative to build an indigenous telecom technology stack that delivers high-performance and standards-compliant LTE/5G core network solutions. It’s a great opportunity to be part of a cutting-edge team shaping India’s telecom future.
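To make the "rollback plans" duty concrete: for Helm-managed network functions, a rollback is typically a revert to an earlier release revision. The sketch below wraps the standard `helm rollback` command in Python; the release, revision, and namespace names are illustrative, not taken from the posting.

    # Sketch of release-management automation: revert a Helm release to an
    # earlier revision. Names ("smf", "5gc") are hypothetical examples.
    import subprocess

    def helm_rollback(release: str, revision: int, namespace: str) -> None:
        # "helm rollback RELEASE [REVISION]" reverts to the given revision;
        # --wait blocks until the rolled-back resources are ready.
        subprocess.run(
            ["helm", "rollback", release, str(revision),
             "--namespace", namespace, "--wait"],
            check=True,
        )

    # e.g. revert the SMF network function to revision 3 after a failed upgrade
    helm_rollback("smf", revision=3, namespace="5gc")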

Posted 3 weeks ago

Apply

4.0 years

6 - 8 Lacs

India

On-site

We are a growing tech company focused on building scalable, secure, and cloud-native platforms. We're looking for a Senior Backend Developer with strong hands-on experience in Python, Node.js, and the ERPNext/Frappe Framework to join our high-performing team. If you’re excited about architecting clean APIs, working with microservices, and deploying code in AWS environments, this is the role for you.

Key Responsibilities:
Backend Development & API Architecture
Build and maintain backend services using Python (Django/Flask) and Node.js
Develop clean, modular, and optimized code following best practices for performance and scalability
Design, manage, and optimize MySQL databases with indexing and performance tuning
Create and maintain RESTful APIs for web and mobile applications
Integrate third-party services and internal APIs securely and efficiently
Cloud Infrastructure & DevOps
Deploy and monitor services on AWS (EC2, S3, RDS, Lambda, CloudWatch, etc.)
Set up and manage CI/CD pipelines using GitHub Actions, Jenkins, or GitLab CI
Implement logging, alerts, monitoring (Prometheus/Grafana), and rollback strategies (a health-check sketch follows this listing)
System Architecture & Design Patterns
Apply proven software design patterns, microservices architecture, and MVC principles
Perform code reviews and mentor junior team members
Troubleshoot bottlenecks and implement performance improvements
Team Collaboration & Documentation
Collaborate with frontend, QA, DevOps, and product teams
Participate in Agile ceremonies like sprint planning, daily stand-ups, and retrospectives
Maintain technical documentation for systems, flows, and APIs

Required Skills & Qualifications:
4+ years of backend development experience using Python and Node.js
Strong knowledge of ERPNext and the Frappe Framework
Deep understanding of MySQL – query optimization, schema design, migrations
Experience building secure, scalable REST APIs and authentication systems (OAuth2/JWT)
Strong grasp of AWS Cloud Services
Hands-on with CI/CD pipelines, version control (Git), and containerization (Docker is a plus)
Solid understanding of software engineering principles, debugging, and testing practices

Preferred (Good to Have):
Experience working with microservices or event-driven architectures
Familiarity with tools like Postman, Swagger, JIRA, Trello
Exposure to Docker, Kubernetes, or serverless architecture
Knowledge of Excel Macros, Power BI, or automation scripts is a bonus

Job Type: Full-time
Pay: ₹600,000.00 - ₹800,000.00 per year
Location Type: In-person
Schedule: Morning shift
Experience: Back-end development: 4 years (Required); Python: 4 years (Required)
Work Location: In person
Speak with the employer: +91 9870594683
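A rollback strategy of the kind this listing mentions usually hinges on a post-deploy health gate. Below is a minimal sketch: Frappe-based sites conventionally expose a ping endpoint, but treat the URL, host, and retry counts here as assumptions for illustration.

    # Minimal post-deploy health check; a non-zero exit lets the CI/CD
    # pipeline trigger its rollback step. URL and host are placeholders.
    import sys
    import requests

    def healthy(base_url: str, attempts: int = 5) -> bool:
        for _ in range(attempts):
            try:
                # Frappe sites typically whitelist a ping method (assumption)
                r = requests.get(f"{base_url}/api/method/ping", timeout=5)
                if r.ok:
                    return True
            except requests.RequestException:
                pass
        return False

    if not healthy("https://erp.example.com"):
        sys.exit(1)  # signal failure so the pipeline rolls back the release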

Posted 3 weeks ago

Apply

3.0 years

16 - 20 Lacs

Ghaziabad, Uttar Pradesh, India

Remote

Experience: 3.00+ years
Salary: INR 1600000-2000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud)
(*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics)
What do you need for this opportunity?
Must have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python
A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for:
Join the Team Revolutionizing Procurement Analytics at SenseCloud
Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done, we’re redefining it. At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards, just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.
About The Role
We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems: think automated research assistants, data-driven copilots, and workflow optimizers. You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems.
What you'll do:
Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers.
Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use (a toy retrieval step is sketched after this listing).
Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies.
Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders.
Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.
Must-Have Technical Skills
3–5 years of software engineering or ML experience in production environments.
Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus.
Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices.
Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization).
Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar).
Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.
Core Soft Skills
Product mindset: translate ambiguous requirements into clear deliverables and user value.
Communication: explain complex AI concepts to both engineers and executives; write crisp documentation.
Collaboration & ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others.
Bias for action: experiment quickly, measure, iterate, without sacrificing quality or security.
Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.
Nice-to-Haves
Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake).
Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns.
Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas).
Familiarity with Palantir/Foundry.
Knowledge of privacy-enhancing techniques (data anonymization, differential privacy).
Prior work on conversational UX, prompt marketplaces, or agent simulators.
Contributions to open-source AI projects or published research.
Why Join Us?
Direct impact on products used by Fortune 500 teams.
Work with cutting-edge models and shape best practices for enterprise AI agents.
Collaborative culture that values experimentation, continuous learning, and work–life balance.
Competitive salary, equity, remote-first flexibility, and professional development budget.
How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
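The retrieval step at the heart of the RAG work this listing describes can be shown in a few lines. In the toy sketch below, numpy stands in for a vector store such as Pinecone or FAISS; the documents and embeddings are fabricated for illustration.

    # Toy RAG retrieval: rank documents by cosine similarity to a query vector.
    import numpy as np

    docs = ["PO approval workflow", "supplier risk score", "invoice matching"]
    doc_vecs = np.random.default_rng(0).normal(size=(3, 4))  # fake 4-dim embeddings
    query_vec = doc_vecs[1] + 0.1  # pretend the query is close to doc 1

    def top_k(query, matrix, k=2):
        sims = matrix @ query / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query))
        return np.argsort(-sims)[:k]  # indices of the k most similar docs

    for i in top_k(query_vec, doc_vecs):
        print(docs[i])  # the retrieved passages would be fed to the LLM prompt

A production system would swap the random matrix for real embedding-model output and the brute-force scan for an approximate-nearest-neighbor index, but the ranking logic is the same.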

Posted 3 weeks ago

Apply

3.0 years

16 - 20 Lacs

Noida, Uttar Pradesh, India

Remote

Experience: 3.00+ years
Salary: INR 1600000-2000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud)
(*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics)
What do you need for this opportunity?
Must have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python
A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for:
Join the Team Revolutionizing Procurement Analytics at SenseCloud
Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done, we’re redefining it. At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards, just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.
About The Role
We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems: think automated research assistants, data-driven copilots, and workflow optimizers. You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems.
What you'll do:
Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers.
Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use.
Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies (a retry/fallback sketch follows this listing).
Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders.
Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.
Must-Have Technical Skills
3–5 years of software engineering or ML experience in production environments.
Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus.
Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices.
Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization).
Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar).
Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.
Core Soft Skills
Product mindset: translate ambiguous requirements into clear deliverables and user value.
Communication: explain complex AI concepts to both engineers and executives; write crisp documentation.
Collaboration & ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others.
Bias for action: experiment quickly, measure, iterate, without sacrificing quality or security.
Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.
Nice-to-Haves
Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake).
Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns.
Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas).
Familiarity with Palantir/Foundry.
Knowledge of privacy-enhancing techniques (data anonymization, differential privacy).
Prior work on conversational UX, prompt marketplaces, or agent simulators.
Contributions to open-source AI projects or published research.
Why Join Us?
Direct impact on products used by Fortune 500 teams.
Work with cutting-edge models and shape best practices for enterprise AI agents.
Collaborative culture that values experimentation, continuous learning, and work–life balance.
Competitive salary, equity, remote-first flexibility, and professional development budget.
How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
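The "guardrails and rollback strategies" item above often starts with something as simple as retry-with-backoff plus a safe fallback around the LLM call. The sketch below uses a placeholder call_llm function that simulates a transient failure; no real provider SDK is assumed.

    # Retry an LLM call with exponential backoff and fall back to a safe
    # default instead of surfacing an error. call_llm is a stand-in.
    import time
    import random

    def call_llm(prompt: str) -> str:
        raise TimeoutError("simulated transient failure")  # placeholder

    def robust_completion(prompt: str, retries: int = 3) -> str:
        for attempt in range(retries):
            try:
                return call_llm(prompt)
            except TimeoutError:
                time.sleep((2 ** attempt) + random.random())  # backoff + jitter
        return "Sorry, I could not answer that."  # safe fallback response

    print(robust_completion("Summarize supplier spend for Q2"))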

Posted 3 weeks ago

Apply

3.0 years

16 - 20 Lacs

Agra, Uttar Pradesh, India

Remote

Experience: 3.00+ years
Salary: INR 1600000-2000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud)
(*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics)
What do you need for this opportunity?
Must have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python
A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for:
Join the Team Revolutionizing Procurement Analytics at SenseCloud
Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done, we’re redefining it. At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards, just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.
About The Role
We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems: think automated research assistants, data-driven copilots, and workflow optimizers. You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems.
What you'll do:
Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers.
Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use.
Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies.
Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders (a latency-percentile sketch follows this listing).
Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.
Must-Have Technical Skills
3–5 years of software engineering or ML experience in production environments.
Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus.
Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices.
Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization).
Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar).
Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.
Core Soft Skills
Product mindset: translate ambiguous requirements into clear deliverables and user value.
Communication: explain complex AI concepts to both engineers and executives; write crisp documentation.
Collaboration & ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others.
Bias for action: experiment quickly, measure, iterate, without sacrificing quality or security.
Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.
Nice-to-Haves
Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake).
Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns.
Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas).
Familiarity with Palantir/Foundry.
Knowledge of privacy-enhancing techniques (data anonymization, differential privacy).
Prior work on conversational UX, prompt marketplaces, or agent simulators.
Contributions to open-source AI projects or published research.
Why Join Us?
Direct impact on products used by Fortune 500 teams.
Work with cutting-edge models and shape best practices for enterprise AI agents.
Collaborative culture that values experimentation, continuous learning, and work–life balance.
Competitive salary, equity, remote-first flexibility, and professional development budget.
How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
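Benchmarking latency, as the listing asks, usually means reporting percentiles rather than averages. A minimal harness, with a sleep standing in for the agent call, might look like this:

    # Minimal latency benchmark reporting p50/p95; agent_call is a stand-in.
    import time
    import statistics

    def agent_call():
        time.sleep(0.01)  # placeholder for real agent work

    samples = []
    for _ in range(50):
        t0 = time.perf_counter()
        agent_call()
        samples.append((time.perf_counter() - t0) * 1000)  # milliseconds

    samples.sort()
    p95 = samples[int(0.95 * len(samples)) - 1]
    print(f"p50={statistics.median(samples):.1f} ms  p95={p95:.1f} ms")

p95 matters because tail latency, not the mean, is what users of an interactive copilot actually feel.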

Posted 3 weeks ago

Apply

4.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are seeking a Senior/Lead DevOps Engineer – Databricks with strong experience in Azure Databricks to design, implement, and optimize Databricks infrastructure, CI/CD pipelines, and ML model deployment. The ideal candidate will be responsible for Databricks environment setup, networking, cluster management, access control, CI/CD automation, model deployment, asset bundle management, and monitoring. This role requires hands-on experience with DevOps best practices, infrastructure automation, and cloud-native architectures.

Required Skills & Experience
• 4 to 10 years of experience in DevOps with a strong focus on Azure Databricks.
• Hands-on experience with Azure networking, VNET integration, and firewall rules.
• Strong knowledge of Databricks cluster management, job scheduling, and optimization.
• Expertise in CI/CD pipeline development for Databricks and ML models using Azure DevOps, Terraform, or GitHub Actions.
• Experience with Databricks Asset Bundles (DAB) for packaging and deployment.
• Proficiency in RBAC, Unity Catalog, and workspace access control.
• Experience with Infrastructure as Code (IaC) tools like Terraform, ARM Templates, or Bicep.
• Strong scripting skills in Python, Bash, or PowerShell.
• Familiarity with monitoring tools (Azure Monitor, Prometheus, or Datadog).

Preferred Qualifications
• Databricks Certified Associate/Professional Administrator or equivalent certification.
• Experience with AWS or GCP Databricks in addition to Azure.
• Knowledge of Delta Live Tables (DLT), Databricks SQL, and MLflow.
• Exposure to Kubernetes (AKS, EKS, or GKE) for ML model deployment.

Roles & Responsibilities
1. Databricks Infrastructure Setup & Management
• Configure and manage Azure Databricks workspaces, networking, and security.
• Set up networking components like VNET integration, private endpoints, and firewall configurations.
• Implement scalability strategies for efficient resource utilization.
• Ensure high availability, resilience, and security of Databricks infrastructure.
2. Cluster & Capacity Management
• Manage Databricks clusters, including autoscaling, instance selection, and performance tuning.
• Optimize compute resources to minimize costs while maintaining performance.
• Implement cluster policies and governance controls.
3. User & Access Management
• Implement RBAC (Role-Based Access Control) and IAM (Identity and Access Management) for users and services.
• Manage Databricks Unity Catalog and enforce workspace-level access controls.
• Define and enforce security policies across Databricks workspaces.
4. CI/CD Automation for Databricks & ML Models
• Develop and manage CI/CD pipelines for Databricks Notebooks, Jobs, and ML models using Azure DevOps, GitHub Actions, or Jenkins.
• Automate Databricks infrastructure deployment using Terraform, ARM Templates, or Bicep.
• Implement automated testing, version control, and rollback strategies for Databricks workloads.
• Integrate Databricks Asset Bundles (DAB) for standardized and repeatable Databricks deployments.
5. Databricks Asset Bundle Management
• Implement Databricks Asset Bundles (DAB) to package, version, and deploy Databricks workflows efficiently (a CLI-driven sketch follows this listing).
• Automate workspace configuration, job definitions, and dependencies using DAB.
• Ensure traceability, rollback, and version control of deployed assets.
• Integrate DAB with CI/CD pipelines for seamless deployment.
6. ML Model Deployment & Monitoring
• Deploy ML models using Databricks MLflow, Azure Machine Learning, or Kubernetes (AKS).
• Optimize model performance and enable real-time inference.
• Implement model monitoring, drift detection, and automated retraining pipelines.
7. Monitoring, Troubleshooting & Performance Optimization
• Set up Databricks monitoring and logging using Azure Monitor, Datadog, or Prometheus.
• Analyze cluster performance metrics, audit logs, and cost insights to optimize workloads.
• Troubleshoot Databricks infrastructure, pipelines, and deployment issues.
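Because Databricks Asset Bundles are versioned source artifacts, rollback can amount to redeploying an earlier Git tag. The sketch below wraps the Databricks CLI's bundle subcommands in Python; the target name "prod" and the tags are assumptions, and flag spellings may vary by CLI version.

    # Illustrative DAB release step: pin the bundle source to a tag,
    # validate, then deploy. Rollback = redeploying the previous tag.
    import subprocess

    def bundle(cmd: str, target: str) -> None:
        subprocess.run(["databricks", "bundle", cmd, "--target", target], check=True)

    def release(git_ref: str, target: str = "prod") -> None:
        subprocess.run(["git", "checkout", git_ref], check=True)  # pin source
        bundle("validate", target)
        bundle("deploy", target)

    release("v1.4.2")        # roll forward
    # release("v1.4.1")      # rolling back is just redeploying the prior tag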

Posted 3 weeks ago

Apply

4.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are seeking a Senior/Lead DevOps Engineer – Databricks with strong experience in Azure Databricks to design, implement, and optimize Databricks infrastructure, CI/CD pipelines, and ML model deployment. The ideal candidate will be responsible for Databricks environment setup, networking, cluster management, access control, CI/CD automation, model deployment, asset bundle management, and monitoring. This role requires hands-on experience with DevOps best practices, infrastructure automation, and cloud-native architectures.

Required Skills & Experience
• 4 to 10 years of experience in DevOps with a strong focus on Azure Databricks.
• Hands-on experience with Azure networking, VNET integration, and firewall rules.
• Strong knowledge of Databricks cluster management, job scheduling, and optimization.
• Expertise in CI/CD pipeline development for Databricks and ML models using Azure DevOps, Terraform, or GitHub Actions.
• Experience with Databricks Asset Bundles (DAB) for packaging and deployment.
• Proficiency in RBAC, Unity Catalog, and workspace access control.
• Experience with Infrastructure as Code (IaC) tools like Terraform, ARM Templates, or Bicep.
• Strong scripting skills in Python, Bash, or PowerShell.
• Familiarity with monitoring tools (Azure Monitor, Prometheus, or Datadog).

Preferred Qualifications
• Databricks Certified Associate/Professional Administrator or equivalent certification.
• Experience with AWS or GCP Databricks in addition to Azure.
• Knowledge of Delta Live Tables (DLT), Databricks SQL, and MLflow.
• Exposure to Kubernetes (AKS, EKS, or GKE) for ML model deployment.

Roles & Responsibilities
1. Databricks Infrastructure Setup & Management
• Configure and manage Azure Databricks workspaces, networking, and security.
• Set up networking components like VNET integration, private endpoints, and firewall configurations.
• Implement scalability strategies for efficient resource utilization.
• Ensure high availability, resilience, and security of Databricks infrastructure.
2. Cluster & Capacity Management
• Manage Databricks clusters, including autoscaling, instance selection, and performance tuning.
• Optimize compute resources to minimize costs while maintaining performance.
• Implement cluster policies and governance controls.
3. User & Access Management
• Implement RBAC (Role-Based Access Control) and IAM (Identity and Access Management) for users and services.
• Manage Databricks Unity Catalog and enforce workspace-level access controls.
• Define and enforce security policies across Databricks workspaces.
4. CI/CD Automation for Databricks & ML Models
• Develop and manage CI/CD pipelines for Databricks Notebooks, Jobs, and ML models using Azure DevOps, GitHub Actions, or Jenkins.
• Automate Databricks infrastructure deployment using Terraform, ARM Templates, or Bicep.
• Implement automated testing, version control, and rollback strategies for Databricks workloads.
• Integrate Databricks Asset Bundles (DAB) for standardized and repeatable Databricks deployments.
5. Databricks Asset Bundle Management
• Implement Databricks Asset Bundles (DAB) to package, version, and deploy Databricks workflows efficiently.
• Automate workspace configuration, job definitions, and dependencies using DAB.
• Ensure traceability, rollback, and version control of deployed assets.
• Integrate DAB with CI/CD pipelines for seamless deployment.
6. ML Model Deployment & Monitoring
• Deploy ML models using Databricks MLflow, Azure Machine Learning, or Kubernetes (AKS).
• Optimize model performance and enable real-time inference.
• Implement model monitoring, drift detection, and automated retraining pipelines.
7. Monitoring, Troubleshooting & Performance Optimization
• Set up Databricks monitoring and logging using Azure Monitor, Datadog, or Prometheus (a REST polling sketch follows this listing).
• Analyze cluster performance metrics, audit logs, and cost insights to optimize workloads.
• Troubleshoot Databricks infrastructure, pipelines, and deployment issues.
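The monitoring side of this role often starts with polling the workspace REST API. Below is a small sketch against the clusters/list endpoint; the environment variable names and workspace URL are conventions assumed for the example.

    # Poll cluster state via the Databricks REST API (host/token assumed
    # to be provided via environment variables).
    import os
    import requests

    host = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-123.azuredatabricks.net
    token = os.environ["DATABRICKS_TOKEN"]

    resp = requests.get(
        f"{host}/api/2.0/clusters/list",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    for c in resp.json().get("clusters", []):
        print(c["cluster_name"], c["state"])  # e.g. RUNNING, TERMINATED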

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

As a trusted global transformation partner, Welocalize accelerates the global business journey by enabling brands and companies to reach, engage, and grow international audiences. Welocalize delivers multilingual content transformation services in translation, localization, and adaptation for over 250 languages with a growing network of over 400,000 in-country linguistic resources. Driving innovation in language services, Welocalize delivers high-quality training data transformation solutions for NLP-enabled machine learning by blending technology and human intelligence to collect, annotate, and evaluate all content types. Our team works across locations in North America, Europe, and Asia, serving our global clients in the markets that matter to them. www.welocalize.com

To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

Job Reference:

Role Summary
The senior AI/ML R&D Engineer is responsible for leading the design, implementation, and optimization of scalable machine learning infrastructure. This role ensures that AI/ML models are efficiently deployed, managed, and monitored in production environments while providing mentorship and technical leadership to junior engineers.

Key Responsibilities:
Architectural Leadership: Lead the design and development of scalable, secure, and efficient AI/ML platform architecture, ensuring robust and reliable infrastructure.
Automation & Deployment: Develop and implement advanced automation pipelines for model deployment, monitoring, and rollback, enhancing operational efficiency (a canary-decision sketch follows this listing).
Cross-Functional Collaboration: Collaborate with cross-functional teams, including data scientists and product managers, to define platform requirements and support seamless model integration.
Performance Optimization: Drive performance tuning, load balancing, and cost optimization strategies to ensure the platform's efficiency and scalability.
Mentorship & Leadership: Mentor junior platform engineers, providing technical guidance and fostering a culture of best practices and continuous learning.
Incident Management: Conduct post-mortems and root cause analysis for system failures and performance issues, implementing corrective actions to prevent recurrence.

Qualifications
Educational Background: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Experience: 5+ years of experience in Python & Node.js engineering, with a proven track record in leading and executing complex projects.
Technical Expertise:
· Expertise in cloud-based solutions (e.g., AWS, GCP, Azure), distributed systems, and microservices architecture.
· Proficiency in Terraform, Docker, and advanced automation tools.
· Proficiency in Python and Node.js.
· Strong understanding of machine learning frameworks (e.g., TensorFlow, PyTorch) and MLOps practices.
Problem-Solving Skills: Excellent problem-solving skills with a proactive approach to identifying and addressing technical challenges.
Leadership Skills: Strong leadership and mentoring skills, with the ability to guide and inspire engineering teams.
Communication Skills: Exceptional communication skills, with the ability to articulate technical concepts to both technical and non-technical stakeholders.
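At the core of an automated deploy/monitor/rollback pipeline like the one described above is a simple decision rule over canary metrics. A toy version, with the threshold and metrics source invented for the sketch:

    # Toy canary gate: promote only if the canary's error rate stays under
    # a threshold; otherwise roll back. Inputs are fabricated samples.
    def error_rate(window: list[int]) -> float:
        return sum(window) / len(window)

    def decide(canary_errors: list[int], threshold: float = 0.05) -> str:
        # 1 = request errored, 0 = request succeeded
        return "promote" if error_rate(canary_errors) < threshold else "rollback"

    print(decide([0, 0, 1, 0, 0, 0, 0, 0, 0, 0]))  # 10% errors -> "rollback"

Real pipelines would compare canary metrics against the stable baseline over a sliding window, but the promote-or-rollback branch is the essential step.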

Posted 3 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Greetings from Teamware Solutions. We are hiring a "Data Analytics III" for one of our clients.
Experience: 5+ yrs
Location: Bangalore
Notice Period: Immediate joiners / 15 days

JD: Duties
Core Responsibilities
1. Issue Resolution & Bug Fixing
- Power BI: Monitor dashboards and reports for data discrepancies, performance issues, and broken visuals. Troubleshoot DAX queries and data refresh failures.
- Power Apps: Address UI/UX bugs, manage version control, and resolve integration issues with SharePoint, SQL, or Power Automate.
- Power Automate: Debug failed flows, manage throttling issues, and ensure error handling and retry logic is in place.
- SQL: Investigate slow-running queries, data integrity issues, and ETL failures. Maintain audit logs and triggers judiciously (a slow-query sketch follows this listing).
- SnapLogic: Monitor pipeline health, resolve transformation errors, and ensure data flow continuity across systems like SAP and SFDC.
- UiPath (optional): Handle bot failures, maintain Orchestrator logs, and ensure bots are optimized for exception handling and performance.
2. Enhancement Implementation
- Collaborate with business users to gather enhancement requirements.
- Prototype and deploy enhancements using low-code/no-code tools (Power Platform) or scripting (SQL, SnapLogic).
- Ensure enhancements are modular, reusable, and well-documented.
Skills
3. Monitoring & Reporting
- Maintain dashboards to track ticket status, TAT, and bug/enhancement trends.
- Provide regular updates to stakeholders on issue resolution progress and enhancement rollouts.
4. Governance & Compliance
- Follow formal change management processes for all production changes.
- Maintain detailed logs and documentation for audit and rollback purposes.
- Implement role-based access controls (RBAC) in Power Apps and other tools.
5. Collaboration & Communication
- Act as the liaison between business users, developers, and platform owners.
- Participate in daily stand-ups or triage meetings to prioritize and assign tickets.
- Coordinate with cross-functional teams during critical incidents.

Interested candidates can share their resume to sujana.s@twsol.com
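One common way to chase the "slow-running queries" duty on a MySQL-family database is to inspect the live process list. The connection details below are placeholders, and mysql-connector-python is one of several suitable drivers.

    # List statements that have been running longer than 60 s.
    import mysql.connector  # assumes mysql-connector-python is installed

    conn = mysql.connector.connect(host="db.example.com", user="ops", password="...")
    cur = conn.cursor()
    cur.execute(
        "SELECT id, user, time, info FROM information_schema.processlist "
        "WHERE command <> 'Sleep' AND time > 60 ORDER BY time DESC"
    )
    for pid, user, secs, query in cur.fetchall():
        print(f"[{secs}s] {user} #{pid}: {query[:80]}")  # truncate long SQL
    cur.close()
    conn.close()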

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are a growing tech company focused on building scalable, secure, and cloud-native platforms. We're looking for a Senior Backend Developer with strong hands-on experience in Python, Node.js, and the ERPNext/Frappe Framework to join our high-performing team. If you’re excited about architecting clean APIs, working with microservices, and deploying code in AWS environments, this is the role for you.

Key Responsibilities:
Backend Development & API Architecture
Build and maintain backend services using Python (Django/Flask) and Node.js
Develop clean, modular, and optimized code following best practices for performance and scalability
Design, manage, and optimize MySQL databases with indexing and performance tuning (an EXPLAIN-based sketch follows this listing)
Create and maintain RESTful APIs for web and mobile applications
Integrate third-party services and internal APIs securely and efficiently
Cloud Infrastructure & DevOps
Deploy and monitor services on AWS (EC2, S3, RDS, Lambda, CloudWatch, etc.)
Set up and manage CI/CD pipelines using GitHub Actions, Jenkins, or GitLab CI
Implement logging, alerts, monitoring (Prometheus/Grafana), and rollback strategies
System Architecture & Design Patterns
Apply proven software design patterns, microservices architecture, and MVC principles
Perform code reviews and mentor junior team members
Troubleshoot bottlenecks and implement performance improvements
Team Collaboration & Documentation
Collaborate with frontend, QA, DevOps, and product teams
Participate in Agile ceremonies like sprint planning, daily stand-ups, and retrospectives
Maintain technical documentation for systems, flows, and APIs

Required Skills & Qualifications:
4+ years of backend development experience using Python and Node.js
Strong knowledge of ERPNext and the Frappe Framework
Deep understanding of MySQL – query optimization, schema design, migrations
Experience building secure, scalable REST APIs and authentication systems (OAuth2/JWT)
Strong grasp of AWS Cloud Services
Hands-on with CI/CD pipelines, version control (Git), and containerization (Docker is a plus)
Solid understanding of software engineering principles, debugging, and testing practices

Preferred (Good to Have):
Experience working with microservices or event-driven architectures
Familiarity with tools like Postman, Swagger, JIRA, Trello
Exposure to Docker, Kubernetes, or serverless architecture
Knowledge of Excel Macros, Power BI, or automation scripts is a bonus
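The MySQL indexing and tuning work mentioned above usually follows an EXPLAIN-before-and-after loop. In this sketch the table, column, and connection details are illustrative.

    # Index-tuning workflow: EXPLAIN a hot query, add an index, EXPLAIN again.
    import mysql.connector

    conn = mysql.connector.connect(
        host="localhost", user="dev", password="...", database="app"
    )
    cur = conn.cursor()

    cur.execute("EXPLAIN SELECT * FROM orders WHERE customer_id = 42")
    print(cur.fetchall())   # look for type=ALL, i.e. a full table scan

    cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

    cur.execute("EXPLAIN SELECT * FROM orders WHERE customer_id = 42")
    print(cur.fetchall())   # should now show type=ref using the new index
    cur.close()
    conn.close()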

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

As a Fullstack SDE - II at NxtWave, you
Build applications at a scale and see them released quickly to the NxtWave learners (within weeks)
Get to take ownership of the features you build and work closely with the product team
Work in a great culture that continuously empowers you to grow in your career
Enjoy freedom to experiment & learn from mistakes (Fail Fast, Learn Faster)
NxtWave is one of the fastest growing edtech startups. Get first-hand experience in scaling the features you build as the company grows rapidly
Build in a world-class developer environment by applying clean coding principles, code architecture, etc.

Responsibilities
Lead design and delivery of complex end-to-end features across frontend, backend, and data layers.
Make strategic architectural decisions on frameworks, datastores, and performance patterns.
Review and approve pull requests, enforcing clean-code guidelines, SOLID principles, and design patterns.
Build and maintain shared UI component libraries and backend service frameworks for team reuse.
Identify and eliminate performance bottlenecks in both browser rendering and server throughput.
Instrument services with metrics and logging, driving SLIs, SLAs, and observability.
Define and enforce comprehensive testing strategies: unit, integration, and end-to-end.
Own CI/CD pipelines, automating builds, deployments, and rollback procedures.
Ensure OWASP Top-10 mitigations, WCAG accessibility, and SEO best practices.
Partner with Product, UX, and Ops to translate business objectives into technical roadmaps.
Facilitate sprint planning, estimation, and retrospectives for predictable deliveries.
Mentor and guide SDE-1s and interns; participate in hiring.

Qualifications & Skills
3–5 years building production Full stack applications end-to-end with measurable impact.
Proven leadership in Agile/Scrum environments with a passion for continuous learning.
Deep expertise in React (or Angular/Vue) with TypeScript and modern CSS methodologies.
Proficient in Node.js (Express/NestJS) or Python (Django/Flask/FastAPI) or Java (Spring Boot).
Expert in designing RESTful and GraphQL APIs and scalable database schemas.
Knowledge of MySQL/PostgreSQL indexing, NoSQL (ElasticSearch/DynamoDB), and caching (Redis; a cache-aside sketch follows this listing).
Knowledge of Containerization (Docker) and commonly used AWS services such as Lambda, EC2, S3, API Gateway, etc.
Skilled in unit/integration (Jest, pytest) and E2E testing (Cypress, Playwright).
Frontend profiling (Lighthouse) and backend tracing for performance tuning.
Secure coding: OAuth2/JWT, XSS/CSRF protection, and familiarity with compliance regimes.
Strong communicator able to convey technical trade-offs to non-technical stakeholders.
Experience in reviewing pull requests and providing constructive feedback to the team.

Qualities we'd love to find in you:
The attitude to always strive for the best outcomes and an enthusiasm to deliver high quality software
Strong collaboration abilities and a flexible & friendly approach to working with teams
Strong determination with a constant eye on solutions
Creative ideas with problem solving mind-set
Be open to receiving objective criticism and improving upon it
Eagerness to learn and zeal to grow
Strong communication skills is a huge plus

Work Location: Hyderabad

About NxtWave
NxtWave is one of India’s fastest-growing ed-tech startups, revolutionizing the 21st-century job market. NxtWave is transforming youth into highly skilled tech professionals through its CCBP 4.0 programs, regardless of their educational background.
NxtWave is founded by Rahul Attuluri (Ex Amazon, IIIT Hyderabad), Sashank Reddy (IIT Bombay) and Anupam Pedarla (IIT Kharagpur). Supported by Orios Ventures, Better Capital, and Marquee Angels, NxtWave raised $33 million in 2023 from Greater Pacific Capital.
As an official partner for NSDC (under the Ministry of Skill Development & Entrepreneurship, Govt. of India) and recognized by NASSCOM, NxtWave has earned a reputation for excellence.
Some of its prestigious recognitions include:
Technology Pioneer 2024 by the World Economic Forum, one of only 100 startups chosen globally
‘Startup Spotlight Award of the Year’ by T-Hub in 2023
‘Best Tech Skilling EdTech Startup of the Year 2022’ by Times Business Awards
‘The Greatest Brand in Education’ in a research-based listing by URS Media
NxtWave Founders Anupam Pedarla and Sashank Gujjula were honoured in the 2024 Forbes India 30 Under 30 for their contributions to tech education
NxtWave breaks learning barriers by offering vernacular content for better comprehension and retention. NxtWave now has paid subscribers from 650+ districts across India. Its learners are hired by over 2,000 companies including Amazon, Accenture, IBM, Bank of America, TCS, Deloitte and more.
Know more about NxtWave: https://www.ccbp.in
Read more about us in the news – Economic Times | CNBC | YourStory | VCCircle
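The Redis caching requirement above most often means the cache-aside pattern: read from the cache, fall back to the database, then populate the cache with a TTL. The user-lookup function, key format, and TTL below are illustrative, using the redis-py client.

    # Minimal cache-aside pattern with Redis (redis-py assumed installed).
    import json
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def fetch_user(user_id: int) -> dict:
        return {"id": user_id, "name": "demo"}  # stand-in for a real DB query

    def get_user(user_id: int) -> dict:
        key = f"user:{user_id}"
        cached = r.get(key)
        if cached:
            return json.loads(cached)           # cache hit
        user = fetch_user(user_id)              # cache miss: hit the DB
        r.setex(key, 300, json.dumps(user))     # cache for 5 minutes
        return user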

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

As a Fullstack SDE - II at NxtWave, you
Build applications at a scale and see them released quickly to the NxtWave learners (within weeks)
Get to take ownership of the features you build and work closely with the product team
Work in a great culture that continuously empowers you to grow in your career
Enjoy freedom to experiment & learn from mistakes (Fail Fast, Learn Faster)
NxtWave is one of the fastest growing edtech startups. Get first-hand experience in scaling the features you build as the company grows rapidly
Build in a world-class developer environment by applying clean coding principles, code architecture, etc.

Responsibilities
Lead design and delivery of complex end-to-end features across frontend, backend, and data layers.
Make strategic architectural decisions on frameworks, datastores, and performance patterns.
Review and approve pull requests, enforcing clean-code guidelines, SOLID principles, and design patterns.
Build and maintain shared UI component libraries and backend service frameworks for team reuse.
Identify and eliminate performance bottlenecks in both browser rendering and server throughput.
Instrument services with metrics and logging, driving SLIs, SLAs, and observability.
Define and enforce comprehensive testing strategies: unit, integration, and end-to-end.
Own CI/CD pipelines, automating builds, deployments, and rollback procedures.
Ensure OWASP Top-10 mitigations, WCAG accessibility, and SEO best practices.
Partner with Product, UX, and Ops to translate business objectives into technical roadmaps.
Facilitate sprint planning, estimation, and retrospectives for predictable deliveries.
Mentor and guide SDE-1s and interns; participate in hiring.

Qualifications & Skills
3–5 years building production Full stack applications end-to-end with measurable impact.
Proven leadership in Agile/Scrum environments with a passion for continuous learning.
Deep expertise in React (or Angular/Vue) with TypeScript and modern CSS methodologies.
Proficient in Node.js (Express/NestJS) or Python (Django/Flask/FastAPI) or Java (Spring Boot).
Expert in designing RESTful and GraphQL APIs and scalable database schemas.
Knowledge of MySQL/PostgreSQL indexing, NoSQL (ElasticSearch/DynamoDB), and caching (Redis).
Knowledge of Containerization (Docker) and commonly used AWS services such as Lambda, EC2, S3, API Gateway, etc.
Skilled in unit/integration (Jest, pytest) and E2E testing (Cypress, Playwright).
Frontend profiling (Lighthouse) and backend tracing for performance tuning.
Secure coding: OAuth2/JWT, XSS/CSRF protection, and familiarity with compliance regimes (a PyJWT sketch follows this listing).
Strong communicator able to convey technical trade-offs to non-technical stakeholders.
Experience in reviewing pull requests and providing constructive feedback to the team.

Qualities we'd love to find in you:
The attitude to always strive for the best outcomes and an enthusiasm to deliver high quality software
Strong collaboration abilities and a flexible & friendly approach to working with teams
Strong determination with a constant eye on solutions
Creative ideas with problem solving mind-set
Be open to receiving objective criticism and improving upon it
Eagerness to learn and zeal to grow
Strong communication skills is a huge plus

Work Location: Hyderabad

About NxtWave
NxtWave is one of India’s fastest-growing ed-tech startups, revolutionizing the 21st-century job market. NxtWave is transforming youth into highly skilled tech professionals through its CCBP 4.0 programs, regardless of their educational background.
NxtWave is founded by Rahul Attuluri (Ex Amazon, IIIT Hyderabad), Sashank Reddy (IIT Bombay) and Anupam Pedarla (IIT Kharagpur). Supported by Orios Ventures, Better Capital, and Marquee Angels, NxtWave raised $33 million in 2023 from Greater Pacific Capital.
As an official partner for NSDC (under the Ministry of Skill Development & Entrepreneurship, Govt. of India) and recognized by NASSCOM, NxtWave has earned a reputation for excellence.
Some of its prestigious recognitions include:
Technology Pioneer 2024 by the World Economic Forum, one of only 100 startups chosen globally
‘Startup Spotlight Award of the Year’ by T-Hub in 2023
‘Best Tech Skilling EdTech Startup of the Year 2022’ by Times Business Awards
‘The Greatest Brand in Education’ in a research-based listing by URS Media
NxtWave Founders Anupam Pedarla and Sashank Gujjula were honoured in the 2024 Forbes India 30 Under 30 for their contributions to tech education
NxtWave breaks learning barriers by offering vernacular content for better comprehension and retention. NxtWave now has paid subscribers from 650+ districts across India. Its learners are hired by over 2,000 companies including Amazon, Accenture, IBM, Bank of America, TCS, Deloitte and more.
Know more about NxtWave: https://www.ccbp.in
Read more about us in the news – Economic Times | CNBC | YourStory | VCCircle
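For the OAuth2/JWT requirement, the minimal building block is issuing and verifying a short-lived signed token. The sketch below uses PyJWT; the secret, claims, and lifetime are placeholders.

    # Issue and verify a short-lived HS256 access token with PyJWT.
    import datetime
    import jwt  # PyJWT

    SECRET = "change-me"  # placeholder; use a managed secret in production

    def issue_token(user_id: str) -> str:
        payload = {
            "sub": user_id,
            "exp": datetime.datetime.now(datetime.timezone.utc)
                   + datetime.timedelta(minutes=15),
        }
        return jwt.encode(payload, SECRET, algorithm="HS256")

    def verify_token(token: str) -> str:
        # raises jwt.ExpiredSignatureError / InvalidTokenError on bad tokens
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        return claims["sub"]

    print(verify_token(issue_token("learner-123")))  # -> learner-123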

Posted 3 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Essential Functions And Responsibilities
Monitor the health of the DB servers through both automated and manual processes
Recommend and implement solutions for performance monitoring and tuning.
Analyze problems, anticipate future problem areas, and implement solutions.
Perform environment setup and configuration, proactive monitoring, and maintenance.
Support the development of long- and short-term requirements for database administration and design.
Participate in the change control process for all planned application and technical activities.
Work with report writers to provide data needed for reports.
Take on full roadmap items, work with other functional teams, and be able to deliver high quality results on time.
Investigate and find the root cause for software problems reported by stakeholders
Direct organization of requirements and data into a usable database schema by directing development of ad hoc queries, scripts, and updates to existing queries.
May perform database administration and maintenance, including database installation and configuration, backups, upgrades, and patching.
Review SQL code written by application developers to ensure compliance with coding standards and best practices as well as maximum performance.
Evaluate performance of stored procedures, find time/resource-consuming areas, and give inputs to application developers to fix them
Create deployment and rollback scripts for all database objects, manually or auto-generated (a paired-script sketch follows this listing).
Analyze access patterns and propose the best combination of indexes, constraints, foreign keys, and queries.
Proactively identify technical opportunities and enhancements while addressing major incidents in a timely manner
Manage and ensure the integrity, security, and retention of data
Administer and maintain end user accounts, permissions, and access rights
Participate in an on-call rotation providing 24-hour, 7-day support, and off-hours maintenance windows

Qualifications
Education and background:
Bachelor's degree in computer science or a related discipline is required; experience may substitute for the education requirement
3-5 years' experience with relational databases (e.g., MariaDB/MySQL) and NoSQL required
3-5 years' experience with automation utilizing shell scripting (Shell, Perl, Python, etc.) required
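The "deployment and rollback scripts" duty means every forward DDL change ships with its undo. A toy generator, with the change ID, file names, and index chosen purely as examples (the DROP INDEX syntax shown is the MySQL/MariaDB form):

    # Emit a paired forward/undo script for a DDL change.
    from pathlib import Path

    change_id = "2024_07_add_email_index"  # hypothetical change identifier

    deploy = "CREATE INDEX idx_customers_email ON customers (email);\n"
    rollback = "DROP INDEX idx_customers_email ON customers;\n"

    Path(f"deploy_{change_id}.sql").write_text(deploy)
    Path(f"rollback_{change_id}.sql").write_text(rollback)
    print(f"wrote deploy/rollback pair for {change_id}")

Keeping the pair in version control gives the change-control process a tested, reviewable undo path before anything touches production.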

Posted 3 weeks ago

Apply

0.0 - 4.0 years

6 - 8 Lacs

Noida Sector 62, Noida, Uttar Pradesh

On-site

We are a growing tech company focused on building scalable, secure, and cloud-native platforms. We're looking for a Senior Backend Developer with strong hands-on experience in Python, Node.js, and the ERPNext/Frappe Framework to join our high-performing team. If you're excited about architecting clean APIs, working with microservices, and deploying code in AWS environments, this is the role for you.

Key Responsibilities:

Backend Development & API Architecture
Build and maintain backend services using Python (Django/Flask) and Node.js
Develop clean, modular, and optimized code following best practices for performance and scalability
Design, manage, and optimize MySQL databases with indexing and performance tuning
Create and maintain RESTful APIs for web and mobile applications
Integrate third-party services and internal APIs securely and efficiently

Cloud Infrastructure & DevOps
Deploy and monitor services on AWS (EC2, S3, RDS, Lambda, CloudWatch, etc.)
Set up and manage CI/CD pipelines using GitHub Actions, Jenkins, or GitLab CI
Implement logging, alerts, monitoring (Prometheus/Grafana), and rollback strategies

System Architecture & Design Patterns
Apply proven software design patterns, microservices architecture, and MVC principles
Perform code reviews and mentor junior team members
Troubleshoot bottlenecks and implement performance improvements

Team Collaboration & Documentation
Collaborate with frontend, QA, DevOps, and product teams
Participate in Agile ceremonies such as sprint planning, daily stand-ups, and retrospectives
Maintain technical documentation for systems, flows, and APIs

Required Skills & Qualifications:

4+ years of backend development experience using Python and Node.js
Strong knowledge of ERPNext and the Frappe Framework
Deep understanding of MySQL: query optimization, schema design, migrations
Experience building secure, scalable REST APIs and authentication systems (OAuth2/JWT)
Strong grasp of AWS cloud services
Hands-on with CI/CD pipelines, version control (Git), and containerization (Docker is a plus)
Solid understanding of software engineering principles, debugging, and testing practices

Preferred (Good to Have):

Experience working with microservices or event-driven architectures
Familiarity with tools like Postman, Swagger, JIRA, Trello
Exposure to Docker, Kubernetes, or serverless architecture
Knowledge of Excel Macros, Power BI, or automation scripts is a bonus

Job Type: Full-time
Pay: ₹600,000.00 - ₹800,000.00 per year
Location Type: In-person
Schedule: Morning shift
Experience: Back-end development: 4 years (Required); Python: 4 years (Required)
Work Location: In person
Speak with the employer: +91 9870594683
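As a rough illustration of the "clean, modular" REST API work this posting emphasizes, the sketch below uses Flask with an in-memory dict standing in for a MySQL-backed store; the route names and payload fields are hypothetical.

```python
# Minimal Flask REST sketch; endpoints and the in-memory store are
# illustrative stand-ins, not part of any actual codebase.
from flask import Flask, jsonify, request

app = Flask(__name__)
ORDERS = {}   # stand-in for a MySQL-backed table
NEXT_ID = 1

@app.post("/api/orders")
def create_order():
    global NEXT_ID
    payload = request.get_json(force=True)
    order = {"id": NEXT_ID, "item": payload.get("item"), "qty": payload.get("qty", 1)}
    ORDERS[NEXT_ID] = order
    NEXT_ID += 1
    return jsonify(order), 201

@app.get("/api/orders/<int:order_id>")
def get_order(order_id):
    order = ORDERS.get(order_id)
    return (jsonify(order), 200) if order else (jsonify(error="not found"), 404)

if __name__ == "__main__":
    app.run(debug=True)
```

In a production service the in-memory dict would be replaced by a proper data layer, and authentication (e.g., JWT validation, as the posting mentions) would wrap the routes.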

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Delhi, India

On-site

Position Summary:
We are looking for a Senior Database Developer with strong experience migrating IBM DB2 databases to AWS RDS Aurora PostgreSQL. If you're passionate about cloud modernization and database engineering, and you're ready to lead high-impact migration projects, we want to hear from you!

Key Responsibilities:

Lead the end-to-end migration of on-premise DB2 databases to AWS RDS Aurora PostgreSQL
Design, implement, and optimize migration strategies and database schemas
Collaborate with cross-functional teams (cloud architects, developers, DevOps)
Build and manage automation scripts/tools for data transformation and validation
Assess current database structures and align them with AWS and PostgreSQL best practices
Monitor and optimize post-migration performance
Troubleshoot migration issues and ensure data integrity
Maintain security, compliance, and governance throughout the project lifecycle
Produce comprehensive documentation (migration plans, data maps, rollback strategies)
Mentor junior database engineers

Required Qualifications:

Bachelor's or Master's degree in Computer Science or a related field
7+ years of experience in database development/administration
Deep knowledge of IBM DB2 and its architecture
Proven hands-on experience migrating DB2 to AWS RDS Aurora PostgreSQL
Strong PostgreSQL skills: stored procedures, indexing, performance tuning
Proficiency in ETL tools, scripting (Python, Shell), and automation
Hands-on with AWS DMS, Schema Conversion Tool, RDS, and Aurora
Strong understanding of data modeling and optimization techniques
Excellent analytical and communication skills

Preferred Qualifications:

AWS certification (e.g., AWS Certified Database – Specialty or Solutions Architect)
Experience with CI/CD and Infrastructure-as-Code (Terraform, CloudFormation)
Familiarity with DevOps tools (Git, Jenkins, Ansible)

Why Join Us?

Work with a global leader in cloud and software solutions
Be part of cutting-edge migration projects across industries
Join a culture that celebrates collaboration, innovation, and growth
Competitive compensation and career development opportunities

Open to candidates who are willing to relocate to Malaysia.
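One common validation step in a DB2-to-Aurora migration like the one this role leads is reconciling row counts between source and target. The sketch below assumes the ibm_db and psycopg2 drivers; the DSNs and table names are placeholders.

```python
# Post-migration row-count reconciliation sketch; DSNs and the table list
# are hypothetical placeholders, not details from the posting.
import ibm_db      # IBM DB2 driver
import psycopg2    # PostgreSQL driver

TABLES = ["customers", "accounts", "transactions"]  # hypothetical table names

def db2_count(conn, table):
    stmt = ibm_db.exec_immediate(conn, f"SELECT COUNT(*) FROM {table}")
    return int(ibm_db.fetch_tuple(stmt)[0])

def pg_count(conn, table):
    with conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]

def validate(db2_dsn, pg_dsn):
    db2 = ibm_db.connect(db2_dsn, "", "")
    pg = psycopg2.connect(pg_dsn)
    try:
        for table in TABLES:
            src, tgt = db2_count(db2, table), pg_count(pg, table)
            print(f"{table}: db2={src} pg={tgt} {'OK' if src == tgt else 'MISMATCH'}")
    finally:
        ibm_db.close(db2)
        pg.close()
```

Real migrations usually go further (checksums, sampled row comparisons alongside AWS DMS validation), but a count reconciliation like this is a cheap first gate before cutover.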

Posted 3 weeks ago

Apply