5.0 - 8.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Position: Java Developer (Contractual Role)
Location: Gurugram, Haryana
Contact: nikhil.vats@multiversetech.com
Experience: 5-8 Years

Key Responsibilities:
- Design, develop, and maintain scalable and secure Java applications using Spring Boot and related technologies.
- Develop RESTful APIs and integrate them with frontend and third-party services.
- Work on microservices architecture and participate in service orchestration and containerization using Docker.
- Perform code reviews, troubleshoot production issues, and contribute to performance tuning.
- Collaborate with cross-functional teams, including QA, DevOps, and Business Analysts, to deliver quality software on time.
- Write and maintain technical documentation and unit test cases.
- Contribute to the continuous improvement of development processes and tools.

Required Skills:
- Strong expertise in Core Java, OOP, and multithreading
- Hands-on experience with Spring Boot, Spring MVC, Spring Security, and Hibernate/JPA
- Good knowledge of RESTful web services and microservices
- Experience with SQL and RDBMSs such as MySQL, PostgreSQL, or Oracle
- Familiarity with Git, Maven/Gradle, and CI/CD tools like Jenkins
- Knowledge of Docker and container-based deployments
- Exposure to cloud platforms (AWS, Azure, or GCP) is a plus
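Integrating RESTful APIs with third-party services, as this role requires, usually means handling transient failures defensively. A minimal sketch (in Python rather than the role's Java, and with hypothetical names; the pattern itself is language-agnostic) of jittered exponential backoff around an API call:

```python
import time
import random

def call_with_backoff(fn, max_attempts=5, base_delay=0.5, max_delay=8.0,
                      retryable=(TimeoutError, ConnectionError)):
    """Invoke fn(), retrying transient failures with jittered exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts:
                raise  # exhausted retries: surface the last error
            # Sleep base * 2^(attempt-1), capped, with jitter to avoid
            # synchronized retry storms against the downstream service.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay * random.uniform(0.5, 1.0))
```

The same structure maps directly onto Spring Retry or a Resilience4j retry policy on the Java side.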
Posted 4 days ago
4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
About Media.net: Media.net is a leading global ad tech company focused on creating the most transparent and efficient path for advertiser budgets to become publisher revenue. Our proprietary contextual technology is at the forefront of enhancing programmatic buying, the latest industry standard in ad buying for digital platforms. The Media.net platform powers major global publishers and ad-tech businesses at scale across ad formats including display, video, mobile, native, and search. Media.net's U.S. HQ is in New York, and the global HQ is in Dubai. With office locations and consultant partners across the world, Media.net takes pride in the value it adds for its 50+ demand and 21K+ publisher partners, in terms of both products and services.

Responsibilities (What You'll Do)
- Infrastructure Management: Oversee and maintain the infrastructure that supports the ad exchange applications, including load balancers, data stores, CI/CD pipelines, and monitoring stacks. Continuously improve infrastructure resilience, scalability, and efficiency to meet the demands of massive request volume and stringent latency requirements. Develop policies and procedures that improve overall platform stability, and participate in a shared on-call schedule.
- Collaboration with Developers: Work closely with developers to establish and uphold quality and performance benchmarks, ensuring that applications meet the necessary criteria before they are deployed to production. Participate in design reviews and provide feedback on infrastructure-related aspects to improve system performance and reliability.
- Building Tools for Infra Management: Develop tools to simplify and enhance infrastructure management, automate processes, and improve operational efficiency. These tools may address areas such as monitoring, alerting, deployment automation, and failure detection and recovery, which are critical to minimizing latency and maintaining uptime.
- Performance Optimization: Focus on reducing latency and maximizing efficiency across all components, from request handling in load balancers to database optimization. Implement best practices and tools for performance monitoring, including real-time analysis and response mechanisms.

Who Should Apply
- B.Tech/M.Tech or equivalent in Computer Science, Information Technology, or a related field.
- 2-4 years of experience managing services in large-scale distributed systems.
- Strong understanding of networking concepts (e.g., TCP/IP, routing, SDN) and modern software architectures.
- Proficiency in programming and scripting languages such as Python, Go, or Ruby, with a focus on automation.
- Experience with container orchestration tools like Kubernetes and cloud/virtualization platforms (preferably GCP).
- Ability to independently own problem statements, manage priorities, and drive solutions.

Preferred Skills & Tools Expertise:
- Infrastructure as Code: Experience with Terraform; configuration management tools like Nix or Ansible.
- Monitoring and Logging Tools: Expertise with Prometheus, Grafana, or the ELK stack.
- OLAP Databases: ClickHouse and Apache Druid.
- CI/CD Pipelines: Hands-on experience with Jenkins or ArgoCD.
- Databases: Proficiency in MySQL (relational) or Redis (NoSQL).
- Load Balancers / Servers: Familiarity with HAProxy or Nginx.
- Strong knowledge of operating systems and networking fundamentals.
- Experience with version control systems such as Git.
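Latency-focused monitoring of the kind this role describes typically reduces to percentile math over request samples. A minimal sketch (the names and the 150 ms threshold are illustrative assumptions, not Media.net's actual tooling) of a nearest-rank percentile and an SLO breach check:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample such that at least
    p percent of all samples are <= it."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[k]

def breaches_slo(latencies_ms, p=99, threshold_ms=150):
    """True if the p-th percentile latency exceeds the SLO threshold."""
    return percentile(latencies_ms, p) > threshold_ms
```

In practice a system like Prometheus computes these percentiles from histogram buckets rather than raw samples, but the alerting decision is the same comparison.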
Posted 4 days ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary: We are looking for a highly skilled Senior .NET Developer with extensive experience in backend development using .NET Core and proficiency in frontend technologies such as React.js or Angular. The ideal candidate should have a strong understanding of SQL databases and exposure to cloud platforms such as AWS or Azure. This role offers an exciting opportunity to work on scalable, high-performing applications in a dynamic and collaborative environment.

Key Responsibilities:
Backend Development:
• Design, develop, and maintain backend applications using .NET Core.
• Implement robust APIs and microservices architectures to support scalable solutions.
• Optimize application performance, security, and reliability.
Frontend Development:
• Work with React.js (preferred) or Angular to develop responsive, user-friendly interfaces.
• Collaborate with UX/UI designers to ensure seamless user experiences.
Database Management:
• Design and maintain efficient database schemas using SQL (any SQL database).
• Write optimized queries and ensure data integrity and security.
Cloud & DevOps:
• Utilize AWS or Azure cloud services for deployment, monitoring, and scalability.
• Work with containerization tools like Docker and orchestration tools like Kubernetes (if applicable).
Collaboration & Agile Development:
• Work closely with cross-functional teams, including product managers, designers, and other developers.
• Follow Agile/Scrum methodologies for project management and timely delivery.
• Participate in code reviews and mentor junior developers.

Required Skills & Qualifications:
• 7+ years of experience in software development with a focus on .NET Core.
• Hands-on experience with React.js or Angular for frontend development.
• Strong knowledge of SQL databases (MySQL, PostgreSQL, SQL Server, etc.).
• Experience with AWS or Azure cloud environments.
• Solid understanding of microservices architecture, RESTful APIs, and system design.
• Experience with DevOps practices and CI/CD pipelines is a plus.
• Excellent problem-solving skills and the ability to work in a fast-paced environment.
• Strong communication and teamwork skills.
Posted 4 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job title: Senior Analyst - CRM Country Support
Job location: Hyderabad

About The Job
Strategic context: Sanofi currently has the best and most robust pipeline of R&D, and the consequent new launches, in our history. As a new phase of the Play-To-Win strategy, funding this pipeline and these new launches is key to materializing the miracles of science that improve people's lives. Thus, as we enter the next phase, modernization of Sanofi is required, as per the recent announcements on DRIVE, and in this respect we are in the beginning stages of organizing the Go-to-Market Capabilities (GTMC) team at the global level. The GTMC organization will help us drive best-in-class capabilities across the board and bring value and excellence to our commercial operations. This move is a key part of the aimed modernization of Sanofi and will allow us to focus on our priorities across our products, markets, and pipeline by reallocating resources, realizing the efficiencies of removing the silos that exist between our business units, avoiding duplication and overlap of resources, standardizing our processes and tools, operating with a One Sanofi approach to accelerate the development of our key capabilities, and fostering an entrepreneurial spirit by speeding up decision making.

The Customer Facing CRM & Platforms team aims to:
- Centralize Go-to-Market excellence and operational tasks across Global Business Units (GBUs),
- Standardize best-in-class capabilities with strengthened global support while verticalizing reporting within GTMC from local to global,
- Define clear ways of working and bring clarity on interfaces with GBUs, Digital, and executional support on commercial operations from Sanofi hubs to optimize process excellence and efficiency.

Main Responsibilities
- Create and maintain surveys/coaching forms and dynamic attributes, including data loads and ongoing maintenance.
- Set up and maintain Global Core Ratings: create templates, load them into OneCRM, cross-check, and troubleshoot any issues.
- Create platform/system alerts for end users, ensuring timely notification of start and end periods.
- Set up and load the TOT (Time Off Territory) template for end users.
- Manage and handle troubleshooting on behalf of end users for country-specific needs.
- Create Service Requests to AIMS and check execution of the work done by AIMS.
- Deploy and manage both standard and new modules, securing country readiness.
- Data stewardship: raise tickets and reverify data after correction (OneCRM/OneCI).
- Provide automatic translation releases, training materials, and fields in the system.
- Execute country-specific test scripts for UAT (User Acceptance Testing).
- Veeva Align OCCP, including the feedback module, and Veeva Align territory administration: ensure on-time, continuous, seamless OCCP (OmniChannel Call Plan) orchestration and deployment, including feedback and territory administration in the Veeva Align modules, for all GBUs (GenMed, Vaccines, and Specialty Care). Support One CRM countries.
- Veeva Align, including OCCP feedback: tasks such as preparation of file uploads, tagging, and reporting of all activities related to Veeva Align.
- Veeva Align territory administration: tasks such as field force creation and changes, territory creation and changes, product creation, account rules, explicit assignment deletion, etc. will be weekly activities.
- Monitor the usage of OneCRM, including newly released features.
- Load data and ensure data consistency in the module (new contract templates, invitations, mass uploads, some profiles).

Content:
- Ensure on-time delivery, management, upload, tagging, and reporting of all digital assets and content ordered and approved for distribution through the major content management systems (Veeva Vault, 1CRM, Veeva 4M Promo Mat, DAM (Digital Asset Management), and other CMS tools) by collaborating with colleagues from medical, marketing, compliance, IT, and local affiliates, as well as external agencies, photo studios, and other creative sources such as stock libraries.
- Ensure that content is received properly with all added supporting information (key words, focus areas, categories, grouping, and other data) that should be available within the content for conversion and upload to the system.
- Demonstrable expertise in complex Veeva CLM development and deployment with teams, and in managing stakeholder interaction.
- Serve as the project originator for routing completed Veeva CLMs through the testing process before handoff to global, regional, or local teams in a highly regulated environment.
- Responsible for quality control and the technical viability of assets to be uploaded.
- Ensure that the tagging and metadata of content are consistent and appropriately applied to all assets for the region and functions.
- Build/develop Veeva CLMs, from content provided by teams, in a Veeva CLM creation platform, in alignment with the instructions provided.
- Partner closely with medical teams to ensure the most up-to-date and efficient search capabilities are applied and used in the most competent way.
- Analyse metadata, subtypes, search fields, and security policies; identify inefficiencies and consider new solutions to ensure digital content is utilized to its highest potential.
- Create, update, and distribute all necessary digital asset guidelines to ensure that all current processes are followed and kept relevant.
- Responsible for testing the content within the platform for performance, content format, and interactive elements (hotspots, links, etc.). Receive QC approval and then distribute content to the appropriate user group for UAT.
- Provide training sessions on the Veeva Vault application to MSLs in various countries.
- Mentor and train 1CRM digital asset specialists and create/update all training guidelines and materials as needed.
- Build and maintain intranet and internet websites using platforms such as SharePoint.

People: (1) Maintain effective relationships with stakeholders; (2) liaise and coordinate with colleagues in the medical function to receive content for dissemination through One CRM; (3) coordinate and perform QC activities to ensure quality-check validation and UAT acceptance.

Performance: (1) Manage receipt of content, including content approval documentation, per set quality standards; (2) perform initial QC on content to test rendering, performance, and interactivity; (3) troubleshoot content-related technical issues; (4) distribute content to the appropriate QC user group in a timely manner; (5) enhance content structure and digital asset management learnings; (6) build and maintain intranet and internet websites.

Process: (1) Follow detailed guidelines (for example, checking metadata with links to PDFs, reviewing content for assessment, format, expiration date, and tagging, and validating the MMRC #); (2) secure adherence to the QC process to maintain quality requirements.

About You
Work Experience:
- 5+ years of experience in database administration; expertise with Power BI, Snowflake, and data quality.
- Commercial operations knowledge and, desirably, experience supporting in-field teams.
- Proven experience in CRM administration, preferably with expertise in managing Veeva CRM.
- Proven delivery of outstanding results.
- Excellent problem-solving skills and attention to detail.
- Ability to leverage networks and to influence and lead projects.
- Ability to lead change while achieving business goals and objectives; acts for change, continuously challenging the status quo.
- High persistence and resilience.

Knowledge
- Robust knowledge of Veeva CRM, Veeva 4M, and Veeva Align for all user roles (front and back office).
- Good understanding of Veeva Vault, 1CRM, and Veeva 4M Promo Mat; effective understanding of content structure.
- Excellent English language knowledge and skills (written and oral), IT knowledge and skills, proven impactful communication, presentation, and persuasion skills, and the ability to work cross-functionally.
- Experience deploying transformational GTM solutions and implementing new customer-facing tools.

Skills And Competencies
Business: Numerate and analytical skills; ability to prioritize; robust knowledge of Digital, IT, and CRM; ability to work on one's own initiative and make quality decisions; excellent interpersonal skills to communicate, present, persuade, and argue among all GBU teams and partners.
Leadership: Leads by example and walks the talk; role-models Play-To-Win principles and behaviours; engages others through active and impactful communication; demonstrates a high level of drive, passion, and ambition for high performance; continuously challenges the status quo; develops fresh approaches in order to deliver results; has well-developed time management skills, mastering the prioritization of tasks and planning of workloads to ensure deadlines and desired results are met.
Networking: A strong relationship builder; seeks out new opportunities; demonstrates teamwork and always shares best practices; has experience successfully leading projects in multicultural environments and in a matrix organization.
Education: Graduate/postgraduate or advanced degree in areas such as Management, Statistics, Decision Sciences, Engineering, Life Sciences, Business Analytics, or a related field (e.g., PhD/MBA/Master's).
Languages: Excellent knowledge of English and strong communication skills, written and spoken.

Personal Characteristics
Hands-on, accountability, creativity, initiative, high persistence and resilience, stress management, learning agility, result orientation, ability to work on one's own, continuous improvement, listening skills, and the empathy to understand the needs of the different businesses within distinct geographies.

Why choose us?
- Bring the miracles of science to life alongside a supportive, future-focused team.
- Discover endless opportunities to grow your talent and drive your career, whether it's through a promotion or lateral move, at home or internationally.
- Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact.
- Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs, and at least 14 weeks' gender-neutral parental leave.
- Play an instrumental part in creating best practice within our Go-to-Market Capabilities.

Pursue progress, discover extraordinary
Progress doesn't happen without people. People from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. You can be one of those people, chasing change, embracing new ideas, and exploring all the opportunities we have to offer. Let's pursue progress. And let's discover extraordinary together. At Sanofi, we provide equal opportunities to all regardless of race, colour, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!
Pursue Progress. Discover Extraordinary. Join Sanofi and step into a new era of science, where your growth can be just as transformative as the work we do. We invest in you to reach further, think faster, and do what's never been done before. You'll help push boundaries, challenge convention, and build smarter solutions that reach the communities we serve. Ready to chase the miracles of science and improve people's lives? Let's Pursue Progress and Discover Extraordinary together. At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, protected veteran status, or other characteristics protected by law.
Posted 4 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Job
We are seeking a skilled Data Platform Engineer with expertise in High-Performance Computing (HPC) and cloud computing to support our scientific research activities globally (mostly in Canada, the US, France, and Germany). The ideal candidate will have experience managing and optimizing Linux-based HPC environments, as well as proficiency in AWS cloud services. This role involves collaborating with various R&D groups to provide technical support and drive continuous improvements in our computing infrastructure. The candidate should be adept at handling both open-source and commercial software across different R&D fields.

What You Will Be Doing
- Support in-silico activities in the Boston area, including the installation, configuration, and optimization of Linux workstations and applications.
- Provide continuous improvements and maintenance of the current Linux environments.
- Manage and optimize AWS cloud resources, including key services such as Amazon FSx for Lustre, EC2, and S3.
- Collaborate with research teams to understand their computational needs and provide tailored solutions.
- Ensure the security, scalability, and efficiency of cloud-based scientific workflows.
- Troubleshoot and resolve technical issues in both on-premises and cloud environments.
- Handle the compilation, installation, and maintenance of open-source software and commercial applications.
- Stay updated with the latest advancements in HPC and cloud technologies to recommend and implement improvements.

Main responsibilities:
- Proven experience managing and optimizing Linux-based HPC environments.
- Strong proficiency in AWS cloud services, particularly Amazon FSx for Lustre, EC2, and S3.
- Knowledge of cloud architecture, including network design, storage solutions, and security best practices.
- Familiarity with scripting languages such as Bash, Python, or Perl for automation and system administration tasks.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Ability to compile, install, and maintain open-source software and commercial applications.
- Strong problem-solving skills and the ability to work independently and in a team.
- Excellent communication skills to collaborate effectively with researchers and technical teams.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Preferred Qualifications
- Experience with other cloud platforms (e.g., Google Cloud, Azure).
- Knowledge of bioinformatics or scientific computing workflows.
- Experience working with HPC schedulers (SLURM, PBS, Grid Engine, etc.).
- Familiarity with Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
- Certifications in AWS or other cloud platforms.
- Experience with software tools in various R&D fields, such as:
  - Drug Design and Molecular Modeling: Schrödinger, MOE, Amber, GROMACS, NAMD, AlphaFold.
  - Genomics and Data Analysis: NGS pipelines (Cell Ranger), KNIME, R/RStudio/RShiny.
  - Pharmacokinetics and Clinical Simulations: Monolix, MATLAB, R/RStudio, Julia.
  - Structural Biology and Imaging: CryoSPARC, RELION, CCP4, PyMOL.

Why choose us?
- Bring the miracles of science to life alongside a supportive, future-focused team.
- Discover endless opportunities to grow your talent and drive your career, whether it's through a promotion or lateral move, at home or internationally.
- Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact.
- Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs, and at least 14 weeks' gender-neutral parental leave.
- Opportunity to work in an international environment, collaborating with diverse business teams and vendors, working in a dynamic team, and being fully empowered to propose and implement innovative ideas.

Pursue Progress.
Discover Extraordinary.
Progress doesn't happen without people. People from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. You can be one of those people, chasing change, embracing new ideas, and exploring all the opportunities we have to offer. Let's pursue progress. And let's discover extraordinary together. At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!

Pursue Progress. Discover Extraordinary. Join Sanofi and step into a new era of science, where your growth can be just as transformative as the work we do. We invest in you to reach further, think faster, and do what's never been done before. You'll help push boundaries, challenge convention, and build smarter solutions that reach the communities we serve. Ready to chase the miracles of science and improve people's lives? Let's Pursue Progress and Discover Extraordinary together. At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, protected veteran status, or other characteristics protected by law.
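Day-to-day work with HPC schedulers like those named above often involves scripting around their tabular output. A hedged sketch (the column layout is an assumption for illustration; real `sinfo` output depends on which format flags you pass) of turning whitespace-delimited, header-first scheduler output into records:

```python
def parse_scheduler_table(text):
    """Parse whitespace-delimited, header-first scheduler output
    (e.g. from a command like `sinfo -o '%P %a %D %T'`) into dicts."""
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    header = lines[0].split()
    return [dict(zip(header, ln.split())) for ln in lines[1:]]

def idle_partitions(records):
    """Names of partitions whose node STATE is reported as idle."""
    return [r["PARTITION"] for r in records if r.get("STATE") == "idle"]
```

The same parse-then-filter shape works for PBS `qstat` or Grid Engine `qstat` output, with different column names.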
Posted 4 days ago
3.0 years
0 Lacs
India
On-site
We need an experienced DevOps Engineer to single-handedly build our Automated Provisioning Service on Google Cloud Platform. You'll implement infrastructure automation that provisions complete cloud environments for B2B customers in under 10 minutes.

Core Responsibilities:

Infrastructure as Code Implementation
- Develop Terraform modules for automated GCP resource provisioning
- Create reusable templates for: GKE cluster deployment with predefined node pools; Cloud Storage bucket configuration; Cloud DNS and SSL certificate automation; IAM roles and service account setup
- Implement state management and version control for IaC

Automation & Orchestration
- Build Cloud Functions or Cloud Build triggers for provisioning workflows
- Create automation scripts (Bash/Python) for deployment orchestration
- Deploy containerized Node.js applications to GKE using Helm charts
- Configure automated SSL certificate provisioning via Certificate Manager

Security & Access Control
- Implement IAM policies and RBAC for customer isolation
- Configure secure service accounts with minimal required permissions
- Set up audit logging and monitoring for all provisioned resources

Integration & Deployment
- Create webhook endpoints to receive provisioning requests from the frontend
- Implement provisioning status tracking and error handling
- Document deployment procedures and troubleshooting guides
- Ensure the 5-10 minute provisioning-time SLA

Required Skills & Certifications:

MANDATORY Certification (must have one of the following):
- Google Cloud Associate Cloud Engineer (minimum requirement)
- Google Cloud Professional Cloud DevOps Engineer (preferred)
- Google Cloud Professional Cloud Architect (preferred)

Technical Skills (Must Have):
- 3+ years of hands-on experience with Google Cloud Platform
- Strong Terraform expertise with a proven track record
- GKE/Kubernetes deployment and management experience
- Proficiency in Bash and Python scripting
- Experience with CI/CD pipelines (Cloud Build preferred)
- Knowledge of GCP IAM and security best practices
- Ability to work independently with minimal supervision

Nice to Have:
- Experience developing RESTful APIs for service integration
- Experience with multi-tenant architectures
- Node.js/Docker containerization experience
- Helm chart creation and management

Deliverables (2-Month Timeline)
Month 1:
- Complete Terraform modules for all GCP resources
- Working prototype of the automated provisioning flow
- Basic IAM and security implementation
- Integration with webhook triggers
Month 2:
- Production-ready deployment with error handling
- Performance optimization (achieve <10 min provisioning)
- Complete documentation and runbooks
- Handover and knowledge transfer

Technical Environment
- Primary Tools: Terraform, GCP (GKE, Cloud Storage, Cloud DNS, IAM)
- Languages: Bash, Python (automation scripts)
- Orchestration: Cloud Build, Cloud Functions
- Containerization: Docker, Kubernetes, Helm

Ideal Candidate
- A self-starter who can own the entire DevOps scope independently
- A strong problem-solver comfortable with ambiguity
- Excellent time management skills to meet tight deadlines
- A clear communicator who documents their work thoroughly

Important Note: Google Cloud certification is mandatory for this position due to partnership requirements. Please include your certification details and ID number in your application.

Application Requirements:
- Proof of valid Google Cloud certification
- Examples of similar GCP automation projects
- GitHub/GitLab links to relevant Terraform modules (if available)
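The webhook endpoints this role calls for have to reject malformed provisioning requests before any Terraform run is triggered. A minimal sketch, assuming a hypothetical JSON payload shape (the field names and the simplified region pattern are illustrative, not the actual service contract):

```python
import json
import re

# Hypothetical required fields for a provisioning request.
REQUIRED = {"customer_id", "plan", "region"}
# Simplified check for GCP-style region names, e.g. "us-central1".
REGION_RE = re.compile(r"^[a-z]+-[a-z]+\d$")

def validate_provision_request(payload: bytes):
    """Validate a JSON provisioning request body; return (ok, errors)."""
    try:
        req = json.loads(payload)
    except json.JSONDecodeError:
        return False, ["body is not valid JSON"]
    errors = [f"missing field: {f}" for f in sorted(REQUIRED - req.keys())]
    if "region" in req and not REGION_RE.match(str(req["region"])):
        errors.append("region does not look like a GCP region")
    return (not errors), errors
```

In a real deployment this check would sit in the Cloud Function (or Cloud Run service) behind the webhook URL, returning 400 with the error list before any provisioning state is created.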
Posted 4 days ago
6.0 years
18 - 30 Lacs
India
On-site
Role: Senior Database Administrator (DevOps)
Experience: 7+ years
Type: Contract

Job Summary
We are seeking a highly skilled and experienced Database Administrator with a minimum of 6 years of hands-on experience managing complex, high-performance, and secure database environments. This role is pivotal in maintaining and optimizing our multi-platform database infrastructure, which includes PostgreSQL, MariaDB/MySQL, MongoDB, MS SQL Server, and AWS RDS/Aurora instances. You will be working primarily within Linux-based production systems (e.g., RHEL 9.x) and will play a vital role in collaborating with DevOps, Infrastructure, and Data Engineering teams to ensure seamless database performance across environments. The ideal candidate has strong experience with infrastructure automation tools like Terraform and Ansible, is proficient with Docker, and is well versed in cloud environments, particularly AWS. This is a critical role where your efforts will directly impact system stability, scalability, and security across all environments.

Key Responsibilities
- Design, deploy, monitor, and manage databases across production and staging environments.
- Ensure high availability, performance, and data integrity for mission-critical systems.
- Automate database provisioning, configuration, and maintenance using Terraform and Ansible.
- Administer Linux-based systems for database operations with an emphasis on system reliability and uptime.
- Establish and maintain monitoring systems, set up proactive alerts, and rapidly respond to performance issues or incidents.
- Work closely with DevOps and Data Engineering teams to integrate infrastructure with MLOps and CI/CD pipelines.
- Implement and enforce database security best practices, including data encryption, user access control, and auditing.
- Conduct root cause analysis and tuning to continuously improve database performance and reduce downtime.
Required Technical Skills

Database Expertise:
- PostgreSQL: Advanced skills in replication, tuning, backup/recovery, partitioning, and logical/physical architecture.
- MariaDB/MySQL: Proven experience in high-availability configurations, schema optimization, and performance tuning.
- MongoDB: Strong understanding of NoSQL structures, including indexing strategies, replica sets, and sharding.
- MS SQL Server: Capable of managing and maintaining enterprise-grade MS SQL Server environments.
- AWS RDS & Aurora: Deep familiarity with provisioning, monitoring, auto-scaling, snapshot management, and failover handling.

Infrastructure & DevOps
- 6+ years of experience as a Database Administrator or DevOps Engineer in Linux-based environments.
- Hands-on expertise with Terraform, Ansible, and Infrastructure as Code (IaC) best practices.
- Knowledge of networking principles, firewalls, VPCs, and security hardening.
- Experience with monitoring tools such as Datadog, Splunk, SignalFx, and PagerDuty for observability and alerting.
- Strong working experience with AWS cloud services (EC2, VPC, IAM, CloudWatch, S3, etc.).
- Exposure to other cloud providers like GCP, Azure, or IBM Cloud is a plus.
- Familiarity with Docker, container orchestration, and integrating databases into containerized environments.

Preferred Qualifications
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to collaborate in cross-functional teams and drive initiatives independently.
- A passion for automation, observability, and scalability in production-grade environments.

Must Have: AWS, Ansible, DevOps, Terraform

Skills: PostgreSQL, MariaDB, Datadog, containerization, networking, Linux, MongoDB, DevOps, Terraform, AWS Aurora, cloud services, Amazon Web Services (AWS), MS SQL Server, Ansible, AWS, MySQL, AWS RDS, Docker, infrastructure, database
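Snapshot management of the kind listed above is usually automated with a retention policy rather than pruned by hand. A hedged sketch (pure date logic; the keep-7-daily/keep-4-weekly defaults are illustrative, not a stated requirement of this role) of deciding which snapshots to delete:

```python
from datetime import date, timedelta

def snapshots_to_delete(snapshot_dates, today, keep_daily=7, keep_weekly=4):
    """Grandfather-style retention: keep every snapshot from the last
    `keep_daily` days, plus one snapshot per ISO week for the last
    `keep_weekly` weeks; return the rest, oldest first."""
    keep = set()
    daily_cutoff = today - timedelta(days=keep_daily)
    weekly_cutoff = today - timedelta(weeks=keep_weekly)
    per_week = {}
    for d in sorted(snapshot_dates):
        if d >= daily_cutoff:
            keep.add(d)
        elif d >= weekly_cutoff:
            # Keep the earliest snapshot seen in each ISO (year, week).
            per_week.setdefault(d.isocalendar()[:2], d)
    keep.update(per_week.values())
    return sorted(set(snapshot_dates) - keep)
```

With RDS the deletion step itself would be a `boto3` call against the dates this function returns; the policy logic stays the same.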
Posted 4 days ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category: Software Engineering

Job Details
About Salesforce
Salesforce is the #1 AI CRM, where humans with agents drive customer success together. Here, ambition meets action. Tech meets trust. And innovation isn't a buzzword; it's a way of life. The world of work as we know it is changing, and we're looking for Trailblazers who are passionate about bettering business and the world through AI, driving innovation, and keeping Salesforce's core values at the heart of it all. Ready to level up your career at the company leading workforce transformation in the agentic era? You're in the right place! Agentforce is the future of AI, and you are the future of Salesforce.

As an engineering leader, you will focus on developing the team around you. Bring your technical chops to drive your teams to success around feature delivery and live-site management for a complex cloud infrastructure service. You are as enthusiastic about recruiting and building a team as you are about the challenging technical problems your team will solve. You will also help shape, direct, and execute our product vision. You'll be challenged to blend customer-centric principles, industry-changing innovation, and the reliable delivery of new technologies. You will work directly with engineering, product, and design to create experiences that reinforce the Salesforce brand by delighting and wowing our customers with highly reliable and available services.

Responsibilities
- Drive the vision of enabling a full suite of Salesforce applications on Google Cloud, in collaboration with teams across geographies.
- Build and lead a team of engineers to deliver cloud frameworks, infrastructure automation tools, workflows, and validation platforms on our public cloud platforms.
Solid experience in building and evolving large-scale distributed systems to reliably process billions of data points Proactively identify reliability & data quality problems and drive triaging and remediation process. Invest in continuous employee development of a highly technical team by mentoring and coaching engineers and technical leads in the team. Recruit and attract top talent. Drive execution and delivery by collaborating with cross-functional teams, architects, product owners and engineers. Experience managing 2+ engineering teams. Experience building services on public cloud platforms like GCP, AWS, Azure Required Skills/Experiences B.S./M.S. in Computer Science or equivalent field. 12+ years of relevant experience in software development teams with 5+ years of experience managing teams A passionate, curious, creative self-starter who approaches problems with the right methodology and makes intelligent decisions. Laser focus on impact, balancing effort to value, and getting things done. Experience providing mentorship, technical leadership, and guidance to team members. Strong customer service orientation and a desire to help others succeed. Top-notch written and oral communication skills. Desired Skills/Experiences Working knowledge of modern technologies/services on public cloud is desirable Experience with container orchestration systems Kubernetes, Docker, Helios, Fleet Expertise in open source technologies like Elastic Search, Logstash, Kafka, MongoDB, Hadoop, Spark, Trino/Presto, Hive, Airflow, Splunk Benefits & Perks Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more!
World-class enablement and on-demand training with Trailhead.com Exposure to executive thought leaders and regular 1:1 coaching with leadership Volunteer opportunities and participation in our 1:1:1 model for giving back to the community For more details, visit https://www.salesforcebenefits.com/ Unleash Your Potential When you join Salesforce, you’ll be limitless in all areas of your life. Our benefits and resources support you to find balance and be your best , and our AI agents accelerate your impact so you can do your best . Together, we’ll bring the power of Agentforce to organizations of all sizes and deliver amazing experiences that customers love. Apply today to not only shape the future — but to redefine what’s possible — for yourself, for AI, and the world. Accommodations If you require assistance due to a disability applying for open positions please submit a request via this Accommodations Request Form. Posting Statement Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that’s inclusive, and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. 
It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.
Posted 4 days ago
0 years
0 Lacs
Kozhikode, Kerala, India
On-site
Pfactorial Technologies is a fast-growing AI/ML/NLP company at the forefront of innovation in Generative AI, voice technology, and intelligent automation. We specialize in building next-gen solutions using LLMs, agent frameworks, and custom ML pipelines. Join our dynamic team to work on real-world challenges and shape the future of AI-driven systems and smart automation. We are looking for an AI/ML Engineer – LLMs, Voice Agents & Workflow Automation (0–3 Years Experience) Experience with LLM integration pipelines (OpenAI, Vertex AI, Hugging Face models) Hands-on experience working with voice agents, TTS, STT, caching mechanisms, and ElevenLabs voice technology Strong understanding of vector databases like Qdrant or Milvus Hands-on experience with Langchain, LlamaIndex, or agent frameworks (e.g., AutoGen, CrewAI) Knowledge of FastAPI, Celery, and orchestration of ML/AI services Familiarity with cloud deployment on GCP, AWS, or Azure Ability to build and fine-tune matching, ranking, or retrieval-based models Developing agentic workflows for automation Implementing NLP pipelines for parsing, summarizing, and communication (e.g., email bots, script generators) Comfortable working with graph-based data representation and integrating with frontend Experience in multi-agent collaboration frameworks like Google Agent2Agent Practical experience in data scraping and enrichment for ML training datasets Understanding of compliance in AI applications 👉 For more updates, follow us on our LinkedIn page! https://in.linkedin.com/company/pfactorial
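A note on the vector-database requirement in the posting above: under the hood, stores like Qdrant and Milvus rank saved embeddings by similarity to a query embedding. A minimal stdlib-only sketch of that idea (the two-dimensional vectors and document names are invented for illustration; a real system would use model-generated embeddings and a vector-store client):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, docs, k=1):
    # docs: {doc_id: embedding}; return ids ranked by similarity to query.
    ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
    return ranked[:k]

docs = {"doc_a": [1.0, 0.0], "doc_b": [0.7, 0.7], "doc_c": [0.0, 1.0]}
print(top_k([0.9, 0.1], docs, k=2))  # → ['doc_a', 'doc_b']
```

The same ranking step generalizes to thousands of dimensions; the vector database's job is doing it efficiently at scale with indexes instead of a linear scan.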
Posted 4 days ago
2.0 - 1.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
MERN Stack Developer | Work From Office | Ahmedabad reverseBits is seeking a talented MERN developer to join our team. We are looking for someone with 2-5 years of experience in MERN stack development. You will develop and maintain high-quality web applications built on JavaScript-based frameworks and relational & NoSQL databases for high-scale products/systems. Responsibilities: Collaborate with cross-functional teams to understand business requirements and translate them into web application features. Develop and maintain production systems and databases and collaborate with the DevOps team for cloud operations. Write clean, efficient, and reusable code following coding standards and best practices. Conduct thorough testing and debugging to ensure system functionality and performance. Stay updated with the latest trends and technologies in JavaScript development to enhance your technical skills. Troubleshoot and resolve issues reported by users and stakeholders. Participate in code reviews to maintain code quality and improve team productivity. Troubleshoot and resolve issues in production environments, ensuring high availability and minimal downtime. Skills and Qualifications: At least 2 years of professional hands-on experience in Next.js, NestJS, and React.js Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field. Strong proficiency in JavaScript backend frameworks (Node.js and Express.js). Experience designing and developing RESTful APIs and microservices architecture. Solid understanding of database systems (MongoDB, MySQL) and data modelling concepts. Basic familiarity with cloud platforms such as AWS. Hands-on experience in MongoDB is a must. Proficiency in version control systems (Git) and collaborative development workflows. Excellent problem-solving skills and a proactive attitude toward addressing challenges. Strong communication skills and ability to work effectively in a collaborative team environment.
Prior experience working in an Agile/Scrum development environment. Bonus points if you have... Experience with containerization and orchestration tools (Docker, Kubernetes) is a plus. Apply here - https://tally.so/r/nGlzvL, to be considered for the role Job Types: Full-time, Permanent Pay: From ₹40,000.00 per month Schedule: Day shift Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Required) Education: Bachelor's (Required) Experience: MERN Stack Developer: 2 years (Required) TypeScript: 1 year (Required) Location: Ahmedabad, Gujarat (Required) Work Location: In person
Posted 4 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Hi Connections, Urgent - Hiring for below role About the Role: We are seeking a seasoned and highly skilled MLOps Engineer to join our growing team. The ideal candidate will have extensive hands-on experience with deploying, monitoring, and retraining machine learning models in production environments. You will be responsible for building and maintaining robust and scalable MLOps pipelines using tools like MLflow, Apache Airflow, Kubernetes, and Databricks or Azure ML. A strong understanding of infrastructure-as-code using Terraform is essential. You will play a key role in operationalizing AI/ML systems and ensuring high performance, availability, and automation across the ML lifecycle. --- Key Responsibilities: · Design and implement scalable MLOps pipelines for model training, validation, deployment, and monitoring. · Operationalize machine learning models using MLflow, Airflow, and containerized deployments via Kubernetes. · Automate and manage ML workflows across cloud platforms such as Azure ML or Databricks. · Develop infrastructure using Terraform for consistent and repeatable deployments. · Trace API calls to LLMs, Azure OCR and Paradigm · Implement performance monitoring, alerting, and logging for deployed models using custom and third-party tools. · Automate model retraining and continuous deployment pipelines based on data drift and model performance metrics. · Ensure traceability, reproducibility, and auditability of ML experiments and deployments. · Collaborate with Data Scientists, ML Engineers, and DevOps teams to streamline ML workflows. · Apply CI/CD practices and version control to the entire ML lifecycle. · Ensure secure, reliable, and compliant deployment of models in production environments. --- Required Qualifications: · 5+ years of experience in MLOps, DevOps, or ML engineering roles, with a focus on production ML systems. · Proven experience deploying machine learning models using MLflow and workflow orchestration with Apache Airflow. 
· Hands-on experience with Kubernetes for container orchestration in ML deployments. · Proficiency with Databricks and/or Azure ML, including model training and deployment capabilities. · Solid understanding and practical experience with Terraform for infrastructure-as-code. · Experience automating model monitoring and retraining processes based on data and model drift. · Knowledge of CI/CD tools and principles applied to ML systems. · Familiarity with monitoring tools and observability stacks (e.g., Prometheus, Grafana, Azure Monitor). · Strong scripting skills in Python · Deep understanding of ML lifecycle challenges including model versioning, rollback, and scaling. · Excellent communication skills and ability to collaborate across technical and non-technical teams. --- Nice to Have: · Experience with Azure DevOps or GitHub Actions for ML CI/CD. · Exposure to model performance optimization and A/B testing in production environments. · Familiarity with feature stores and online inference frameworks. · Knowledge of data governance and ML compliance frameworks. · Experience with ML libraries like scikit-learn, PyTorch, or TensorFlow. --- Education: · Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
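The posting above calls for "automated model retraining based on data drift." At its simplest, a drift gate compares summary statistics of live features against the training baseline and flags features whose live mean has shifted too far; a hedged, stdlib-only sketch (the feature name, values, and z-score threshold are all illustrative, not a production recipe):

```python
import statistics

def needs_retraining(baseline, live, z_threshold=3.0):
    # Flag drift when a feature's live mean departs from the training
    # baseline mean by more than z_threshold standard errors.
    drifted = []
    for feature, base_values in baseline.items():
        mu = statistics.mean(base_values)
        sigma = statistics.stdev(base_values)
        se = sigma / (len(live[feature]) ** 0.5)
        live_mu = statistics.mean(live[feature])
        if se > 0 and abs(live_mu - mu) / se > z_threshold:
            drifted.append(feature)
    return drifted

baseline = {"age": [30, 32, 31, 29, 33, 30, 31]}
print(needs_retraining(baseline, {"age": [31, 30, 32, 29]}))  # → []
print(needs_retraining(baseline, {"age": [55, 58, 60, 57]}))  # → ['age']
```

In a real pipeline the non-empty result would trigger the retraining DAG (e.g. via an Airflow sensor) rather than just being printed, and richer tests such as population stability index or KS statistics would replace the mean-shift check.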
Posted 4 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Manager Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. 
Job Description & Summary: A career within PwC. Job Title: Cloud Engineer (Java 17+, Spring Boot, Microservices, AWS) Job Type: Full-Time Job Overview: As a Cloud Engineer, you will be responsible for developing, deploying, and managing cloud-based applications and services on AWS. You will use your expertise in Java 17+, Spring Boot, and Microservices to build robust and scalable cloud solutions. This role will involve working closely with development teams to ensure seamless cloud integration, optimizing cloud resources, and leveraging AWS tools to ensure high availability, security, and performance. Key Responsibilities: Cloud Infrastructure: Design, build, and deploy cloud-native applications on AWS, utilizing services such as EC2, S3, Lambda, RDS, EKS, API Gateway, and CloudFormation. Backend Development: Develop and maintain backend services and microservices using Java 17+ and Spring Boot, ensuring they are optimized for the cloud environment. Microservices Architecture: Architect and implement microservices-based solutions that are scalable, secure, and resilient, ensuring they align with AWS best practices. CI/CD Pipelines: Set up and manage automated CI/CD pipelines using tools like Jenkins, GitLab CI, or AWS CodePipeline for continuous integration and deployment. AWS Services Integration: Integrate AWS services such as DynamoDB, SQS, SNS, CloudWatch, and Elastic Load Balancing into microservices to improve performance and scalability. Performance Optimization: Monitor and optimize the performance of cloud infrastructure and services, ensuring efficient resource utilization and cost management in AWS. Security: Implement security best practices in cloud applications and services, including IAM roles, VPC configuration, encryption, and authentication mechanisms. Troubleshooting & Support: Provide ongoing support and troubleshooting for cloud-based applications, ensuring uptime, availability, and optimal performance.
Collaboration: Work closely with cross-functional teams, including frontend developers, system administrators, and DevOps engineers, to ensure end-to-end solution delivery. Documentation: Document the architecture, implementation, and operations of cloud infrastructure and applications to ensure knowledge sharing and compliance. Required Skills & Qualifications: Strong experience with Java 17+ (latest version) and Spring Boot for backend development. Hands-on experience with AWS Cloud services such as EC2, S3, Lambda, RDS, EKS, API Gateway, DynamoDB, SQS, SNS, and CloudWatch. Proven experience in designing and implementing microservices architectures. Solid understanding of cloud security practices, including IAM, VPC, encryption, and secure cloud-native application development. Experience with CI/CD tools and practices (e.g., Jenkins, GitLab CI, AWS CodePipeline). Familiarity with containerization technologies like Docker, and orchestration tools like Kubernetes. Ability to optimize cloud applications for performance, scalability, and cost-efficiency. Experience with monitoring and logging tools like CloudWatch, ELK Stack, or other AWS-native tools. Knowledge of RESTful APIs and API Gateway for exposing microservices. Solid understanding of version control systems like Git and familiarity with Agile methodologies. Strong problem-solving and troubleshooting skills, with the ability to work in a fast-paced environment. Preferred Skills: AWS certifications, such as AWS Certified Solutions Architect or AWS Certified Developer. Experience with Terraform or AWS CloudFormation for infrastructure as code. Familiarity with Kubernetes and EKS for container orchestration in the cloud. Experience with serverless architectures using AWS Lambda. Knowledge of message queues (e.g., SQS, Kafka) and event-driven architectures. Education & Experience: Bachelor’s degree in Computer Science, Engineering, or related field, or equivalent practical experience. 
7-11 years of experience in software development with a focus on AWS cloud and microservices. Mandatory Skill Sets Cloud Engineer (Java + Spring Boot + AWS) Preferred Skill Sets Cloud Engineer (Java + Spring Boot + AWS) Years Of Experience Required 7-11 years Education Qualification BE/BTECH, ME/MTECH, MBA, MCA Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering, Master of Engineering, Master of Business Administration Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Cloud Engineering Optional Skills Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 33 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
Posted 4 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Operations Management Level Associate Job Description & Summary At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications. You will leverage a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. 
To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary: A career within Enterprise Architecture services will provide you with the opportunity to bring our clients a competitive advantage through defining their technology objectives, assessing solution options, and devising architectural solutions that help them achieve both strategic goals and meet operational requirements. We help build software and design data platforms, manage large volumes of client data, develop compliance procedures for data management, and continually research new technologies to drive innovation and sustainable change. Responsibilities Design solutions for cloud (e.g. AWS, Azure, and GCP) which are optimal, secure, efficient, scalable, resilient, and reliable, while remaining compliant with industry cloud standards and policies. Design strategies and tools to deploy, monitor, and administer cloud applications and the underlying services for cloud (e.g. Azure, AWS, GCP, and private cloud). Should have experience with and perform cloud deployment, containerization, movement of applications from on-premise to cloud, cloud migration approaches, and SaaS/PaaS/IaaS. Should have experience with infra set-up, availability zones, cloud services deployment, and connectivity set-up in line with AWS, Azure, GCP, and OCI. Should have a skill set around GCP, AWS, Oracle Cloud, and Azure, and multi-cloud strategy. Excellent hands-on experience in implementation and design of cloud infrastructure environments using modern CI/CD deployment patterns with Terraform, Jenkins, and Git. Strong understanding of application build and deployments with CI/CD pipelines. Mandatory Skill Sets Architect & design solutions for cloud (AWS, Azure, GCP, and private cloud). Should have experience with and perform cloud deployment, containerization, movement of applications from on-premise to cloud, cloud migration approaches, and SaaS/PaaS/IaaS.
Design of cloud infrastructure environments, with strong experience in application containerization and orchestration with Docker and Kubernetes on cloud platforms. Preferred Skill Sets Certification would be preferred in AWS, Azure, GCP and private cloud, Kubernetes. Years Of Experience Required 3+ years Education Qualification B.E./ B.Tech / MCA/ M.E/ M.TECH/ MBA/ PGDM/ B.SC - IT. All qualifications should be in regular full-time mode with no extension of course duration due to backlogs Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor Degree, Master Degree Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills HCI Research Optional Skills Accepting Feedback, Active Listening, Analytical Reasoning, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Documentation Development, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Optimism, Performance Assessment, Performance Management Software, Problem Solving, Product Management, Product Operations, Project Delivery {+ 11 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date August 9, 2025
Posted 4 days ago
1.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At NIIT Managed Training Services, we’re transforming the way the world learns, for the better. That’s why the world’s best-run learning functions across 30 countries trust us with their learning and talent. Since 1981, we have helped leading companies transform their learning ecosystems while increasing the business value of learning. Our comprehensive, high-impact managed learning solutions weave together the best of learning theory, technology, operations, and services to enable a thriving workforce. Link for our website: https://www.niit.com/en/learning-outsourcing/services/ We are hiring for Python Developer - AI & Web Applications. This is a 1-year contract role which can be extended based on performance and project requirements. 5 days working from office (Gurgaon, Sector 34), general shift. About This Role We are seeking a skilled Python Developer to join our innovative team, focusing on AI applications, autonomous agent development, and web-based solutions. This role offers the opportunity to work with large language models, build interactive web applications using modern Python frameworks, and develop autonomous systems that can perform complex tasks.
Key Responsibilities Development & Architecture • Design and develop web applications using FastAPI, Flask, and Gradio frameworks • Create interactive user interfaces combining Python backends with HTML/CSS frontends • Build responsive web applications with seamless integration between frontend and backend components • Deploy and manage applications using Azure cloud services for scalability and reliability • Develop and integrate autonomous agent systems for automated task execution and decision-making AI & Machine Learning • Work with large language models (LLMs) from various providers including OpenAI, Anthropic, and Hugging Face • Implement LLM-powered features including chat interfaces, content generation, and intelligent automation • Design and deploy autonomous agents capable of complex reasoning, planning, and multi-step task execution • Integrate multiple AI services and APIs to create comprehensive intelligent applications • Build agent orchestration systems for coordinated multi-agent workflows Data Engineering & Analytics • Work with large datasets, implementing robust data preprocessing, cleaning, and transformation pipelines • Collaborate with data scientists to integrate NLP-driven solutions into production applications • Optimize application performance, responsiveness, and scalability for enterprise-level deployments • Ensure data security, privacy, and compliance throughout the development lifecycle Collaboration & Best Practices • Partner with cross-functional teams to understand business requirements and translate them into technical solutions • Conduct comprehensive code reviews and maintain high standards for code quality and documentation • Stay current with latest advancements in AI, autonomous systems, NLP, and cloud technologies • Mentor junior developers and contribute to technical decision-making processes Required Qualifications & Experience • Bachelor's degree in Computer Science, Software Engineering, Web Development, or related field 
• 3+ years of proven experience as a Python Developer with expertise in web frameworks • Demonstrated experience building web applications with FastAPI, Flask, and Gradio NIIT is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other protected characteristic.
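The "agent orchestration systems for coordinated multi-agent workflows" responsibility above usually reduces to routing a payload through a sequence of specialized handlers, each consuming the previous one's output. A framework-free sketch (the handler functions are invented stand-ins; in practice each step would call an LLM or an external service):

```python
def summarize(text):
    # Stand-in for an LLM summarization agent: keep the first sentence.
    return text.split(".")[0] + "."

def uppercase(text):
    # Stand-in for a post-processing agent.
    return text.upper()

class Pipeline:
    # Minimal orchestration: run registered agents in order,
    # feeding each one's output into the next.
    def __init__(self):
        self.steps = []

    def register(self, fn):
        self.steps.append(fn)
        return self  # allow chaining

    def run(self, payload):
        for fn in self.steps:
            payload = fn(payload)
        return payload

pipe = Pipeline().register(summarize).register(uppercase)
print(pipe.run("First sentence. Second sentence."))  # → FIRST SENTENCE.
```

Real orchestration frameworks (AutoGen, CrewAI, LangChain agents) add branching, retries, tool calls, and shared memory on top of this same pass-the-payload core.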
Posted 4 days ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Solution Architect (India) Work Mode: Remote/Hybrid Required exp: 10+ years Shift timing: Minimum 4 hours overlap required with US time Role Summary: The Solution Architect is responsible for designing robust, scalable, and high-performance AI and data-driven systems that align with enterprise goals. This role serves as a critical technical leader—bridging AI/ML, data engineering, ETL, cloud architecture, and application development. The ideal candidate will have deep experience across traditional and generative AI, including Retrieval-Augmented Generation (RAG) and agentic AI systems, along with strong fundamentals in data science, modern cloud platforms, and full-stack integration. Key Responsibilities: Design and own the end-to-end architecture of intelligent systems including data ingestion (ETL/ELT), transformation, storage, modeling, inferencing, and reporting. Architect GenAI-powered applications using LLMs, vector databases, RAG pipelines, and agentic workflows; integrate with enterprise knowledge graphs and document repositories. Lead the design and deployment of agentic AI systems that can plan, reason, and interact autonomously within business workflows. Collaborate with cross-functional teams including data scientists, data engineers, MLOps, and frontend/backend developers to deliver scalable and maintainable solutions. Define patterns and best practices for traditional ML and GenAI projects, covering model governance, explainability, reusability, and lifecycle management. Ensure seamless integration of ML/AI systems via RESTful APIs with frontend interfaces (e.g., dashboards, portals) and backend systems (e.g., CRMs, ERPs). Architect multi-cloud or hybrid cloud AI solutions, leveraging services from AWS, Azure, or GCP for scalable compute, storage, orchestration, and deployment. Provide technical oversight for data pipelines (batch and real-time), data lakes, and ETL frameworks, ensuring secure and governed data movement.
Conduct architecture reviews, mentor engineering teams, and drive design standards for AI/ML, data engineering, and software integration. Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 10+ years of experience in software architecture, including at least 4 years in AI/ML-focused roles. Required Skills: Expertise in machine learning (regression, classification, clustering), deep learning (CNNs, RNNs, transformers), and NLP. Experience with Generative AI frameworks and services (e.g., OpenAI, LangChain, Azure OpenAI, Amazon Bedrock). Strong hands-on Python skills, with experience in libraries such as Scikit-learn, Pandas, NumPy, TensorFlow, or PyTorch. Proficiency in RESTful API development and integration with frontend components (React, Angular, or similar is a plus). Deep experience in ETL/ELT processes using tools like Apache Airflow, Azure Data Factory, or AWS Glue. Strong knowledge of cloud-native architecture and AI/ML services on at least one cloud platform (AWS, Azure, or GCP). Experience with vector databases (e.g., Pinecone, FAISS, Weaviate) and semantic search patterns. Experience in deploying and managing ML models with MLOps frameworks (MLflow, Kubeflow). Understanding of microservices architecture, API gateways, and container orchestration (Docker, Kubernetes). Frontend experience is good to have.
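The RAG pipelines this role architects all share one shape: retrieve relevant context, then stitch it into the model prompt. A toy end-to-end sketch with a keyword-overlap retriever standing in for a vector store (document contents and the prompt template are invented for illustration; no real LLM call is made):

```python
DOCS = {
    "billing": "Invoices are issued on the first business day of each month.",
    "support": "Support tickets are answered within 24 hours.",
}

def retrieve(query, docs, k=1):
    # Toy retriever: rank documents by shared lowercase words with the
    # query. A real RAG system ranks by embedding similarity instead.
    words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(words & set(docs[d].lower().split())),
        reverse=True,
    )
    return [docs[d] for d in scored[:k]]

def build_prompt(query, docs):
    # Stitch retrieved context into the prompt sent to the LLM.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("When are invoices issued?", DOCS)
print("Invoices are issued" in prompt)  # → True
```

Grounding the answer in retrieved text rather than model memory is what gives RAG its traceability; the retriever quality, chunking, and prompt template are the main architectural levers.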
Posted 4 days ago
30.0 years
0 Lacs
Durgapur, West Bengal, India
On-site
Company Overview Pinnacle Infotech values inclusive growth in an agile, diverse environment. With 30+ years of global experience, 3,400+ experts completed 15,000+ projects across 43+ countries for 5,000+ clients. Join us for rapid advancement, cutting-edge training, and impactful global projects. Embrace E.A.R.T.H. values, celebrate uniqueness, and drive swift career growth with Pinnaclites! Job Title: MLOps Engineer Job Summary: As an MLOps Engineer, you will be responsible for building, deploying, and maintaining the infrastructure required for machine learning models and ETL data pipelines. You will work closely with data scientists and software developers to streamline our machine learning operations, manage data workflows, and ensure that the ML solutions are scalable, reliable, and secure. Location: Durgapur/Jaipur/Madurai Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. 3+ years of experience in MLOps, data engineering, or a similar role. Proficiency in programming languages such as Python, Spark, and SQL. Experience with ML model deployment frameworks and tools (e.g., MLflow). Hands-on experience with cloud platforms (AWS, Azure, GCP) and infrastructure management. Familiarity with containerization and orchestration tools (Docker, Kubernetes). Understanding of DevOps practices, CI/CD pipelines, and monitoring tools. Excellent problem-solving skills and the ability to work independently and as part of a team. Key Responsibilities: Data Engineering and Pipeline Management: Design, develop, optimize, and maintain ETL processes and data pipelines to collect, process, and store data from multiple sources. Ensure data quality, integrity, and consistency across various databases. Collaborate with data scientists to make data available in the right format and at the right speed for machine learning. Implement and manage data security and privacy protocols as per industry standards.
ML Operations and Deployment:
Design, build, and optimize scalable and reliable ML deployment pipelines.
Develop CI/CD pipelines for automated ML model training, testing, and deployment.
Implement and manage containerization (Docker) and orchestration tools (e.g., Kubernetes) for ML workflows.
Monitor and troubleshoot model performance and infrastructure, ensuring smooth operation in production environments.

Infrastructure Management:
Manage cloud infrastructure (AWS, GCP, Azure) to support data and ML operations.
Optimize and scale ML and data workflows to handle large-scale datasets.
Set up and manage monitoring tools for infrastructure and application performance.

Collaboration and Best Practices:
Work closely with data science, software development, and product teams to understand project needs and optimize model performance.
Develop and document best practices, guidelines, and protocols for ML lifecycle management.

Interested candidates, please share your resume at sunitas@pinnacleinfotech.com
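The data-quality responsibility above (validating rows before they reach a model) can be sketched in plain Python. This is an illustrative fragment only; the function name `validate_rows` and the sample rows are assumptions, and a production pipeline would typically use Spark plus a validation framework rather than hand-rolled checks.

```python
# Minimal sketch of a data-quality gate inside an ETL step (illustrative).
# Rows failing required-field or numeric checks are quarantined rather
# than silently passed to model training.

def validate_rows(rows, required_fields, numeric_fields):
    """Split rows into (valid, invalid) based on simple quality rules."""
    valid, invalid = [], []
    for row in rows:
        ok = all(row.get(f) not in (None, "") for f in required_fields)
        for f in numeric_fields:
            try:
                float(row.get(f, ""))
            except (TypeError, ValueError):
                ok = False
        (valid if ok else invalid).append(row)
    return valid, invalid

rows = [
    {"id": "1", "amount": "10.5"},
    {"id": "", "amount": "3.2"},    # missing required field
    {"id": "3", "amount": "oops"},  # non-numeric amount
]
valid, invalid = validate_rows(rows, ["id"], ["amount"])
```

In a real pipeline, the `invalid` rows would be logged and routed to a quarantine table so data scientists can inspect failures without blocking the main flow.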
Posted 4 days ago
5.0 years
0 Lacs
Telangana, India
On-site
Ignite the Future of Language with AI at Teradata!

What You'll Do: Shape the Way the World Understands Data
At Teradata, we're not just managing data; we're unleashing its full potential. Our ClearScape Analytics™ platform and pioneering Enterprise Vector Store are empowering the world's largest enterprises to derive unprecedented value from their most complex data. We're rapidly pushing the boundaries of what's possible with Artificial Intelligence, especially in the exciting realm of autonomous and agentic systems.

We’re building intelligent systems that go far beyond automation — they observe, reason, adapt, and drive complex decision-making across large-scale enterprise environments. As a member of our AI engineering team, you’ll play a critical role in designing and deploying advanced AI agents that integrate deeply with business operations, turning data into insight, action, and measurable outcomes.

You’ll work alongside a high-caliber team of AI researchers, engineers, and data scientists tackling some of the hardest problems in AI and enterprise software — from scalable multi-agent coordination and fine-tuned LLM applications, to real-time monitoring, drift detection, and closed-loop retraining systems. If you're passionate about building intelligent systems that are not only powerful but observable, resilient, and production-ready, this role offers the opportunity to shape the future of enterprise AI from the ground up.

We are seeking a highly skilled Senior AI Engineer to drive the development and deployment of Agentic AI systems with a strong emphasis on AI observability and data platform integration. You will work at the forefront of cutting-edge AI research and its practical application — designing, implementing, and monitoring intelligent agents capable of autonomous reasoning, decision-making, and continuous learning.
Who You'll Work With: Join Forces with the Best
Imagine collaborating daily with some of the brightest minds in the company – individuals who champion diversity, equity, and inclusion as fundamental to our success. You'll be part of a cohesive force, laser-focused on delivering high-quality, critical, and highly visible AI/ML functionality within the Teradata Vantage platform. Your insights will directly shape the future of our intelligent data solutions. You'll report directly to the inspiring Sr. Manager, Software Engineering, who will champion your growth and empower your contributions.

What Makes You a Qualified Candidate: Skills in Action
Architect and implement Agentic AI systems capable of multi-step reasoning, tool use, and autonomous task execution.
Build and maintain AI observability pipelines to monitor agent behavior, decision traceability, model drift, and overall system performance.
Design and develop data platform components that support real-time and batch processing, data lineage, and high-availability systems for AI training and inference workflows.
Integrate LLMs and multi-modal models into robust AI agents using frameworks like LangChain, OpenAI, Hugging Face, or custom stacks.
Collaborate with product, research, and MLOps teams to ensure smooth integration between AI agents and user-facing applications.
Implement safeguards, feedback loops, and evaluation metrics to ensure AI safety, reliability, and compliance.
Passion for staying current with AI research, especially in the areas of reasoning, planning, and autonomous systems.
You are an excellent backend engineer who codes daily and owns systems end-to-end.
Strong engineering background (Python/Java/Golang, API integration, backend frameworks).
Strong system design skills and understanding of distributed systems.
You’re obsessive about reliability, debuggability, and ensuring AI systems behave deterministically when needed.
Hands-on experience with machine learning and deep learning frameworks: TensorFlow, PyTorch, Scikit-learn.
Hands-on experience with LLMs, agent frameworks (LangChain, AutoGPT, ReAct, etc.), and orchestration tools.
Experience with AI observability tools and practices (e.g., logging, monitoring, tracing, metrics for AI agents or ML models).
Solid understanding of model performance monitoring, drift detection, and responsible AI principles.

What You Bring: Passion and Potential
A Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field – your academic foundation is key.
A genuine excitement for AI and large language models (LLMs) is a significant advantage – you'll be working at the cutting edge!
Design, develop, and deploy agentic systems integrated into the data platform.
5+ years of experience in software architecture, backend systems, or AI infrastructure.
Strong experience with LLMs, transformers, and tools like OpenAI API, Anthropic Claude, or open-source LLMs.
Deep understanding of AI observability (e.g., tracing, monitoring, model explainability, drift detection, evaluation pipelines).
Build dashboards and metrics pipelines to track key AI system indicators: latency, accuracy, tool invocation success, hallucination rate, and failure modes.
Integrate observability tooling (e.g., OpenTelemetry, Prometheus, Grafana) with LLM-based workflows and agent pipelines.
Familiarity with modern data platform architecture.
Strong background in distributed systems, microservices, and cloud platforms (AWS, GCP, Azure).
Experience in software development (Python, Go, or Java preferred).
Familiarity with backend service development, APIs, and distributed systems.
Familiarity with containerized environments (Docker, Kubernetes) and CI/CD pipelines.
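The drift-detection work described above can be illustrated with a tiny, standard-library-only example. One common metric is the Population Stability Index (PSI) over binned feature distributions; this sketch is an assumption about approach, not Teradata's actual tooling, and real observability stacks (Arize, WhyLabs, etc.) compute far richer metrics.

```python
import math

# Illustrative drift detection via Population Stability Index (PSI)
# on pre-binned feature proportions. A common rule of thumb:
# PSI < 0.1 -> stable; PSI > 0.25 -> significant drift.

def psi(expected, actual, eps=1e-6):
    """PSI between two binned distributions (lists of proportions)."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time distribution
no_drift = [0.24, 0.26, 0.25, 0.25]  # live traffic, similar shape
drifted  = [0.10, 0.15, 0.25, 0.50]  # live traffic, mass shifted
```

In a monitoring pipeline, a PSI value crossing the alert threshold would typically page the on-call engineer or trigger a retraining workflow.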
Bonus: Research experience or contributions to open-source agentic frameworks.
You're knowledgeable about open-source tools and technologies and know how to leverage and extend them to build innovative solutions.

Preferred Qualifications
Experience with tools such as Arize AI, WhyLabs, Traceloop, or Prometheus plus custom monitoring for AI/ML.
Contributions to open-source agent frameworks or AI infra.
Advanced degree (MS/PhD) in Computer Science, Artificial Intelligence, or a related field.
Experience working with multi-agent systems, real-time decision systems, or autonomous workflows.

Why We Think You’ll Love Teradata
We prioritize a people-first culture because we know our people are at the very heart of our success.
We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work.
We focus on well-being because we care about our people and their ability to thrive both personally and professionally.
We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are.

Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. We are proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status.
Posted 4 days ago
16.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for a hands-on DevOps SME/Architect with 14–16 years of experience to lead and enhance our DevOps processes for a 24–30 member team. The ideal candidate will identify gaps in current methodologies, build a robust backlog, set standards for monitoring and automation, and provide hands-on support to resolve complex issues. This role requires deep expertise in CI/CD pipelines, containerization, and modern DevOps tools, along with strong leadership to guide the team toward operational excellence.

Key Responsibilities:
• Gap Analysis & Strategy: Assess current DevOps practices, identify inefficiencies, and propose actionable improvements to optimize workflows.
• Backlog Development: Build and prioritize a comprehensive DevOps backlog to address gaps, enhance automation, and improve system reliability.
• Hands-On Leadership: Actively participate in troubleshooting, debugging, and resolving critical issues, providing hands-on support when required.
• CI/CD Pipeline Management: Design, implement, and optimize CI/CD pipelines using Jenkins and other tools to ensure seamless software delivery.
• Containerization & Orchestration: Lead the adoption and management of containerized environments using Docker and Kubernetes, ensuring scalability and reliability.
• Automation & Monitoring Standards: Establish best-in-class standards for monitoring, logging, and automation to enhance system performance and uptime.
• Scripting & Development: Develop and maintain scripts in Groovy, Python, or other relevant languages to automate processes and improve efficiency.
• Team Collaboration: Mentor and guide a team of 24–30 DevOps engineers, fostering a culture of continuous improvement and collaboration.
• Stakeholder Engagement: Work closely with development, QA, and operations teams to align DevOps strategies with business objectives.
• Innovation: Stay updated on industry trends and introduce innovative tools and practices to keep Clovertex at the forefront of DevOps excellence.

Required Skills and Qualifications:
• Experience: 14–16 years of hands-on DevOps experience, with a proven track record as a DevOps SME or Architect.
• Technical Expertise:
  • Deep knowledge of Jenkins for CI/CD pipeline development and management.
  • Proficiency in scripting languages such as Groovy and Python.
  • Extensive experience with containerization (Docker) and orchestration (Kubernetes).
  • Strong understanding of monitoring and automation tools to set enterprise-grade standards.
• Leadership: Ability to lead and mentor a large team, drive process improvements, and build a prioritized backlog.
• Problem-Solving: Strong analytical skills to identify gaps, troubleshoot complex issues, and implement effective solutions.
• Hands-On Approach: Willingness to dive into technical challenges and provide hands-on support when needed.
Posted 4 days ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications. You will leverage a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth.
To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: We are looking for a seasoned Azure DevOps experienced candidate.

Responsibilities:
Azure Landing Zone using IaC.
Azure (Compute, Storage, Networking, BCP, Identity, Security, Automation): a good grasp of at least 4 of these 7 areas is expected.
Terraform: state management knowledge is a must, plus modules, provisioners, built-in functions, and deployment through DevOps tools.
Containerization (Docker, K8s/AKS): either of them, covering identity, network, security, monitoring, and backup along with core concepts and Kubernetes architecture.
DevOps (ADO, Jenkins, GitHub): YAML-based pipelines, approval gates, credential management, the stage-job-steps-task hierarchy, job/task orchestration, and agent pools.
Migrations: knowledge of migration planning and assessment would be ideal.
Experience with different caching architectures.
Knowledge of security compliance frameworks, such as SOC 2, PCI, HIPAA, and ISO 27001.
Knowledge of well-known open-source tools for monitoring, trending, and configuration management.

Mandatory Skill Sets: Azure Infra Design, CI/CD pipeline, Azure Migration, Terraform
Preferred Skill Sets: Azure Infra Design, CI/CD pipeline, Azure Migration, Terraform
Years of Experience Required: 4 to 8 years
Education Qualification: BE/B.Tech/MBA/MCA
Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering, Bachelor of Technology
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: Java
Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
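The stage-job-steps-task hierarchy called out in the DevOps requirements above might look like the following Azure DevOps pipeline fragment. This is an illustrative sketch with placeholder names (stage, job, and script contents are assumptions), not a pipeline from the role itself.

```yaml
# Illustrative azure-pipelines.yml fragment showing the
# stage -> job -> steps -> task hierarchy and a deployment stage
# whose approval gates are configured on the target environment.
trigger:
  - main

stages:
  - stage: Build
    jobs:
      - job: BuildJob
        pool:
          vmImage: ubuntu-latest
        steps:
          - task: UsePythonVersion@0
            inputs:
              versionSpec: '3.11'
          - script: pip install -r requirements.txt && pytest
            displayName: Run tests

  - stage: Deploy
    dependsOn: Build
    jobs:
      - deployment: DeployJob
        environment: production   # approvals/checks live on the environment
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "terraform apply would run here"
```

Note how approval gates attach to the `production` environment rather than to the YAML itself, which is the pattern the interview topics above (approval gates, credential management, agent pools) refer to.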
Posted 4 days ago
8.0 years
0 Lacs
India
Remote
Quant Engineer
Location: Bangalore (Remote)
Employment Type: Full-time

Quant Engineer Job Description:
Strong Python developer with up-to-date skills, including web development, cloud (ideally Azure), Docker, testing, and DevOps (ideally Terraform + GitHub Actions). Data engineering (PySpark, lakehouses, Kafka) is a plus.
Good understanding of maths and finance, as the role interacts with quant devs, analysts, and traders. Familiarity with e.g. PnL, greeks, volatility, partial derivatives, the normal distribution, etc. Financial and/or trading exposure is nice to have, particularly in energy commodities.

Responsibilities:
Productionise quant models into software applications, ensuring robust day-to-day operation, monitoring, and back-testing are in place.
Translate trader or quant analyst needs into software product requirements.
Prototype and implement data pipelines.
Coordinate closely with analysts and quants during development of models, acting as a technical support and coach.
Produce accurate, performant, scalable, secure software, and support best practices following defined IT standards.
Transform proofs of concept into a larger deployable product in Shell and outside.
Work in a highly collaborative, friendly Agile environment; participate in ceremonies and continuous-improvement activities.
Ensure that documentation and explanations of results of analysis or modelling are fit for purpose for both technical and non-technical audiences.
Mentor and coach other teammates who are upskilling in Quant Engineering.

Professional Qualifications & Skills
Educational Qualification: Graduation/postgraduation/PhD with 8+ years' work experience as a software developer/data scientist. Degree in STEM: computer science, engineering, mathematics, or a relevant field of applied mathematics.
Good understanding of trading terminology and concepts (incl. financial derivatives), gained from experience working in a trading or finance environment.
Required Skills
Expert in core Python with the Python scientific stack/ecosystem (incl. pandas, NumPy, SciPy, stats), and a second strongly typed language (e.g., C#, C++, Rust, or Java).
Expert in application design, security, release, testing, and packaging.
Mastery of SQL/NoSQL databases and data pipeline orchestration tools.
Mastery of concurrent/distributed programming and performance optimisation methods.
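The finance concepts the posting names (greeks, volatility, the normal distribution, partial derivatives) come together in, for example, the Black-Scholes formula. The sketch below uses only the Python standard library; it is an illustration of the background knowledge expected, not code from the role, and a desk system would use a vetted quant library rather than a hand-rolled pricer.

```python
import math

# Black-Scholes price and delta for a European call option.
# Delta is the partial derivative of price with respect to spot,
# which for a call equals N(d1) under this model.

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, rate, vol, t):
    """Return (price, delta) of a European call under Black-Scholes."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol**2) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    price = spot * norm_cdf(d1) - strike * math.exp(-rate * t) * norm_cdf(d2)
    return price, norm_cdf(d1)

# At-the-money call: spot = strike = 100, 5% rate, 20% vol, 1 year.
price, delta = bs_call(spot=100, strike=100, rate=0.05, vol=0.2, t=1.0)
```

Using `math.erf` for the normal CDF keeps the example dependency-free; in practice `scipy.stats.norm.cdf` would be the idiomatic choice.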
Posted 4 days ago
3.0 years
0 Lacs
India
Remote
Job Title: AI Engineer – Web Crawling & Field Data Extraction
Location: [Remote]
Department: Engineering / Data Science
Experience Level: Mid to Senior
Employment Type: Contract to Hire

About the Role:
We are looking for a skilled AI Engineer with strong experience in web crawling, data parsing, and AI/ML-driven information extraction to join our team. You will be responsible for developing systems that automatically crawl websites, extract structured and unstructured data, and intelligently map the extracted content to predefined fields for business use. This role combines practical web scraping, NLP techniques, and AI model integration to automate workflows that involve large-scale content ingestion.

Key Responsibilities:
Design and develop automated web crawlers and scrapers to extract information from various websites and online resources.
Implement robust and scalable data extraction pipelines that convert semi-structured/unstructured data into structured field-level data.
Use Natural Language Processing (NLP) and ML models to intelligently interpret and map extracted content to specific form fields or schemas.
Build systems that can handle dynamic web content, captchas, JavaScript-rendered pages, and anti-bot mechanisms.
Collaborate with frontend/backend teams to integrate extracted data into user-facing applications.
Monitor crawler performance, ensure compliance with legal/data policies, and manage scheduling, deduplication, and logging.
Optimize crawling strategies using AI/heuristics for prioritization, entity recognition, and data validation.
Create tools for auto-filling forms or generating structured records from crawled data.

Required Skills and Qualifications:
Bachelor’s or Master’s degree in Computer Science, AI/ML, Data Science, or related field.
3+ years of hands-on experience with web scraping frameworks (e.g., Scrapy, Puppeteer, Playwright, Selenium).
Proficiency in Python, with experience in BeautifulSoup, lxml, requests, aiohttp, or similar libraries. Experience with NLP libraries (e.g., spaCy, NLTK, Hugging Face Transformers) to parse and map extracted data. Familiarity with ML-based data classification, extraction, and field mapping. Knowledge of structured data formats (JSON, XML, CSV) and RESTful APIs. Experience handling anti-scraping techniques and rate-limiting controls. Strong problem-solving skills, clean coding practices, and the ability to work independently. Nice-to-Have Experience with AI form understanding (e.g., LayoutLM, DocAI, OCR). Familiarity with Large Language Models (LLMs) for intelligent data labeling or validation. Exposure to data pipelines, ETL frameworks, or orchestration tools (Airflow, Prefect). Understanding of data privacy, compliance, and ethical crawling standards. Why Join Us? Work on cutting-edge AI applications in real-world automation. Be part of a fast-growing and collaborative team. Opportunity to lead and shape intelligent data ingestion solutions from the ground up.
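The core task this role describes (mapping semi-structured HTML to field-level records) can be sketched with just the standard library. The class name `FieldExtractor`, the schema, and the sample HTML are illustrative assumptions; a production crawler would use Scrapy or BeautifulSoup and NLP models for fuzzier field mapping.

```python
from html.parser import HTMLParser

# Sketch of field-level extraction: map elements whose class attribute
# matches a schema entry onto named record fields.

class FieldExtractor(HTMLParser):
    """Collect text of elements whose class maps to a schema field."""

    def __init__(self, schema):
        super().__init__()
        self.schema = schema      # css class -> field name
        self.record = {}
        self._current = None      # field awaiting its text content

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if cls in self.schema:
            self._current = self.schema[cls]

    def handle_data(self, data):
        if self._current:
            self.record[self._current] = data.strip()
            self._current = None

html = '<div class="job-title">AI Engineer</div><span class="loc">Remote</span>'
parser = FieldExtractor({"job-title": "title", "loc": "location"})
parser.feed(html)
```

Real pages are messier, of course: the NLP/ML requirements above exist precisely because field names rarely line up this cleanly with CSS classes.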
Posted 4 days ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About PhonePe Limited:
Headquartered in India, its flagship product, the PhonePe digital payments app, was launched in Aug 2016. As of April 2025, PhonePe has over 60 Crore (600 Million) registered users and a digital payments acceptance network spread across over 4 Crore (40+ Million) merchants. PhonePe also processes over 33 Crore (330+ Million) transactions daily with an Annualized Total Payment Value (TPV) of over INR 150 lakh crore. PhonePe's portfolio of businesses includes the distribution of financial products (Insurance, Lending, and Wealth) as well as new consumer tech businesses (Pincode - hyperlocal e-commerce, and Indus AppStore - a localized app store for the Android ecosystem) in India, which are aligned with the company's vision to offer every Indian an equal opportunity to accelerate their progress by unlocking the flow of money and access to services.

Culture:
At PhonePe, we go the extra mile to make sure you can bring your best self to work, every day! And that starts with creating the right environment for you. We empower people and trust them to do the right thing. Here, you own your work from start to finish, right from day one. PhonePe-rs solve complex problems and execute quickly, often building frameworks from scratch. If you're excited by the idea of building platforms that touch millions, ideating with some of the best minds in the country, and executing on your dreams with purpose and speed, join us!

Overview:
We are seeking a dynamic and results-oriented marketer to spearhead the brand marketing strategy for a high-priority vertical within our organization. As the Marketing Lead, you will be the brand champion, responsible for shaping and executing strategies that drive brand awareness, customer acquisition, and overall business growth. This is a high-impact role that requires a blend of strategic thinking, creative flair, and analytical prowess.
You will work cross-functionally, collaborating with teams across Product, Business, Finance, Analytics, Tech, and Marketing to achieve shared goals.

Responsibilities

Leadership & Strategic Vision:
Work with business heads and peers to understand business objectives and translate them into a marketing strategy for the various cross-functional teams.
Influence peers and rally marketing teams to deliver on a common strategy and goal.
Develop a compelling brand vision and strategy that aligns with the overall business objectives.
Define the brand's positioning, messaging, and personality, ensuring consistency across all touchpoints.
Cultivate and maintain a distinctive and compelling brand voice and personality that resonates with our target audience.

Brand Elevation & Campaign Orchestration:
Elevate brand awareness, salience, and affinity to fuel customer growth and engagement across all touchpoints.
Conceptualize and execute impactful 360-degree campaigns that leverage the power of diverse channels, including out-of-home, print, radio, catchment-area marketing, social, digital, owned channels, and PR.
Work with digital and mobile advertising teams to amplify awareness and drive adoption.
Manage campaigns from initial agency briefs to creative development, media planning, execution, and measurement, ensuring seamless integration and optimal results.

Synergy & Optimization:
Forge strong partnerships with cross-functional teams (offline, customer acquisition, customer engagement, product, PR, legal, finance) and external partners (marketing agencies, 3Ps, affiliates) to achieve shared objectives.
Ensure brand consistency and integrity across all marketing assets and touchpoints.
Lead brand research and tracking studies in collaboration with Market Research teams and agencies to glean actionable insights.
Set, monitor, and optimize campaign goals and performance metrics to maximize ROI.
Regularly present updates and progress reports to senior leadership, showcasing the impact of brand initiatives on business outcomes.

Financial Stewardship:
Manage and optimize marketing communication and traditional, digital, and social marketing budgets to ensure maximum efficiency and effectiveness.
Oversee and manage the performance of marketing agencies, holding them accountable for delivering exceptional results.

Agility & Results Orientation:
Stay ahead of the curve by keeping abreast of the latest tools, trends, and best practices; test and assess their effectiveness to identify opportunities for innovation.
Thrive in a fast-paced, deadline-driven environment, effectively multitasking while maintaining quality and attention to detail.
Deliver exceptional results with a strong bias for action in an ambiguous environment, demonstrating a proven ability to navigate complexity and uncertainty.

Ideal Candidate Requirements

Must Have:
Demonstrated capabilities in fintech/consumer tech industries, having seen 0-to-1 scale journeys, and a very sharp understanding of business and product and how that has translated into marketing strategies.
10+ years of brand building and large-scale campaign/media management experience in consumer tech/financial services/internet brands.
6+ years of team building and management experience.
Fluency in traditional and digital advertising media, social media, content partnerships & sponsorships.
Proven commercial acumen, with experience managing large-scale campaign and media budgets.
Customer-centric approach.
A strong brand vision and the ability to translate it into an actionable yearly roadmap, rallying internal and external stakeholders to deliver on it.
Deep understanding of research and how to leverage insights into creative strategy.

Critical Requirements:
Customer-Centric Approach: A genuine passion for understanding and meeting customer needs.
Entrepreneurial Spirit: A self-starter with a bias for action.
Ability to take initiative and work independently.

Key Skills:
Brand strategy, brand management, marketing campaign development and execution, digital marketing, mobile marketing, customer acquisition, customer engagement, market research, data analysis, cross-functional leadership, communication, creativity, problem-solving.

Additional Desirable Skills:
Experience in the relevant industry vertical, experience with brand tracking and measurement tools, knowledge of marketing automation platforms.

PhonePe Full Time Employee Benefits (not applicable for intern or contract roles):
Insurance Benefits - Medical Insurance, Critical Illness Insurance, Accidental Insurance, Life Insurance
Wellness Program - Employee Assistance Program, Onsite Medical Center, Emergency Support System
Parental Support - Maternity Benefit, Paternity Benefit Program, Adoption Assistance Program, Day-care Support Program
Mobility Benefits - Relocation benefits, Transfer Support Policy, Travel Policy
Retirement Benefits - Employee PF Contribution, Flexible PF Contribution, Gratuity, NPS, Leave Encashment
Other Benefits - Higher Education Assistance, Car Lease, Salary Advance Policy

Our inclusive culture promotes individual expression, creativity, innovation, and achievement, and in turn helps us better understand and serve our customers. We see ourselves as a place for intellectual curiosity, ideas, and debates, where diverse perspectives lead to deeper understanding and better-quality results. PhonePe is an equal opportunity employer and is committed to treating all its employees and job applicants equally, regardless of gender, sexual preference, religion, race, color, or disability. If you have a disability or special need that requires assistance or reasonable accommodation during the application and hiring process, including support for the interview or onboarding process, please fill out this form.

Read more about PhonePe on our blog.
Life at PhonePe
PhonePe in the news
Posted 4 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
🌟 We're Hiring: Artificial Intelligence (AI) Consultant! 🌟

We are seeking an experienced AI Consultant to drive innovative artificial intelligence solutions and guide organizations through their digital transformation journey. The ideal candidate will have deep expertise in AI technologies, machine learning algorithms, and strategic implementation to deliver impactful business solutions.

📍 Location: Chennai, India
⏰ Work Mode: Flexible office & remote
💼 Role: Artificial Intelligence (AI) Consultant

What You'll Do: Roles and Responsibilities
AI Engagements: Independently manage end-to-end delivery of AI-led transformation projects across industries, ensuring value realization and high client satisfaction.
Strategic Consulting & Roadmapping: Identify key enterprise challenges and translate them into AI solution opportunities, crafting transformation roadmaps that leverage RAG, LLMs, and intelligent agent frameworks.
LLM/RAG Solution Design & Implementation: Architect and deliver cutting-edge AI systems using Python, LangChain, LlamaIndex, OpenAI function calling, semantic search, and vector store integrations (FAISS, Qdrant, Pinecone, ChromaDB).
Agentic Systems: Design and deploy multi-step agent workflows using frameworks like CrewAI, LangGraph, AutoGen, or ReAct, optimizing tool-augmented reasoning pipelines.
Client Engagement & Advisory: Build lasting client relationships as a trusted AI advisor, delivering technical insight and strategic direction on generative AI initiatives.
Hands-on Prototyping: Rapidly prototype PoCs using Python and modern ML/LLM stacks to demonstrate feasibility and business impact.
Thought Leadership: Conduct market research, stay updated with the latest in GenAI and RAG/agentic systems, and contribute to whitepapers, blogs, and new offerings.

Essential Skills
Leadership Quality: Proven track record in leading cross-functional teams and delivering enterprise-grade AI projects with tangible business impact.
Business Consulting Mindset: Strong problem-solving, stakeholder communication, and business analysis skills to bridge technical and business domains.
Python & AI Proficiency: Advanced proficiency in Python and popular AI/ML libraries (e.g., scikit-learn, PyTorch, TensorFlow, spaCy, NLTK). Solid understanding of NLP, embeddings, semantic search, and transformer models.
LLM Ecosystem Fluency: Experience with OpenAI, Cohere, and Hugging Face models; prompt engineering; tool/function calling; and structured task orchestration.
Independent Contributor: Ability to own initiatives end-to-end, take decisions independently, and operate in fast-paced environments.

Education: Bachelor's or Master’s in Computer Science, AI, Engineering, or a related field.
Experience: Minimum 5 years of experience in consulting or technology roles, with at least 3 years focused on AI & ML solutions.

Preferred Skills
Cloud Platform Expertise: Strong familiarity with Microsoft Azure (preferred), AWS, or GCP, including compute instances, storage, managed services, and serverless/cloud-native deployment models.
Programming Paradigms: Hands-on experience with both functional and object-oriented programming in AI system design.
Hugging Face Ecosystem: Proficiency in using Hugging Face Transformers, Datasets, and the Model Hub.
Vector Store Experience: Hands-on experience with FAISS, Qdrant, Pinecone, ChromaDB.
LangChain Expertise: Strong proficiency in LangChain for agentic task orchestration and RAG pipelines.
MLOps & Deployment: CI/CD for ML pipelines, MLOps tools (MLflow, Azure ML), containerization (Docker/Kubernetes).
Cloud & Service Architecture: Knowledge of microservices, scaling strategies, and inter-service communication.
Programming Languages: Proficiency in Python and C# for enterprise-grade AI solution development.

Additional Skills
Excellent customer interfacing and stakeholder engagement skills.
Strong verbal and written communication in business and technical contexts.
High attention to detail and structured problem-solving. Passion for AI ethics, safety, performance, and optimization in enterprise-grade systems.
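The semantic-search and vector-store work this role describes rests on nearest-neighbour retrieval over embeddings. As a minimal sketch of just the ranking step (toy 3-dimensional vectors stand in for real model embeddings, and plain NumPy stands in for a vector store such as FAISS or Qdrant; the function name `top_k` is hypothetical):

```python
import numpy as np

def top_k(query, doc_vectors, k=2):
    """Rank documents by cosine similarity to the query vector."""
    q = query / np.linalg.norm(query)
    d = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity per document
    return np.argsort(scores)[::-1][:k]  # indices of the k best matches

# Toy embeddings; in practice these come from a transformer model.
docs = np.array([
    [0.90, 0.10, 0.00],   # doc 0
    [0.00, 1.00, 0.10],   # doc 1
    [0.85, 0.20, 0.05],   # doc 2
])
query = np.array([1.0, 0.0, 0.0])
print(top_k(query, docs, k=2))  # docs 0 and 2 point in nearly the same direction
```

Production vector stores add approximate-nearest-neighbour indexing so this ranking stays fast at millions of documents, but the similarity computation is the same idea.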
Posted 4 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Position: We are seeking a seasoned Senior Data Architect with deep expertise in Databricks and Microsoft Fabric. This role involves leading the design and implementation of scalable data solutions for BFSI and HLS clients.
Role: Senior Data Architect (Databricks & Microsoft Fabric)
Location: All PSL Locations
Experience: 10-18 Years
Job Type: Full Time Employment
What You'll Do:
Architect and implement scalable, secure, and high-performance data solutions on Databricks and Microsoft Fabric.
Lead discovery workshops to understand business challenges, data requirements, and current technology ecosystems.
Design end-to-end data pipelines, ensuring seamless integration with enterprise systems leveraging Databricks and Microsoft Fabric.
Optimize Databricks and Fabric workloads for performance and cost efficiency.
Provide solutions that address architectural concerns such as data governance, master data management, metadata management, data quality management, and data security and privacy policies and procedures.
Optimize solutions for cost efficiency, performance, and reliability.
Lead technical engagements, collaborating with client stakeholders and internal teams.
Establish and enforce governance, security, and compliance standards within Databricks and Fabric.
Guide teams in implementing best practices on Databricks and Microsoft Azure.
Keep abreast of the latest developments in the industry; evaluate and recommend new and emerging data architectures/patterns, technologies, and standards.
Act as a subject matter expert (SME) for Databricks and Microsoft Fabric within the organization and for clients.
Deliver and direct pre-sales engagements to prove functional capabilities (POCs or POVs).
Develop and deliver workshops, webinars, and technical presentations on Databricks and Fabric capabilities.
Create white papers, case studies, and reusable artifacts to showcase our company's Databricks value proposition.
Build strong relationships with Databricks partnership teams, including their product managers and solution architects, contributing to co-marketing and joint go-to-market strategies.
Business Development Support: Collaborate with sales and pre-sales teams to provide technical guidance during RFP responses and solutioning. Identify upsell and cross-sell opportunities within existing accounts by showcasing the potential of Databricks and BI for extended use cases.
Expertise You'll Bring:
Minimum of 10 years of experience in data architecture, engineering, or analytics roles, with at least 5 years of hands-on experience with Databricks and 1 year with Microsoft Fabric.
Proven track record of designing and implementing large-scale data solutions across industries.
Experience working in consulting or client-facing roles, particularly with enterprise customers.
Deep understanding of modern data architecture principles, including cloud platforms (AWS, Azure, GCP).
Deep expertise in modern data architectures, lakehouse principles, and AI-driven analytics.
Strong hands-on experience with Databricks core components, including Delta Lake, Apache Spark, MLflow, Unity Catalog, and Databricks Workflows.
Understanding of cloud-native services for data ingestion, transformation, and orchestration (e.g., AWS Glue, Azure Data Factory, GCP Dataflow).
Exceptional communication and presentation skills, capable of explaining technical concepts to non-technical stakeholders.
Strong interpersonal skills to foster collaboration with diverse teams.
A self-starter with a growth mindset and the ability to adapt in a fast-paced environment.
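The lakehouse principles listed above are commonly organized as "medallion" layers: raw bronze data is validated into silver tables and aggregated into gold tables. In a real Databricks workload each layer is a Delta Lake table written by a Spark job; the following is only a dependency-free conceptual sketch, with plain Python dicts standing in for tables and hypothetical record fields:

```python
# Toy medallion pipeline: bronze (raw) -> silver (cleaned) -> gold (aggregated).
bronze = [  # raw events, possibly dirty
    {"account": "A1", "amount": "100.0"},
    {"account": "A1", "amount": "100.0"},   # duplicate record
    {"account": "A2", "amount": None},      # invalid record
    {"account": "A2", "amount": "250.5"},
]

def to_silver(rows):
    """Validate types and drop duplicate or invalid records."""
    seen, out = set(), []
    for r in rows:
        if r["amount"] is None:
            continue
        key = (r["account"], r["amount"])
        if key in seen:
            continue
        seen.add(key)
        out.append({"account": r["account"], "amount": float(r["amount"])})
    return out

def to_gold(rows):
    """Aggregate per-account totals for reporting."""
    totals = {}
    for r in rows:
        totals[r["account"]] = totals.get(r["account"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'A1': 100.0, 'A2': 250.5}
```

The same shape scales up directly: each function becomes a Spark transformation, each list a Delta table, and the sequence a Databricks Workflow.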
Certifications (preferred):
Databricks Advanced Certification
Databricks Certified Data Engineer Professional
Certifications in cloud platforms such as:
AWS Certified Data Analytics - Specialty
Microsoft Certified: Azure Data Engineer
Benefits:
Competitive salary and benefits package
Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
Opportunity to work with cutting-edge technologies
Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
Annual health check-ups
Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents
Inclusive Environment:
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.
We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.
Our company fosters a values-driven and people-centric work environment that enables our employees to:
Accelerate growth, both professionally and personally
Impact the world in powerful, positive ways, using the latest technologies
Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
Unlock global opportunities to work and learn with the industry's best
Let's unleash your full potential at Persistent.
"Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
Posted 4 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: SAP
Management Level: Senior Associate
Job Description & Summary:
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.
Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
" Responsibilities Design, develop, and maintain cloud-based applications using Java and full-stack technologies, primarily on AWS, with the flexibility to leverage Azure services when necessary. Implement scalable, reliable, and secure cloud infrastructure to support application deployment and operations. Collaborate with cross-functional teams to define, design, and deliver new features and enhancements. Develop RESTful APIs and integrate with cloud services for seamless interaction between front-end and back-end systems. Ensure high performance and responsiveness of applications through effective optimization and troubleshooting. Automate deployment and monitoring processes using tools like Terraform, AWS CloudFormation, or Azure Resource Manager. Implement security best practices and compliance measures within cloud environments. Stay up-to-date with the latest cloud and development technologies to drive continuous improvement. Qualifications Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent experience. 3-8 years of experience in cloud engineering and Java full-stack development, with significant exposure to AWS services and some experience with Azure. Proficiency in Java and related frameworks such as Spring Boot. Experience with front-end technologies such as Angular, React, or Vue.js. Strong expertise in AWS services such as EC2, S3, RDS, Lambda, and CloudWatch. Familiarity with Azure services such as Azure App Services, Azure Functions, and Azure SQL Database. Experience with infrastructure-as-code tools like Terraform, AWS CloudFormation, or Azure Resource Manager. Excellent problem-solving skills and a detail-oriented approach. Strong communication skills and the ability to work collaboratively in a team environment. Preferred Qualifications AWS Certified Solutions Architect or Azure Solutions Architect certification. Experience with containerization and orchestration tools such as Docker and Kubernetes. 
Knowledge of DevOps practices and CI/CD pipeline tools. Experience working in Agile environments and familiarity with Agile methodologies. Mandatory Skill Sets- Java, react or angular , AWS Preferred Skill Sets- AZURE GCP Years Of Experience Required- 4-6 Years Education Qualifications- Btech MCA Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Java Full Stack Development Optional Skills Acceptance Test Driven Development (ATDD), Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Analytical Thinking, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Creativity {+ 46 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date
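The infrastructure-as-code tools this posting names (Terraform, AWS CloudFormation) describe resources as declarative templates rather than imperative scripts. As a minimal sketch of the idea, a CloudFormation template declaring a single S3 bucket can be built and serialized from Python (the bucket name and logical ID are hypothetical; only the standard `AWS::S3::Bucket` resource type is assumed):

```python
import json

# Minimal CloudFormation template declaring one S3 bucket.
# "example-app-artifacts" is a made-up bucket name for illustration.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-app-artifacts"},
        }
    },
}

# Serialize for deployment with the AWS CLI or a CI/CD pipeline step.
print(json.dumps(template, indent=2))
```

Because the template is data, the same file can be reviewed in a pull request, diffed against the deployed stack, and re-applied idempotently, which is the core advantage over hand-run provisioning scripts.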
Posted 4 days ago