3.0 years
4 Lacs
Gurgaon
On-site
Role: Programmatic Ad Operations Manager
Location: Gurgaon
Experience: 3+ Years
CTC: Up to ₹40,000 per month

We're looking for an Ad Operations Manager specializing in OpenRTB and programmatic platforms. This pivotal role focuses on optimizing revenue through strategic management of demand and supply partner integrations, particularly within the Connected TV (CTV) ecosystem.

Role Responsibilities:
- Programmatic Integration: Set up and optimize OpenRTB integrations with DSPs and SSPs.
- Ad Server Management: Oversee ad serving operations using Limelight or similar platforms.
- Troubleshooting: Diagnose and resolve ad delivery, latency, and revenue discrepancies.
- CTV & Header Bidding: Manage VAST, Prebid, and CTV monetization pipelines for efficient ad delivery.
- Collaboration: Partner with product and engineering teams for seamless ad tech solution implementation.

Requirements:
- Proven expertise in OpenRTB and programmatic platforms.
- Deep understanding of the CTV ecosystem and header bidding.
- Experience with demand-side integrations and supply path optimization (SPO).
- Proficiency with ad servers, tag management systems, and analytics tools.
- Strong analytical and problem-solving skills.

Apply Now!
Job Type: Full-time
Pay: Up to ₹40,000.00 per month
Work Location: In person
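For context on what an OpenRTB integration actually exchanges, the sketch below assembles a minimal bid request for a CTV video impression in Python. Field names follow the OpenRTB 2.x spec, but the sample values (app bundle, dimensions, floor price) are hypothetical placeholders and not taken from this posting.

```python
import json
import uuid

def build_ctv_bid_request(app_bundle: str, floor_price: float) -> dict:
    """Assemble a minimal OpenRTB 2.x bid request for a single CTV video slot.

    Field names follow the OpenRTB 2.x spec; all values here are illustrative.
    """
    return {
        "id": str(uuid.uuid4()),          # unique auction ID
        "imp": [{
            "id": "1",
            "video": {
                "mimes": ["video/mp4"],   # creative formats the player can render
                "minduration": 15,
                "maxduration": 30,
                "protocols": [2, 3, 6],   # VAST 2.0 / 3.0 / 3.0 Wrapper
                "w": 1920,
                "h": 1080,
            },
            "bidfloor": floor_price,
            "bidfloorcur": "USD",
        }],
        "app": {"bundle": app_bundle, "name": "example-ctv-app"},
        "device": {"devicetype": 3, "ua": "SmartTV/1.0"},  # 3 = Connected TV
        "tmax": 300,                      # max milliseconds for bidders to respond
    }

if __name__ == "__main__":
    print(json.dumps(build_ctv_bid_request("com.example.ctv", 5.0), indent=2))
```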
Posted 2 days ago
9.0 - 13.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting – AI Enabled Automation – GenAI/Agentic – Manager

We are looking to hire people with strong AI-enabled automation skills who are interested in applying AI in the process automation space: Azure, AI, ML, Deep Learning, NLP, GenAI, Large Language Models (LLMs), RAG, Vector DB, Graph DB, Python.

Responsibilities:
- Development and implementation of AI-enabled automation solutions, ensuring alignment with business objectives.
- Design and deploy Proofs of Concept (POCs) and Points of View (POVs) across various industry verticals, demonstrating the potential of AI-enabled automation applications.
- Ensure seamless integration of optimized solutions into the overall product or system.
- Collaborate with cross-functional teams to understand requirements, integrate solutions into cloud environments (Azure, GCP, AWS, etc.), and ensure alignment with business goals and user needs.
- Educate the team on best practices and keep up to date on the latest tech advancements to bring innovative solutions to the project.

Technical Skills Requirements:
- 9 to 13 years of relevant professional experience.
- Proficiency in Python and frameworks like PyTorch, TensorFlow, and Hugging Face Transformers.
- Strong foundation in ML algorithms, feature engineering, and model evaluation (must-have).
- Strong foundation in Deep Learning, Neural Networks, RNNs, CNNs, LSTMs, Transformers (BERT, GPT), and NLP (must-have).
- Experience in GenAI technologies: LLMs (GPT, Claude, LLaMA), prompting, fine-tuning.
- Experience with agentic frameworks such as LangChain, LlamaIndex, LangGraph, AutoGen, or CrewAI.
- Knowledge of retrieval-augmented generation (RAG) and Knowledge Graph RAG.
- Experience with multi-agent orchestration, memory, and tool integrations.
- Experience implementing MLOps practices and tools (CI/CD for ML, containerization, orchestration, model versioning and reproducibility) (good to have).
- Experience with cloud platforms (AWS, Azure, GCP) for scalable ML model deployment.
- Good understanding of data pipelines, APIs, and distributed systems.
- Ability to build observability into AI systems: latency, drift, and performance metrics.
- Strong written and verbal communication, presentation, client service and technical writing skills in English for both technical and business audiences.
- Strong analytical, problem-solving and critical thinking skills.
- Ability to work under tight timelines for multiple project deliveries.

What we offer: At EY GDS, we support you in achieving your unique potential both personally and professionally. We give you stretching and rewarding experiences that keep you motivated, working in an atmosphere of integrity and teaming with some of the world's most successful companies. And while we encourage you to take personal responsibility for your career, we support you in your professional development in every way we can.
You enjoy the flexibility to devote time to what matters to you, in your business and personal lives. At EY you can be who you are and express your point of view, energy and enthusiasm, wherever you are in the world. It's how you make a difference. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
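The retrieval-augmented generation (RAG) skills listed in this posting boil down to a simple loop: embed documents, retrieve the nearest chunks for a query, and pass them to an LLM as context. A minimal sketch follows; it assumes the sentence-transformers package is available and uses a placeholder generate_answer function standing in for whichever LLM API a real project would call.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed available

# Toy document store; in practice chunks would come from a vector DB.
DOCS = [
    "Invoices above 10,000 EUR require a second approval.",
    "Purchase orders are matched to invoices before payment.",
    "Expense reports are reimbursed within 14 days.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(DOCS, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q          # dot product == cosine on normalized vectors
    top = np.argsort(scores)[::-1][:k]
    return [DOCS[i] for i in top]

def generate_answer(query: str) -> str:
    """Placeholder for the generation step; swap in a real LLM call."""
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return prompt  # a real pipeline would send this prompt to an LLM

if __name__ == "__main__":
    print(generate_answer("When do invoices need extra approval?"))
```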
Posted 2 days ago
7.0 years
8 - 10 Lacs
Gurgaon
On-site
Additional Locations: India-Haryana, Gurgaon

Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance

At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we'll help you in advancing your skills and career. Here, you'll be supported in progressing – whatever your ambitions.

Software Engineer - MLOps

We are seeking an enthusiastic and detail-oriented MLOps Engineer to support the development, deployment, and monitoring of machine learning models in production environments. This is a hands-on role ideal for candidates looking to grow their skills at the intersection of data science, software engineering, and DevOps. You will work closely with senior MLOps engineers, data scientists, and software developers to build scalable, reliable, and automated ML workflows across cloud platforms like AWS and Azure.

Key Responsibilities:
- Assist in building and maintaining ML pipelines for data preparation, training, testing, and deployment.
- Support the automation of model lifecycle tasks, including versioning, packaging, and monitoring.
- Build and manage ML workloads on AWS (SageMaker Unified Studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage).
- Assist with containerizing ML models using Docker and deploying them using Kubernetes or cloud-native orchestrators.
- Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation.
- Participate in implementing CI/CD pipelines for ML workflows using GitHub Actions, Azure DevOps, or Jenkins.
- Contribute to testing frameworks for ML models and data validation (e.g., pytest, Great Expectations).
- Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation.
- Participate in diagnosing issues related to model accuracy, latency, or infrastructure bottlenecks.
- Continuously improve knowledge of MLOps tools, ML frameworks, and cloud practices.

Required Qualifications:
- Bachelor's/Master's in Computer Science, Engineering, or a related discipline.
- 7 years in DevOps, with 2+ years in MLOps.
- Good understanding of MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git.
- Proficient in Python and familiar with bash scripting.
- Exposure to MLOps platforms or tools such as SageMaker Studio, Azure ML, or GCP Vertex AI.

Requisition ID: 610751

As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn't just business, it's personal. And if you're a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
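Since this posting centers on tracking and packaging models across the ML lifecycle, here is a minimal, hypothetical sketch of the kind of experiment-tracking step such a pipeline might include, using MLflow and scikit-learn; the dataset, parameters, and experiment name are placeholders.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder experiment name; a real pipeline would take this from config.
mlflow.set_experiment("demo-classifier")

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 4}
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics, and the serialized model for later versioning.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
    print(f"accuracy={acc:.3f}")
```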
Posted 2 days ago
3.0 years
1 - 5 Lacs
Chennai
On-site
DESCRIPTION
Join the team that brought you the Echo Show, a touch-screen enabled Alexa device that supports video calling, music, weather, and more! Our Echo Software team works on not only the Echo Show but other high-profile consumer electronics products, including Fire TV and the Echo family of devices. We'd love to have you join us to bring innovative experiences to millions of customers. As a software engineer in the Alexa Endpoint Experiences team, you will help drive innovation by developing features that present innovative speech-backed visuals on our Echo Show, Echo Spot, and similar screened Alexa devices. Specifically, you will be responsible for developing and maintaining multimedia experiences, partnering with other Amazon and third-party services and device teams as required. You will ensure that Alexa almost never drops her end of the conversation, delivering services that respond with minimal latency at Amazon scale. You may work on device frameworks, web services, and more. If being at the forefront of new innovation for Alexa sounds exciting, we'd love to talk to you!

Key job responsibilities
1. Own the high-level design and development of multimedia experiences for the Echo family of devices (including new device launches) with Alexa.
2. Deliver high-quality software by working in a dynamic, team-focused Agile/Scrum environment.
3. Mentor and train junior engineers in the team to build quality software.
4. Collaborate with UX and product owners to define customer experience and product direction.

A day in the life
1. Coding in C++/Java/Android
2. Code reviews
3. Design documents - authoring and review
4. Working with partner teams to plan out new features
5. Work Hard, Have Fun and Make History

BASIC QUALIFICATIONS
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship experience in design or architecture (design patterns, reliability and scaling) of new and existing systems
- Experience programming with at least one software programming language

PREFERRED QUALIFICATIONS
- 3+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Job details: IND, TN, Chennai | Alexa and Echo Devices | Software Development
Posted 2 days ago
0 years
3 - 6 Lacs
India
On-site
Job Summary:
We are seeking a highly skilled Java Spring Boot Developer to join our development team. The ideal candidate will be responsible for designing and developing high-volume, low-latency applications for mission-critical systems, delivering high availability and performance.

Key Responsibilities:
- Develop and maintain Java-based web applications using Spring Boot.
- Design and implement RESTful APIs and microservices.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Write well-designed, efficient, and testable code.
- Participate in code reviews, unit testing, and deployment processes.
- Troubleshoot and resolve application issues and bugs.
- Ensure code quality and maintainability using industry-standard practices.
- Work with DevOps for CI/CD and cloud deployment.

Technical Skillset:
- Strong proficiency in Java (8+)
- Hands-on experience with Spring Boot
- Solid understanding and experience in Hibernate / JPA
- Good knowledge of REST APIs, MySQL, and microservices architecture is a plus
- Hands-on experience with Apache Kafka (consumers/producers)

Job Types: Full-time, Permanent
Pay: ₹25,000.00 - ₹50,000.00 per month
Benefits: Cell phone reimbursement, health insurance, Provident Fund
Schedule: Day shift
Language: Hindi (Preferred)
Work Location: In person
Posted 2 days ago
4.0 years
4 - 8 Lacs
Noida
On-site
Position Overview:
We are looking for an experienced AI Engineer to design, build, and optimize AI-powered applications, leveraging both traditional machine learning and large language models (LLMs). The ideal candidate will have a strong foundation in LLM fine-tuning, inference optimization, backend development, and MLOps, with the ability to deploy scalable AI systems in production environments.

ShyftLabs is a leading data and AI company, helping enterprises unlock value through AI-driven products and solutions. We specialize in data platforms, machine learning models, and AI-powered automation, offering consulting, prototyping, solution delivery, and platform scaling. Our Fortune 500 clients rely on us to transform their data into actionable insights.

Key Responsibilities:
- Design and implement traditional ML and LLM-based systems and applications.
- Optimize model inference for performance and cost-efficiency.
- Fine-tune foundation models using methods like LoRA, QLoRA, and adapter layers.
- Develop and apply prompt engineering strategies, including few-shot learning, chain-of-thought, and RAG.
- Build robust backend infrastructure to support AI-driven applications.
- Implement and manage MLOps pipelines for full AI lifecycle management.
- Design systems for continuous monitoring and evaluation of ML and LLM models.
- Create automated testing frameworks to ensure model quality and performance.

Basic Qualifications:
- Bachelor's degree in Computer Science, AI, Data Science, or a related field.
- 4+ years of experience in AI/ML engineering, software development, or data-driven solutions.

LLM Expertise:
- Experience with parameter-efficient fine-tuning (LoRA, QLoRA, adapter layers).
- Understanding of inference optimization techniques: quantization, pruning, caching, and serving.
- Skilled in prompt engineering and design, including RAG techniques.
- Familiarity with AI evaluation frameworks and metrics.
- Experience designing automated evaluation and continuous monitoring systems.

Backend Engineering:
- Strong proficiency in Python and frameworks like FastAPI or Flask.
- Experience building RESTful APIs and real-time systems.
- Knowledge of vector databases and traditional databases.
- Hands-on experience with cloud platforms (AWS, GCP, Azure), focusing on ML services.

MLOps & Infrastructure:
- Familiarity with model serving tools (vLLM, SGLang, TensorRT).
- Experience with Docker and Kubernetes for deploying ML workloads.
- Ability to build monitoring systems for performance tracking and alerting.
- Experience building evaluation systems using custom metrics and benchmarks.
- Proficiency in CI/CD and automated deployment pipelines.
- Experience with orchestration tools like Airflow.
- Hands-on experience with LLM frameworks (Transformers, LangChain, LlamaIndex).
- Familiarity with LLM-specific monitoring tools and general ML monitoring systems.
- Experience with distributed training and inference in multi-GPU environments.
- Knowledge of model compression techniques like distillation and quantization.
- Experience deploying models for high-throughput, low-latency production use.
- Research background or strong awareness of the latest developments in LLMs.

Tools & Technologies We Use:
- Frameworks: PyTorch, TensorFlow, Hugging Face Transformers
- Serving: vLLM, TensorRT-LLM, SGLang, OpenAI API
- Infrastructure: Docker, Kubernetes, AWS, GCP
- Databases: PostgreSQL, Redis, vector databases

We are proud to offer a competitive salary alongside a strong healthcare insurance and benefits package.
We pride ourselves on the growth of our employees, offering extensive learning and development resources.
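The parameter-efficient fine-tuning (LoRA/QLoRA) called out in this posting can be sketched in a few lines with Hugging Face Transformers and PEFT; the base model, target modules, and hyperparameters below are illustrative assumptions, not project specifics.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small, openly available model used purely as a stand-in.
BASE_MODEL = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA: freeze the base weights and learn low-rank update matrices instead.
lora_config = LoraConfig(
    r=8,                          # rank of the update matrices
    lora_alpha=16,                # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],    # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Only the adapter parameters remain trainable, which is the point of PEFT.
model.print_trainable_parameters()

# From here a normal transformers.Trainer (or custom loop) would fine-tune the
# adapters on task-specific text and persist them with model.save_pretrained().
```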
Posted 2 days ago
3.0 years
20 - 29 Lacs
Ahmedabad
Remote
Full Time | Ahmedabad/GIFT City

The Site Reliability Engineer (SRE) position is a software development-oriented role, focusing heavily on coding, automation, and ensuring the stability and reliability of our global platform. The ideal candidate will primarily be a skilled software developer capable of participating in on-call rotations. The SRE team develops sophisticated telemetry and automation tools, proactively monitoring platform health and executing automated corrective actions. As guardians of the production environment, the SRE team leverages advanced telemetry to anticipate and mitigate issues, ensuring continuous platform stability.

What Will You Be Involved With?
- Develop and maintain advanced telemetry and automation tools for monitoring and managing global platform health.
- Actively participate in on-call rotations, swiftly diagnosing and resolving system issues and escalations from the customer support team (this is not a customer-facing role).
- Implement automated solutions for incident response, system optimization, and reliability improvement.
- Proactively identify potential system stability risks and implement preventive measures.

What Will You Bring to the Table?
Software Development:
- 3+ years of professional Python development experience.
- Strong grasp of Python object-oriented programming concepts and inheritance.
- Experience developing multi-threaded Python applications.
- 2+ years of experience using Terraform, with proficiency in creating modules and submodules from scratch.
- Proficiency in, or willingness to learn, Golang.
Operating Systems:
- Experience with Linux operating systems.
- Strong understanding of monitoring critical system health parameters.
Cloud:
- 3+ years of hands-on experience with AWS services including EC2, Lambda, CloudWatch, EKS, ELB, RDS, DynamoDB, and SQS.
- AWS Associate-level certification or higher preferred.
Networking:
- Basic understanding of network protocols and concepts: TCP/IP, DNS, HTTP, and load balancing.
Additional Qualifications (Preferred):
- Familiarity with trading systems and low-latency environments is advantageous but not required.

What We Bring to the Table
Compensation: ₹2,000,000 – ₹2,980,801 / year
We offer a comprehensive benefits package designed to support your well-being, growth, and work-life balance.
Health & Financial Security:
- Medical, Dental, and Vision coverage
- Group Life (GTL) and Group Income Protection (GIP) schemes
- Pension contributions
Time Off & Flexibility:
- Enjoy the best of both worlds: the energy and collaboration of in-person work, combined with the convenience and focus of remote days. This is a hybrid position requiring three days of in-office collaboration per week, with the flexibility to work remotely for the remaining two days. Our hybrid model is designed to balance individual flexibility with the benefits of in-person collaboration; it enhances team cohesion, spontaneous innovation, and hands-on mentorship opportunities, and strengthens our company culture.
- 25 days of Paid Time Off (PTO) per year, with the option to roll over unused days.
- One dedicated day per year for volunteering.
- Two professional development days per year to allow uninterrupted professional development.
- An additional PTO day added during milestone anniversary years.
- Robust paid holiday schedule with early dismissal.
- Generous parental leave for all parents (including adoptive parents).
Work-Life Support & Resources:
- Budget for tech accessories, including monitors, headphones, keyboards, and other office equipment.
- Milestone anniversary bonuses.
Wellness & Lifestyle Perks:
- Subsidy contributions toward gym memberships and health/wellness initiatives (including discounted healthcare premiums, healthy meal delivery programs, or smoking cessation support).

Our Culture:
Forward-thinking, culture-based organization with collaborative teams that promote diversity and inclusion.

Trading Technologies is a Software-as-a-Service (SaaS) technology platform provider to the global capital markets industry. The company's award-winning TT® platform connects to the world's major international exchanges and liquidity venues in listed derivatives alongside a growing number of asset classes, including fixed income and cryptocurrencies. The TT platform delivers advanced tools for trade execution and order management, market data solutions, analytics, trade surveillance, risk management, and infrastructure services to the world's leading sell-side institutions, buy-side firms, and exchanges. The company's blue-chip client base includes Tier 1 banks as well as brokers, money managers, hedge funds, proprietary traders, Commodity Trading Advisors (CTAs), commercial hedgers, and risk managers. These firms rely on the TT ecosystem to manage their end-to-end trading operations. In addition, exchanges utilize TT's technology to deliver innovative solutions to their market participants. TT also strategically partners with technology companies to make their complementary offerings available to Trading Technologies' global client base through the TT ecosystem.

Trading Technologies (TT) is an equal-opportunity employer. Equal employment has been, and continues to be, a required practice at the Company. Trading Technologies' practice of equal employment opportunity is to recruit, hire, train, promote, and base all employment decisions on ability rather than race, color, religion, national origin, sex/gender orientation, age, disability, sexual orientation, genetic information or any other protected status. Additionally, TT participates in the E-Verify Program for US offices.
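To make the telemetry-and-automation focus of this SRE role concrete, here is a small, hypothetical monitoring sketch using boto3 against CloudWatch; the instance ID, threshold, and alerting action are placeholder assumptions rather than anything specific to Trading Technologies' platform.

```python
from datetime import datetime, timedelta, timezone

import boto3  # AWS SDK for Python

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

# Placeholder instance and threshold; real tooling would load these from config.
INSTANCE_ID = "i-0123456789abcdef0"
CPU_THRESHOLD = 85.0

def average_cpu(instance_id: str, minutes: int = 15) -> float | None:
    """Return average CPUUtilization over the last `minutes`, if datapoints exist."""
    now = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - timedelta(minutes=minutes),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    points = resp.get("Datapoints", [])
    if not points:
        return None
    return sum(p["Average"] for p in points) / len(points)

if __name__ == "__main__":
    cpu = average_cpu(INSTANCE_ID)
    if cpu is not None and cpu > CPU_THRESHOLD:
        # Real automation would page on-call or trigger a remediation runbook here.
        print(f"ALERT: {INSTANCE_ID} average CPU {cpu:.1f}% exceeds {CPU_THRESHOLD}%")
    else:
        print(f"OK: average CPU = {cpu}")
```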
Posted 2 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are a certified Great Place to Work® and recognized as one of the best companies to work for in India. We have provided full-stack product development for 325+ startups across the globe, building products in the cloud-native, data engineering, B2B SaaS, IoT & Machine Learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products.

We are seeking a highly motivated Quality Assurance (QA) Engineer to join our team and play a critical role in ensuring the quality, performance, and reliability of our product. As a QA Engineer, you will be responsible for testing complex data pipelines, distributed systems, and real-time processing modules that form the backbone of our platform. You will collaborate closely with developers, product managers, and other stakeholders to deliver a robust and scalable product that meets the highest quality standards.

Requirements:
- Analyze technical and functional specifications of the Data Highway product to create comprehensive test strategies.
- Develop detailed test plans, test cases, and test scripts for functional, performance, and regression testing.
- Define testing criteria and acceptance standards for data pipelines, APIs, and distributed systems.
- Execute manual and automated tests for various components of the Data Highway, including data ingestion, processing, and output modules.
- Perform end-to-end testing of data pipelines to ensure accuracy, integrity, and scalability.
- Validate real-time and batch data processing flows to ensure performance and reliability.
- Identify, document, and track defects using tools like JIRA, providing clear and actionable descriptions for developers.
- Collaborate with development teams to debug issues, verify fixes, and prevent regression.
- Perform root cause analysis to identify underlying problems and recommend process improvements.
- Conduct performance testing to evaluate system behavior under various load conditions, including peak usage scenarios.
- Monitor key metrics such as throughput, latency, and resource utilization to identify bottlenecks and areas for optimization.
- Test APIs for functionality, reliability, and adherence to RESTful principles.
- Validate integrations with external systems and third-party services to ensure seamless data flow.
- Work closely with cross-functional teams, including developers, product managers, and DevOps, to align on requirements and testing priorities.
- Participate in Agile ceremonies such as sprint planning, daily stand-ups, and retrospectives to ensure smooth communication and collaboration.
- Provide regular updates on test progress, coverage, and quality metrics to stakeholders.
- Collaborate with automation engineers to identify critical test cases for automation.
- Use testing tools like Postman, JMeter, and Selenium for API, performance, and UI testing as required.
- Assist in maintaining and improving automated test frameworks for the Data Highway product.
- Validate data transformations, mappings, and consistency across data pipelines.
- Ensure the security of data in transit and at rest, testing for vulnerabilities and compliance with industry standards.
- Maintain detailed and up-to-date documentation for test plans, test cases, and defect reports.
- Contribute to user guides and knowledge bases to support product usage and troubleshooting.

Desired Skills & Experience:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent professional experience.
- 3+ years of experience as a Quality Assurance Engineer, preferably in testing data pipelines, distributed systems, or SaaS products.
- Strong understanding of data pipelines, ETL processes, and distributed systems testing.
- Experience with test management and defect-tracking tools like JIRA, TestRail, Zephyr.
- Proficiency in API testing using tools like Postman or SoapUI.
- Familiarity with SQL and database testing for data validation and consistency.
- Knowledge of performance testing tools like JMeter, LoadRunner, or similar.
- Experience with real-time data processing systems like Kafka or similar technologies.
- Familiarity with CI/CD pipelines and DevOps practices.
- Exposure to automation frameworks and scripting languages such as Python or JavaScript.
- Strong analytical and problem-solving skills with attention to detail.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Proactive and self-driven approach to identifying and resolving quality issues.

Benefits
Our Culture:
- We have an autonomous and empowered work culture encouraging individuals to take ownership and grow quickly.
- Flat hierarchy with fast decision-making and a startup-oriented "get things done" culture.
- A strong, fun & positive environment with regular celebrations of our success. We pride ourselves in creating an inclusive, diverse & authentic environment.

We want to hire smart, curious, and ambitious folks, so please reach out even if you do not have all of the requisite experience. We are looking for engineers with the potential to grow! At Velotio, we embrace diversity. Inclusion is a priority for us, and we are eager to foster an environment where everyone feels valued. We welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation.
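Since this role combines API testing with tools like Postman and automation in Python, a minimal pytest-plus-requests sketch of the kind of automated API check involved is shown below; the endpoint URL and expected fields are hypothetical, not part of the actual Data Highway product.

```python
import pytest
import requests

# Hypothetical endpoint used only to illustrate the test structure.
BASE_URL = "https://api.example.com"

@pytest.fixture(scope="session")
def session():
    s = requests.Session()
    s.headers.update({"Accept": "application/json"})
    yield s
    s.close()

def test_health_endpoint_returns_ok(session):
    resp = session.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"

def test_ingestion_record_schema(session):
    """Validate that an ingested record exposes the expected fields and types."""
    resp = session.get(f"{BASE_URL}/records/123", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    for field in ("id", "payload", "ingested_at"):
        assert field in body, f"missing field: {field}"
    assert isinstance(body["id"], int)
```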
Posted 2 days ago
9.0 - 13.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting – AI Enabled Automation – GenAI/Agentic – Manager

We are looking to hire people with strong AI-enabled automation skills who are interested in applying AI in the process automation space: Azure, AI, ML, Deep Learning, NLP, GenAI, Large Language Models (LLMs), RAG, Vector DB, Graph DB, Python.

Responsibilities:
- Development and implementation of AI-enabled automation solutions, ensuring alignment with business objectives.
- Design and deploy Proofs of Concept (POCs) and Points of View (POVs) across various industry verticals, demonstrating the potential of AI-enabled automation applications.
- Ensure seamless integration of optimized solutions into the overall product or system.
- Collaborate with cross-functional teams to understand requirements, integrate solutions into cloud environments (Azure, GCP, AWS, etc.), and ensure alignment with business goals and user needs.
- Educate the team on best practices and keep up to date on the latest tech advancements to bring innovative solutions to the project.

Technical Skills Requirements:
- 9 to 13 years of relevant professional experience.
- Proficiency in Python and frameworks like PyTorch, TensorFlow, and Hugging Face Transformers.
- Strong foundation in ML algorithms, feature engineering, and model evaluation (must-have).
- Strong foundation in Deep Learning, Neural Networks, RNNs, CNNs, LSTMs, Transformers (BERT, GPT), and NLP (must-have).
- Experience in GenAI technologies: LLMs (GPT, Claude, LLaMA), prompting, fine-tuning.
- Experience with agentic frameworks such as LangChain, LlamaIndex, LangGraph, AutoGen, or CrewAI.
- Knowledge of retrieval-augmented generation (RAG) and Knowledge Graph RAG.
- Experience with multi-agent orchestration, memory, and tool integrations.
- Experience implementing MLOps practices and tools (CI/CD for ML, containerization, orchestration, model versioning and reproducibility) (good to have).
- Experience with cloud platforms (AWS, Azure, GCP) for scalable ML model deployment.
- Good understanding of data pipelines, APIs, and distributed systems.
- Ability to build observability into AI systems: latency, drift, and performance metrics.
- Strong written and verbal communication, presentation, client service and technical writing skills in English for both technical and business audiences.
- Strong analytical, problem-solving and critical thinking skills.
- Ability to work under tight timelines for multiple project deliveries.

What we offer: At EY GDS, we support you in achieving your unique potential both personally and professionally. We give you stretching and rewarding experiences that keep you motivated, working in an atmosphere of integrity and teaming with some of the world's most successful companies. And while we encourage you to take personal responsibility for your career, we support you in your professional development in every way we can.
You enjoy the flexibility to devote time to what matters to you, in your business and personal lives. At EY you can be who you are and express your point of view, energy and enthusiasm, wherever you are in the world. It's how you make a difference. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 days ago
9.0 - 13.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting – AI Enabled Automation – GenAI/Agentic – Manager

We are looking to hire people with strong AI-enabled automation skills who are interested in applying AI in the process automation space: Azure, AI, ML, Deep Learning, NLP, GenAI, Large Language Models (LLMs), RAG, Vector DB, Graph DB, Python.

Responsibilities:
- Development and implementation of AI-enabled automation solutions, ensuring alignment with business objectives.
- Design and deploy Proofs of Concept (POCs) and Points of View (POVs) across various industry verticals, demonstrating the potential of AI-enabled automation applications.
- Ensure seamless integration of optimized solutions into the overall product or system.
- Collaborate with cross-functional teams to understand requirements, integrate solutions into cloud environments (Azure, GCP, AWS, etc.), and ensure alignment with business goals and user needs.
- Educate the team on best practices and keep up to date on the latest tech advancements to bring innovative solutions to the project.

Technical Skills Requirements:
- 9 to 13 years of relevant professional experience.
- Proficiency in Python and frameworks like PyTorch, TensorFlow, and Hugging Face Transformers.
- Strong foundation in ML algorithms, feature engineering, and model evaluation (must-have).
- Strong foundation in Deep Learning, Neural Networks, RNNs, CNNs, LSTMs, Transformers (BERT, GPT), and NLP (must-have).
- Experience in GenAI technologies: LLMs (GPT, Claude, LLaMA), prompting, fine-tuning.
- Experience with agentic frameworks such as LangChain, LlamaIndex, LangGraph, AutoGen, or CrewAI.
- Knowledge of retrieval-augmented generation (RAG) and Knowledge Graph RAG.
- Experience with multi-agent orchestration, memory, and tool integrations.
- Experience implementing MLOps practices and tools (CI/CD for ML, containerization, orchestration, model versioning and reproducibility) (good to have).
- Experience with cloud platforms (AWS, Azure, GCP) for scalable ML model deployment.
- Good understanding of data pipelines, APIs, and distributed systems.
- Ability to build observability into AI systems: latency, drift, and performance metrics.
- Strong written and verbal communication, presentation, client service and technical writing skills in English for both technical and business audiences.
- Strong analytical, problem-solving and critical thinking skills.
- Ability to work under tight timelines for multiple project deliveries.

What we offer: At EY GDS, we support you in achieving your unique potential both personally and professionally. We give you stretching and rewarding experiences that keep you motivated, working in an atmosphere of integrity and teaming with some of the world's most successful companies. And while we encourage you to take personal responsibility for your career, we support you in your professional development in every way we can.
You enjoy the flexibility to devote time to what matters to you, in your business and personal lives. At EY you can be who you are and express your point of view, energy and enthusiasm, wherever you are in the world. It's how you make a difference. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 days ago
0 years
0 Lacs
Lucknow, Uttar Pradesh, India
On-site
Company: Indian / Global Digital Organization
Key Skills: Java, Microservices, Distributed Systems

Roles and Responsibilities:
- Design and implement backend services that manage high-throughput and low-latency workloads.
- Architect secure and observable APIs and data services, ensuring 99.99% availability.
- Lead integration with external platforms such as Google, Meta, and TikTok, ensuring consistent data synchronization.
- Drive platform observability and operational excellence through metrics, tracing, and alerting frameworks.
- Mentor junior engineers and contribute to system-level design and code reviews.
- Collaborate cross-functionally to deliver features involving machine learning, analytics, and optimization engines.
- Utilize expertise in backend development within distributed, scalable systems.
- Work with technologies including Kafka, PostgreSQL, ClickHouse, Redis, S3, and object storage-based designs.
- Apply SOLID principles and clean code practices, and maintain awareness of infrastructure costs and FinOps.
- Set up unit/integration tests, CI/CD pipelines, and rollback strategies.

Skills Required:
- Strong experience with Java and microservices architecture
- Knowledge of distributed systems and high-performance backend services
- Familiarity with technologies like Kafka, PostgreSQL, ClickHouse, Redis, and S3
- Solid understanding of API development, CI/CD pipelines, and observability tools
- Practice of clean code, SOLID principles, and cost-aware infrastructure planning

Education: B.Tech, M.Tech (Dual), M.Tech, MCA, M.Sc., M.E., CA in Computer Engineering, Computer Science Engineering, or Computer Technology.
Posted 2 days ago
2.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
At SmartBear, we believe building great software starts with quality—and we're helping our customers make that happen every day. Our solution hubs—SmartBear API Hub, SmartBear Insight Hub, and SmartBear Test Hub, featuring HaloAI—bring visibility and automation to software development, making it easier for teams to deliver high-quality software faster. SmartBear is trusted by over 16 million developers, testers, and software engineers at 32,000+ organizations – including innovators like Adobe, JetBlue, FedEx, and Microsoft.

Associate Software Engineer – Java, QMetry Test Management for Jira
- Solve challenging business problems and build highly scalable applications
- Design, document and implement new systems in Java 17/21
- Develop backend services and REST APIs using Java, Spring Boot, and JSON

Product intro: QMetry is undergoing a transformation to better align our products to the end users' requirements while maintaining our market-leading position and strong brand reputation across the Test Management vertical. Go to our product page if you want to know more about QMetry Test Management for Jira | SmartBear. You can even have a free trial to check it out 😊

About the role: As an Associate Software Engineer, you will be an integral part of this transformation, solving challenging business problems and building highly scalable and available applications that provide an excellent user experience. Reporting to the Lead Engineer, you will develop solutions using available tools and technologies, assist the engineering team in problem resolution through hands-on participation, and effectively communicate status, issues, and risks in a precise and timely manner. You will write code as per product requirements, create new products, create automated tests, contribute to system testing, and follow an agile mode of development. You will interact with both business and technical stakeholders to deliver high-quality products and services that meet business requirements and expectations while applying the latest available tools and technology. You will also develop scalable, real-time, low-latency data egress/ingress solutions in an agile delivery method.

We are looking for someone who can design, document, and implement new systems, as well as enhancements and modifications to existing software, with code that complies with design specifications and meets security and Java best practices.
- 2-4 years of hands-on experience working on the Java 17 platform or higher, and a Bachelor's degree in Computer Science, Computer Engineering, or a related technical field, required.
- API-driven development: experience working with remote data via REST and JSON, and delivering high-value projects in Agile (SCRUM) methodology, preferably using JIRA.
- Good understanding of OOPs, Java, Spring Framework, and JPA.
- Experience with application performance tuning, scaling, security, and resiliency best practices.
- Experience with relational databases such as MySQL, PostgreSQL, MSSQL, Oracle.
- Experience with AWS EC2, RDS, S3, Redis, Docker, GitHub, SSDLC, Agile methodologies, and development experience in a SCRUM environment.
- Experience with the Atlassian suite of products and the related ecosystem of plugins is a plus.

Why you should join the SmartBear crew:
You can grow your career at every level. We invest in your success as well as the spaces where our teams come together to work, collaborate, and have fun.
We love celebrating our SmartBears; we even encourage our crew to take their birthdays off. We are guided by a People and Culture organization - an important distinction for us. We think about our team holistically – the whole person. We celebrate our differences in experiences, viewpoints, and identities because we know it leads to better outcomes. Did you know: Our main goal at SmartBear is to make our technology-driven world a better place. SmartBear is committed to ethical corporate practices and social responsibility, promoting good in all the communities we serve. SmartBear is headquartered in Somerville, MA with offices across the world including Galway Ireland, Bath, UK, Wroclaw, Poland, Ahmedabad and Bangalore India. We’ve won major industry (product and company) awards including B2B Innovators Award, Content Marketing Association, IntellyX Digital Innovator and Built-in Best Places to Work. SmartBear is an equal employment opportunity employer and encourages success based on our individual merits and abilities without regard to race, color, religion, gender, national origin, ancestry, mental or physical disability, marital status, military or veteran status, citizenship status, age, sexual orientation, gender identity or expression, genetic information, medical condition, sex, sex stereotyping, pregnancy (which includes pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), or any other legally protected status.
Posted 2 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior Python Developer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 4–8 years

🧠 About Darwix AI
Darwix AI is one of India's fastest-growing AI startups, transforming enterprise sales with our GenAI-powered conversational intelligence and real-time agent assist suite. Our platform is used by high-growth enterprises across India, MENA, and Southeast Asia to improve sales productivity, personalize customer conversations, and unlock revenue intelligence in real time. We are backed by marquee VCs, 30+ angel investors, and led by alumni from IITs, IIMs, and BITS with deep experience in building and scaling products from India for the world.

🎯 Role Overview
As a Senior Python Developer at Darwix AI, you will be at the core of our engineering team, leading the development of scalable, secure, and high-performance backend systems that support AI workflows, real-time data processing, and enterprise-grade integrations. This role requires deep technical expertise in Python, a strong foundation in backend architecture, and the ability to collaborate closely with AI, product, and infrastructure teams. You will take ownership of critical backend modules and shape the engineering culture in a rapidly evolving, high-impact environment.

🔧 Key Responsibilities
🔹 System Architecture & API Development
- Design, implement, and optimize backend services and microservices using Python frameworks such as FastAPI, Django, or Flask
- Lead the development of scalable RESTful APIs that integrate with frontend, mobile, and AI systems
- Architect low-latency, fault-tolerant services supporting real-time sales analytics and AI inference
🔹 Data Pipelines & Integrations
- Build and optimize ETL pipelines to manage structured and unstructured data from internal and third-party sources
- Integrate APIs with CRMs, telephony systems, transcription engines, and enterprise platforms like Salesforce, Zoho, and LeadSquared
- Lead scraping and data ingestion efforts from large-scale, dynamic web sources using Playwright, BeautifulSoup, or Scrapy
🔹 AI/ML Enablement
- Work closely with AI engineers to build infrastructure for LLM/RAG pipelines, vector DBs, and real-time AI decisioning
- Implement backend support for prompt orchestration, LangChain flows, and function-calling interfaces
- Support model deployment, inference APIs, and logging/monitoring for large-scale GenAI pipelines
🔹 Database & Storage Design
- Optimize database design and queries using MySQL, PostgreSQL, and MongoDB
- Architect and manage Redis and Kafka for caching, queueing, and real-time communication
🔹 DevOps & Quality
- Ensure continuous delivery through version control (Git), CI/CD pipelines, testing frameworks, and Docker-based deployments
- Identify and resolve bottlenecks related to performance, memory, or data throughput
- Adhere to best practices in code quality, testing, security, and documentation
🔹 Leadership & Collaboration
- Mentor junior developers and participate in code reviews
- Collaborate cross-functionally with product, AI, design, and sales engineering teams
- Contribute to architectural decisions, roadmap planning, and scaling strategies

✅ Qualifications
- 4–8 years of backend development experience in Python, with a deep understanding of object-oriented and functional programming
- Hands-on experience with FastAPI, Django, or Flask in production environments
- Proven experience building scalable microservices, data pipelines, and backend systems that support live applications
- Strong command over REST API architecture, database optimization, and data modeling
- Solid experience working with web scraping tools, automation frameworks, and external API integrations
- Knowledge of AI tools like LangChain, HuggingFace, vector DBs (Pinecone, Weaviate, FAISS), or RAG architectures is a strong plus
- Familiarity with cloud infrastructure (AWS/GCP), Docker, and containerized deployments
- Comfortable working in fast-paced, high-ownership environments with shifting priorities and dynamic problem-solving

🌟 Bonus
- Prior experience in an early-stage SaaS startup or AI-first product environment
- Contributions to open-source Python projects or developer communities
- Experience working with real-time streaming systems (Kafka, Redis Streams, WebSockets)

💰 What We Offer
- Competitive fixed salary + performance-linked incentives
- Equity options for high-impact performers
- Opportunity to work on cutting-edge GenAI and SaaS products used by global enterprises
- Autonomy, rapid decision-making, and direct interaction with founders and senior leadership
- High-growth environment with clear progression toward Tech Lead or Engineering Manager roles
- Access to tools, compute, and learning resources to accelerate your technical and leadership growth

📩 To Apply
Send your resume and GitHub/portfolio (if applicable) to people@darwix.ai
Subject Line: Senior Python Developer – [Your Name]

Darwix AI
Built from India for the World | GenAI for Revenue Teams
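As a small illustration of the FastAPI-style backend work this role describes, here is a hedged sketch of a low-latency inference-style endpoint; the route, request model, and scoring logic are invented placeholders, not part of Darwix AI's actual platform.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-agent-assist")  # placeholder service name

class Utterance(BaseModel):
    call_id: str
    text: str

class Suggestion(BaseModel):
    call_id: str
    reply: str
    confidence: float

@app.post("/suggest", response_model=Suggestion)
async def suggest(utterance: Utterance) -> Suggestion:
    """Return a canned suggestion; a real service would call an LLM/RAG pipeline here."""
    reply = "Thanks for sharing that - could you tell me more about your timeline?"
    return Suggestion(call_id=utterance.call_id, reply=reply, confidence=0.42)

# Run locally with:  uvicorn app:app --reload   (assuming this file is app.py)
```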
Posted 2 days ago
10.0 years
0 Lacs
India
Remote
Job description
#hiring Senior Backend Developer
Min Experience: 10+ Years
Location: Remote

We are seeking a highly experienced Technical Lead with over 10 years of experience, including at least 2 years in a leadership role, to guide and mentor a dynamic engineering team. This role is critical to designing, developing, and optimizing high-performance, scalable, and reliable backend systems. The ideal candidate will have deep expertise in Python (Flask), AWS (Lambda, Redshift, Glue, S3), microservices, and database optimization (SQL, RDBMS). We operate in a high-performance environment, comparable to leading product companies, where uptime, defect reduction, and data clarity are paramount. As a Technical Lead, you will ensure engineering excellence, maintain high quality standards, and drive innovation in software architecture and development.

Key Responsibilities:
- Own backend architecture and lead the development of scalable, efficient web applications and microservices.
- Ensure production-grade AWS deployment and maintenance with high availability, cost optimization, and security best practices.
- Design and optimize databases (RDBMS, SQL) for performance, scalability, and reliability.
- Lead API and microservices development, ensuring seamless integration, scalability, and maintainability.
- Implement high-performance solutions, emphasizing low latency, uptime, and data accuracy.
- Mentor and guide developers, fostering a culture of collaboration, disciplined coding, and technical excellence.
- Conduct technical reviews, enforce best coding practices, and ensure adherence to security and compliance standards.
- Drive automation and CI/CD pipelines to enhance deployment efficiency and reduce operational overhead.
- Communicate technical concepts effectively to technical and non-technical stakeholders.
- Provide accurate work estimations and align development efforts with broader business objectives.

Key Skills:
- Programming: Strong expertise in Python (Flask) and Celery.
- AWS: Core experience with Lambda, Redshift, Glue, S3, and production-level deployment strategies.
- Microservices & API Development: Deep understanding of architecture, service discovery, API gateway design, observability, and distributed systems best practices.
- Database Optimization: Expertise in SQL, PostgreSQL, Amazon Aurora RDS, and performance tuning.
- CI/CD & Infrastructure: Experience with GitHub Actions, GitLab CI/CD, Docker, Kubernetes, Terraform, and CloudFormation.
- Monitoring & Logging: Familiarity with AWS CloudWatch, ELK Stack, and Prometheus.
- Security & Compliance: Knowledge of backend security best practices and performance optimization.
- Collaboration & Communication: Ability to articulate complex technical concepts to international stakeholders and work seamlessly in Agile/Scrum environments.

📩 Apply now or refer someone great. Please share your updated resume to hr.team@kpitechservices.com
#PythonJob #jobs #BackendDeveloper
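For a concrete picture of the Flask-plus-Celery stack this role names, here is a minimal, hypothetical sketch of an API endpoint that offloads a slow task to a worker; the broker URL and task logic are placeholder assumptions.

```python
import time

from celery import Celery
from flask import Flask, jsonify

# Placeholder broker; production systems would point at managed Redis/RabbitMQ.
celery_app = Celery("tasks", broker="redis://localhost:6379/0",
                    backend="redis://localhost:6379/1")

@celery_app.task
def generate_report(report_id: int) -> str:
    """Simulate a slow, CPU/IO-heavy job that should not block the web tier."""
    time.sleep(2)  # stand-in for querying Redshift, aggregating, exporting to S3, ...
    return f"report-{report_id}-done"

app = Flask(__name__)

@app.route("/reports/<int:report_id>", methods=["POST"])
def enqueue_report(report_id: int):
    # Hand the work to Celery and return immediately; keeps API latency low.
    async_result = generate_report.delay(report_id)
    return jsonify({"task_id": async_result.id, "status": "queued"}), 202

# Run the worker with:  celery -A app.celery_app worker --loglevel=info
# Run the API with:     flask --app app run
```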
Posted 2 days ago
7.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Additional Locations: India-Haryana, Gurgaon

Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance

At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we'll help you in advancing your skills and career. Here, you'll be supported in progressing – whatever your ambitions.

Software Engineer - MLOps

We are seeking an enthusiastic and detail-oriented MLOps Engineer to support the development, deployment, and monitoring of machine learning models in production environments. This is a hands-on role ideal for candidates looking to grow their skills at the intersection of data science, software engineering, and DevOps. You will work closely with senior MLOps engineers, data scientists, and software developers to build scalable, reliable, and automated ML workflows across cloud platforms like AWS and Azure.

Key Responsibilities:
- Assist in building and maintaining ML pipelines for data preparation, training, testing, and deployment.
- Support the automation of model lifecycle tasks, including versioning, packaging, and monitoring.
- Build and manage ML workloads on AWS (SageMaker Unified Studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage).
- Assist with containerizing ML models using Docker and deploying them using Kubernetes or cloud-native orchestrators.
- Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation.
- Participate in implementing CI/CD pipelines for ML workflows using GitHub Actions, Azure DevOps, or Jenkins.
- Contribute to testing frameworks for ML models and data validation (e.g., pytest, Great Expectations).
- Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation.
- Participate in diagnosing issues related to model accuracy, latency, or infrastructure bottlenecks.
- Continuously improve knowledge of MLOps tools, ML frameworks, and cloud practices.

Required Qualifications:
- Bachelor's/Master's in Computer Science, Engineering, or a related discipline.
- 7 years in DevOps, with 2+ years in MLOps.
- Good understanding of MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git.
- Proficient in Python and familiar with bash scripting.
- Exposure to MLOps platforms or tools such as SageMaker Studio, Azure ML, or GCP Vertex AI.

Requisition ID: 610751

As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn't just business, it's personal. And if you're a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
Posted 2 days ago
15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Hardware Engineering

General Summary:
Join Qualcomm's cutting-edge hardware engineering team to drive the design verification of next-generation SoCs, with a focus on wireless technologies including WLAN (IEEE 802.11). You will work on IP and subsystem-level verification, collaborating with cross-functional teams to deliver high-performance, low-power silicon solutions. A strong understanding of on-chip buses and bridges is essential to ensure seamless integration and performance across subsystems.

Key Responsibilities:
- Develop and execute verification plans for complex SoC designs and IP blocks.
- Architect and implement testbenches using SystemVerilog and UVM/OVM methodologies.
- Perform RTL verification, simulation, and debugging.
- Collaborate with design, architecture, and software teams to ensure functional correctness.
- Contribute to IP design reviews and sign-off processes.
- Support post-silicon validation and bring-up activities.
- Analyze and verify interconnects, buses (e.g., AMBA AXI/AHB/APB), and bridges for performance and protocol compliance.
- Conduct CPU subsystem verification, including coherency, cache behavior, and interrupt handling.
- Perform power-aware verification using UPF/CPF and validate low-power design intent.
- Execute performance verification to ensure bandwidth, latency, and throughput targets are met.

Preferred Skills & Experience:
- 2–15 years of experience in digital design and verification.
- Deep understanding of bus protocols and bridge logic, including hands-on experience with AXI, AHB, and APB.
- Experience with CPU subsystem verification and performance modeling.
- Familiarity with wireless protocols (IEEE 802.11 a/b/g/n/ac/ax/be) is a plus.
- Proficiency in SystemVerilog, UVM/OVM, Verilog, and scripting languages (Perl, Tcl, Python).
- Experience with power-aware verification methodologies and tools (e.g., UPF, CPF).
- Familiarity with performance verification techniques and metrics.
- Exposure to tools like Clearcase/Perforce and simulation/debug environments.
- Strong analytical, debugging, and communication skills.

Minimum Qualifications:
- Bachelor's degree in Computer Science, Electrical/Electronics Engineering, Engineering, or a related field and 2+ years of Hardware Engineering or related work experience; OR
- Master's degree in Computer Science, Electrical/Electronics Engineering, Engineering, or a related field and 1+ year of Hardware Engineering or related work experience; OR
- PhD in Computer Science, Electrical/Electronics Engineering, Engineering, or a related field.

Minimum Qualifications:
- Bachelor's or Master's degree in Electrical/Electronics Engineering, Computer Science, or a related field.
- Relevant experience in hardware design and verification.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities.
We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
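The posting above pairs UVM-based verification with scripting in Perl, Tcl, or Python. As a minimal, hypothetical sketch of that kind of scripting, here is a Python helper that counts UVM severity messages in a simulation log and reports pass/fail; the log file name and the zero-error pass criterion are illustrative assumptions, not details from the role.

```python
import re
import sys
from collections import Counter

# Count UVM severity messages in a simulation log and report pass/fail.
# The default log path and the pass criterion (zero errors/fatals) are illustrative.
SEVERITY_RE = re.compile(r"\b(UVM_(?:INFO|WARNING|ERROR|FATAL))\b")

def triage(log_path: str) -> int:
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = SEVERITY_RE.search(line)
            if match:
                counts[match.group(1)] += 1
    for severity in ("UVM_INFO", "UVM_WARNING", "UVM_ERROR", "UVM_FATAL"):
        print(f"{severity}: {counts[severity]}")
    failed = counts["UVM_ERROR"] + counts["UVM_FATAL"]
    print("RESULT:", "FAIL" if failed else "PASS")
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(triage(sys.argv[1] if len(sys.argv) > 1 else "sim.log"))
```

In practice a script like this would feed a regression dashboard; the point here is only to show the flavour of scripting the role asks for.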
Posted 2 days ago
10.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Company Description: Zaptech Solutions is a US-based software consulting and development company located in Ahmedabad. We specialize in custom web and mobile development solutions, IoT consulting and app development, gaming solutions, enterprise mobility, and open-source services. Our team of experts uses cutting-edge technologies to help clients achieve their business objectives and improve operational efficiency.
Role Description: We’re seeking a Senior Rust Developer for a freelance or part-time engagement to help architect and build advanced systems that integrate Large Language Models (LLMs) like GPT, Claude, and LLaMA. You’ll contribute to Retrieval-Augmented Generation (RAG) pipelines that combine high-performance Rust backend architecture with modern AI-driven search and generation.
Responsibilities: Build performant backend services in Rust to support document and knowledge retrieval. Integrate with LLMs (e.g., GPT, Claude, LLaMA) for intelligent generation and summarization. Implement prompt engineering, semantic search, and hybrid retrieval mechanisms. Manage vector embeddings using tools like FAISS, Pinecone, Chroma, or Weaviate. Design indexing and chunking strategies for TMS-related documents (e.g., logs, schedules, compliance data). Optimize latency, cost, and context window trade-offs in RAG pipelines. Track relevance and performance using custom metrics and observability tools (e.g., Arize Phoenix). Collaborate with SMEs to structure logistics-specific knowledge bases. Work with frontend teams to support UI integration for chatbot-style assistants.
Requirements: 10+ years of overall experience as a software developer, with a solid foundation in building scalable systems. 5+ years of hands-on experience in Rust, working on production-grade applications or backend services. Working knowledge of Python for AI/model integration. Strong understanding of transformer models and LLM APIs. Experience with TF-IDF, semantic search, and vector similarity techniques. Prior exposure to RAG architectures or similar intelligent retrieval systems. Familiarity with tools like Pinecone, Weaviate, FAISS, or Chroma. Independent and reliable communicator, especially in remote setups. Excellent communication skills and a user-focused mindset.
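Since the role pairs Rust services with Python for AI/model integration and explicitly names TF-IDF and vector similarity, here is a minimal Python sketch of the retrieval half of such a pipeline using scikit-learn. The toy TMS-style documents and the query are invented for illustration; a production system would use learned embeddings and a vector store such as FAISS or Pinecone.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for TMS documents (logs, schedules, compliance notes).
documents = [
    "Shipment 481 delayed at Nhava Sheva port due to customs hold.",
    "Driver duty-hour compliance report for week 32.",
    "Route schedule update: Ahmedabad to Mumbai departures moved to 06:00.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(query: str, top_k: int = 2):
    """Rank documents by cosine similarity of their TF-IDF vectors."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [(documents[i], float(scores[i])) for i in ranked]

if __name__ == "__main__":
    for doc, score in retrieve("why is shipment 481 late"):
        print(f"{score:.3f}  {doc}")
```

The retrieved passages would then be stuffed into an LLM prompt for generation, which is the "augmented generation" half the posting describes.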
Posted 2 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Responsibilities: Cloud Network Design: Design, implement, and manage network architectures for cloud environments, ensuring high availability, performance, and security across cloud platforms; GCP network architecture experience is mandatory. Network Configuration & Management: Configure and manage cloud networking services such as Virtual Private Cloud (VPC), subnets, IP addressing, routing, VPNs, and DNS. Connectivity and Integration: Develop and maintain connectivity solutions between on-premise networks and cloud environments, including hybrid cloud configurations and Direct Connect/ExpressRoute solutions. Security & Compliance: Implement and enforce network security policies, including firewall rules, access control lists (ACLs), and VPNs, ensuring compliance with industry standards and best practices. Network Monitoring & Troubleshooting: Continuously monitor cloud network performance, identify issues, and troubleshoot network-related problems to minimize downtime and ensure smooth operation. Performance Optimization: Analyze network performance and recommend optimizations to reduce latency, improve bandwidth utilization, and enhance overall network efficiency in the cloud. Collaboration & Documentation: Collaborate with cloud architects, DevOps teams, and other stakeholders to ensure network architecture aligns with business goals. Document network designs, configurations, and operational procedures. Automation & Scripting: Leverage automation tools and scripting languages (e.g., Python, Bash, or Terraform) to automate network configuration, provisioning, and monitoring tasks. Support & Maintenance: Provide ongoing support for cloud network infrastructure, including regular updates, patches, and configuration adjustments as needed. Disaster Recovery & Continuity: Ensure that cloud network solutions are resilient and can recover quickly in the event of network failures or disasters, including implementing DR (disaster recovery) strategies for network infrastructure. Must have recent, hands-on GCP cloud network architecture experience of at least 5 years.
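For the automation and IP-addressing parts of this role, a small, hedged example: a standard-library Python check that a proposed subnet CIDR does not overlap existing VPC ranges before it is provisioned. The CIDR values are placeholders; a real workflow would read the current subnets from GCP (for example via gcloud or the Cloud APIs) rather than a hard-coded list.

```python
import ipaddress

# Existing VPC subnet ranges (illustrative values, not from any real project).
existing_subnets = [
    ipaddress.ip_network("10.10.0.0/20"),
    ipaddress.ip_network("10.10.16.0/20"),
    ipaddress.ip_network("10.20.0.0/16"),
]

def check_new_subnet(cidr: str) -> bool:
    """Return True if the proposed CIDR overlaps no existing subnet."""
    proposed = ipaddress.ip_network(cidr)
    clashes = [net for net in existing_subnets if proposed.overlaps(net)]
    if clashes:
        print(f"{cidr} overlaps: {', '.join(map(str, clashes))}")
        return False
    print(f"{cidr} is free to allocate")
    return True

if __name__ == "__main__":
    check_new_subnet("10.10.32.0/20")   # no overlap with the list above
    check_new_subnet("10.20.4.0/24")    # overlaps 10.20.0.0/16
```

Guard-rail scripts like this are typically wired into a Terraform or CI pipeline so that address-plan mistakes are caught before provisioning.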
Posted 2 days ago
0.0 - 3.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Humanli.AI is a startup founded by alumni of IIM Bangalore, ISB Hyderabad, and IIM Calcutta. We are democratizing and extending technologies that were accessible to and consumed only by MNCs or Fortune companies, bringing them to SMEs and mid-size firms. We are pioneers in bringing knowledge management algorithms and Large Language Models into a conversational bot framework.
Job Title: AI/ML Engineer Location: Jaipur Job Type: Full-time Experience: 0-3 years
Job Description: We are looking for an AI/ML & Data Engineer to join our team and contribute to the development and deployment of our AI-based solutions. As an AI/ML & Data Engineer, you will be responsible for designing and implementing data models, algorithms, and pipelines for training and deploying machine learning models.
Responsibilities: Design, develop, and fine-tune Generative AI models (e.g., LLMs, GANs, VAEs, Diffusion Models). Implement Retrieval-Augmented Generation (RAG) pipelines using vector databases (FAISS, Pinecone, ChromaDB, Weaviate). Develop and integrate AI Agents for task automation, reasoning, and decision-making. Work on fine-tuning open-source LLMs (e.g., LLaMA, Mistral, Falcon) for specific applications. Optimize and deploy transformer-based architectures for NLP and vision-based tasks. Train models using TensorFlow, PyTorch, and Hugging Face Transformers. Work on prompt engineering, instruction tuning, and reinforcement learning from human feedback (RLHF). Collaborate with data scientists and engineers to integrate models into production systems. Stay updated with the latest advancements in Generative AI, ML, and DL. Optimize models for performance improvements, including quantization, pruning, and low-latency inference techniques.
Qualifications: B.Tech in Computer Science. 0-3 years of experience in data engineering and machine learning; freshers may apply. Immediate joiners preferred.
Requirements: Experience with data preprocessing, feature engineering, and model evaluation. Understanding of transformers, attention mechanisms, and large-scale training. Hands-on experience with RAG, LangChain/LangGraph, LlamaIndex, and other agent frameworks. Understanding of prompt tuning, LoRA/QLoRA, and parameter-efficient fine-tuning (PEFT) techniques. Strong knowledge of data modeling, data preprocessing, and feature engineering techniques. Experience with cloud computing platforms such as AWS, Azure, or Google Cloud Platform. Excellent problem-solving skills and ability to work independently and collaboratively in a team environment. Strong communication skills and ability to explain technical concepts to non-technical stakeholders.
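As an illustration of the RAG retrieval step named in the responsibilities, here is a minimal sketch using sentence-transformers and FAISS (both tools appear in the posting). The model name all-MiniLM-L6-v2, the sample chunks, and the query are assumptions chosen only to make the example self-contained.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Tiny knowledge base; in practice these would be chunked documents.
chunks = [
    "Our support desk is open 9am to 6pm IST on weekdays.",
    "Refunds are processed within 5 business days of approval.",
    "Enterprise plans include a dedicated account manager.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(chunks, normalize_embeddings=True)
embeddings = np.asarray(embeddings, dtype="float32")

# Inner product on normalized vectors is equivalent to cosine similarity.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)

def retrieve(question: str, k: int = 2):
    query = model.encode([question], normalize_embeddings=True)
    query = np.asarray(query, dtype="float32")
    scores, ids = index.search(query, k)
    return [(chunks[i], float(s)) for i, s in zip(ids[0], scores[0])]

if __name__ == "__main__":
    for chunk, score in retrieve("how long do refunds take?"):
        print(f"{score:.3f}  {chunk}")
```

The retrieved chunks would then be passed, with the user question, into an LLM prompt; swapping the flat index for Pinecone, ChromaDB, or Weaviate changes the storage layer but not the overall flow.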
Posted 2 days ago
5.0 years
0 Lacs
Cannanore, Kerala, India
On-site
Experience Level: 5+ Years
1. About the Role At Summit Solutions, we create scalable, high-performance mobile applications that deliver seamless experiences to users worldwide. We are looking for a Senior Flutter Developer who can plan, architect, develop, and optimize cross-platform mobile apps while ensuring top-quality performance and user experience. This role involves end-to-end ownership of mobile projects, from architecture planning to final deployment, and includes mentoring junior developers to build a strong and innovative team.
2. What You’ll Do Plan and design application architecture with scalability, maintainability, and performance in mind. Develop and maintain cross-platform mobile apps using Flutter (Dart) for both iOS and Android. Optimize app performance for low latency, high responsiveness, and minimal memory footprint. Collaborate with designers and backend engineers to deliver cohesive, pixel-perfect UI and robust integrations. Implement secure coding practices, state management (Provider, Riverpod, Bloc, or Redux), and API integrations. Create reusable, testable, and efficient code following best practices and design patterns (MVVM, Clean Architecture). Conduct code reviews, technical discussions, and mentor junior developers to enhance team expertise. Work on CI/CD pipelines for automated builds, testing, and deployment (Fastlane, Codemagic, Azure DevOps). Stay updated with Flutter ecosystem advancements, performance tuning strategies, and mobile trends.
3. What You’ll Need 5+ years of experience in mobile application development, with at least 3+ years in Flutter (Dart). Strong understanding of mobile app architecture planning, modular design, performance optimization, and Flutter DevTools usage. Expertise in asynchronous programming, API integrations (REST & GraphQL), and state management. Experience with CI/CD, mobile DevOps, and app store deployment (iOS & Android). Should be proficient in implementing dependency injection, flavors, caching strategies, and data synchronization. Should have strong platform-specific knowledge, including the use of platform channels and platform-specific SDKs. Knowledge of native Android (Kotlin/Java) or iOS (Swift/Objective-C) is a plus. Hands-on experience in profiling, debugging, and optimizing Flutter apps for real-world performance. Familiarity with Azure cloud services, containerized environments (Docker), or microservices integration. Proven ability to mentor and lead junior developers in a collaborative environment. Bonus: Experience with Firebase, push notifications, deep linking, analytics, and in-app purchase integrations.
Posted 2 days ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior CV Engineer Location: Gurugram Experience: 6–10 Years Industry: AI Product Overview: We are hiring for our esteemed client, a Series-A funded deep-tech company building a first-of-its-kind app-based operating system for Computer Vision. The team specializes in real-time video/image inference, distributed processing, and high-throughput data handling using advanced technologies and frameworks. Key Responsibilities: Lead design and implementation of complex CV pipelines (object detection, instance segmentation, industrial anomaly detection). Own major modules from concept to deployment ensuring low latency and high reliability. Transition algorithms from Python/PyTorch to optimized C++ edge GPU implementations using TensorRT, ONNX, and GStreamer. Collaborate with cross-functional teams to refine technical strategies and roadmaps. Drive long-term data and model strategies (synthetic data generation, validation frameworks). Mentor engineers and maintain high engineering standards. Required Skills & Qualifications: 6–10 years of experience in architecting and deploying CV systems. Expertise in multi-object tracking, object detection, semi/unsupervised learning. Proficiency in Python, PyTorch/TensorFlow, Modern C++, CUDA. Experience with real-time, low-latency model deployment on edge devices. Strong systems-level design thinking across ML lifecycles. Familiarity with MLOps (CI/CD for models, versioning, experiment tracking). Bachelor’s/Master’s degree in CS, EE, or related fields with strong ML and algorithmic foundations. (Preferred) Experience with NVIDIA DeepStream, GStreamer, LLMs/VLMs, open-source contributions.
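One concrete slice of the Python-to-edge workflow described above is exporting a PyTorch model to ONNX before handing it to TensorRT. A minimal sketch follows, using a torchvision ResNet-18 purely as a stand-in for the real detection or segmentation model; the file names and opset version are illustrative.

```python
import torch
import torchvision

# Stand-in model; a production pipeline would export the actual detector.
model = torchvision.models.resnet18(weights=None)
model.eval()

dummy = torch.randn(1, 3, 224, 224)

# Export to ONNX; the resulting file can then be parsed by TensorRT
# (e.g. via trtexec or its builder API) on the edge GPU.
torch.onnx.export(
    model,
    dummy,
    "backbone.onnx",
    input_names=["images"],
    output_names=["features"],
    dynamic_axes={"images": {0: "batch"}, "features": {0: "batch"}},
    opset_version=17,
)
print("exported backbone.onnx")
```

Dynamic batch axes are kept so the TensorRT engine can later be built with whatever batch sizes the streaming (e.g., GStreamer/DeepStream) pipeline needs.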
Posted 2 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About INDmoney: INDmoney is on a mission to revolutionize personal finance & wealth management by leveraging cutting-edge technology. As we continue to scale, we are looking for a Technical Architect to work on complex architectural problems, shaping INDmoney’s technology vision, architecture, API ecosystem, and innovation roadmap. This is a high-impact role, where you will define the technical strategy, scalability, and performance of INDmoney’s platform, making critical decisions that will shape the future of fintech in India.
Role & Responsibilities: Architect and optimize scalable, high-performance systems for millions of users. Oversee critical technology decisions related to backend, frontend, mobile, AI, and cloud infrastructure. Design and manage INDmoney’s API architecture to enable seamless integrations and high-performance transactions. Design a robust API gateway with authentication, rate limiting, and monitoring for security & scalability. Ensure system reliability, security, and scalability through best practices in cloud architecture. Lead API strategy – REST, GraphQL, gRPC, WebSockets for fintech services, ensuring low-latency, high-availability performance, thereby delivering the FASTEST trading and investment experience in the country. Collaborate with and mentor engineering, product, and data teams to align technology with business goals. Drive engineering excellence, enforcing clean architecture, best coding practices, and robust engineering processes.
What We’re Looking For: 8+ years of experience in architecting and building high-performance, low-latency consumer-facing stacks at large scale (millions of users with high concurrency). Expertise in scalable system design, microservices architecture, and event-driven systems. Strong experience with cloud platforms (AWS, GCP, or Azure), Kubernetes, and serverless architecture. Deep understanding of API security (OAuth, JWT, OpenID), performance optimization, and monitoring. Experience in fintech, investment platforms, or high-scale B2C applications is a big plus. Hands-on coding ability (even if not a daily responsibility) to drive technical discussions. A growth mindset, the ability to think from first principles, and willingness to take ownership.
Why Join Us? Shape the future of fintech by leading innovation & cutting-edge technology. Variety of problem statements to work on, ranging from transactional systems to analytical ledgers to on-prem data center interactions. A fast-paced, highly entrepreneurial environment with a visionary team. Wealth creation opportunity via generous ESOPs.
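To make the API-gateway responsibilities concrete, here is a small, hedged Python sketch of two of the building blocks mentioned: JWT validation (with PyJWT) and a token-bucket rate limiter. The secret, limits, and in-memory state are placeholders; a production gateway would rely on managed key material, a distributed store, or an off-the-shelf gateway product.

```python
import time
import jwt  # PyJWT

SECRET = "replace-with-a-real-key"   # placeholder only

def verify_token(token: str) -> dict:
    """Decode and validate a JWT; raises jwt.InvalidTokenError on failure."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])

class TokenBucket:
    """Simple per-client rate limiter: `rate` requests/second with burst `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.updated = float(capacity), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

if __name__ == "__main__":
    token = jwt.encode({"sub": "user-42"}, SECRET, algorithm="HS256")
    print(verify_token(token))
    bucket = TokenBucket(rate=5, capacity=5)
    print([bucket.allow() for _ in range(7)])  # later calls start to be rejected
```

The same two checks are usually pushed to the gateway edge so that unauthenticated or abusive traffic never reaches the trading services behind it.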
Posted 2 days ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Design and development of low-power, low-latency image-processing electronics. Be responsible for electronic logic design and coding in a hardware description language on FPGA. Create the FPGA validation environment. Set up the test procedures and test benches for design validation. Run simulations to ensure the desired results are being achieved. Efficiently carry out debugging activities. Create detailed design documents. Maintain records of all development activities. Interact and engage with external developers/partners to successfully complete the design project.
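A common way to support the validation work described above is to script a golden reference model whose outputs the HDL testbench compares against. The sketch below is a hypothetical NumPy golden model for a 3x3 convolution, a typical image-processing block; the kernel, frame size, and file names are assumptions, not details from the posting.

```python
import numpy as np

def golden_conv3x3(frame: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Integer 3x3 convolution with zero-padded borders, as a bit-level reference."""
    assert kernel.shape == (3, 3)
    padded = np.pad(frame.astype(np.int32), 1)
    out = np.zeros_like(frame, dtype=np.int32)
    rows, cols = frame.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = int(np.sum(padded[r:r + 3, c:c + 3] * kernel))
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(8, 8), dtype=np.int32)
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    golden = golden_conv3x3(frame, sobel_x)
    # Dump stimulus and expected output for the HDL testbench to read back.
    np.savetxt("stimulus.txt", frame, fmt="%d")
    np.savetxt("expected.txt", golden, fmt="%d")
    print("wrote stimulus.txt and expected.txt")
```

The testbench then drives stimulus.txt into the RTL and flags any mismatch against expected.txt, which is one simple way to close the loop between simulation and the desired results.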
Posted 2 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
At Airtel, we’re not just scaling connectivity—we’re redefining how India experiences digital services. With 400M+ customers across telecom, financial services, and entertainment, our impact is massive. But behind every experience is an opportunity to make it smarter. We're looking for a Product Manager – AI to drive next-gen intelligence for our customers and business. AI is a transformational technology, and we are looking for skilled product managers who will work on leveraging AI to power everything from our digital platforms to customer experience. You’ll work at the intersection of machine learning, product design, and systems thinking to deliver AI-driven products that create tangible business impact—fast.
What You’ll Do Lead and contribute to AI-Powered Product Strategy Define product vision and strategy for AI-led initiatives that enhance productivity, automate decisions, and personalise user interactions across Airtel platforms. Translate Business Problems into AI Opportunities Partner with operations, engineering, and data science to surface high-leverage AI use cases across workforce management, customer experience, and process automation. Build & Scale ML-Driven Products Define data product requirements, work closely with ML engineers to develop models, and integrate intelligent workflows that continuously learn and adapt. Own Product Execution End-to-End Drive roadmaps, lead cross-functional teams, launch MVPs, iterate based on real-world feedback, and scale solutions with measurable ROI.
What You Need to be Successful Influential Communication - Craft clarity from complexity. You can tailor messages for execs, engineers, and field teams alike—translating AI into business value. Strategic Prioritisation - Balance business urgency with technical feasibility. You can decide what not to build, and defend those decisions with data and a narrative. Systems Thinking - You can see the big picture: how decisions in one area ripple across the business, tech stack, and user experience. High Ownership & Accountability - Operate with a founder mindset. You don't wait for direction; you can rally teams, remove blockers, deal with tough stakeholders, and drive outcomes. Adaptability - You thrive in ambiguity and pivot quickly without losing sight of long-term vision—key in fast-moving digital organizations.
Skills You'll Need AI / ML Fundamentals Understanding of ML model types: Supervised, unsupervised, reinforcement learning Common algorithms: Linear/logistic regression, decision trees, clustering, neural networks Model lifecycle: Training, validation, testing, tuning, deployment, monitoring Understanding of LLMs, transformers, diffusion models, vector search, etc. Familiarity with GenAI product architecture: Retrieval-Augmented Generation (RAG), prompt tuning, fine-tuning Awareness of real-time personalization, recommendation systems, ranking algorithms, etc. Data Fluency Understanding of data pipelines Working knowledge of SQL and Python for analysis Understanding of data annotation, labeling, and versioning Ability to define data requirements and assess data readiness AI Product Development Defining ML problem scope: Classification vs. regression vs. ranking vs. generation Model evaluation metrics: Precision, recall, etc. A/B testing & online experimentation for ML-driven experiences ML Infrastructure Awareness Know what it takes to make things work and happen. 
Model deployment techniques: Batch vs real-time inference, APIs, model serving Monitoring & drift detection: How to ensure models continue performing over time Familiarity with ML platforms/tools: TensorFlow, PyTorch, Hugging Face, Vertex AI, SageMaker, etc. (at a product level) Understanding latency, cost, and resource implications of ML choices AI Ethics & Safety We care deeply about our customers, their privacy, and compliance with regulation. Understand bias and fairness in models: How to detect and mitigate them Explainability & transparency: Importance for user trust and regulation Privacy & security: Understanding implications of sensitive or PII data in AI Alignment and guardrails in generative AI systems
Preferred Qualifications Experienced Machine Learning/Artificial Intelligence PMs Experience building 0-1 products, scaled platforms/ecosystem products, or ecommerce Bachelor's degree in Computer Science, Engineering, Information Systems, Analytics, or Mathematics Master's degree in Business
Why Airtel Digital? Massive Scale: Your products will impact 400M+ users across sectors Real-World Relevance: Solve meaningful problems for our customers — protecting our customers, spam & fraud prevention, personalised experiences, connecting homes. Agility Meets Ambition: Work like a startup with the resources of a telecom giant AI That Ships: We don’t just run experiments. We deploy models and measure real-world outcomes Leadership Access: Collaborate closely with CXOs and gain mentorship from India’s top product and tech leaders
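For the model-evaluation fundamentals listed under AI Product Development, here is a tiny illustrative Python example of precision, recall, and F1 using scikit-learn; the labels stand in for, say, a spam-detection model's output and are entirely made up.

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Made-up ground truth and predictions for a hypothetical spam-detection model.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

print("precision:", precision_score(y_true, y_pred))  # of flagged items, how many were truly spam
print("recall:   ", recall_score(y_true, y_pred))      # of actual spam, how much was caught
print("f1:       ", f1_score(y_true, y_pred))          # harmonic mean of the two
```

Which of these metrics to optimise is a product decision: for fraud blocking, missed positives (low recall) are usually costlier than false alarms, while for customer-facing flags the precision side often dominates.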
Posted 2 days ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We're looking for talented C++ Software Developers to join a high-impact team building cutting-edge trading systems from the ground up. If you're passionate about performance, excited by low-latency systems, and thrive in collaborative, high-performance environments, we'd love to hear from you. Job Description: Design and develop ultra-low-latency trading systems in C++ Optimize critical code paths and system performance Work closely with traders and quants to deploy new strategies Requirements: 4–5 years of hands-on experience in C++ (preferably in a high-frequency or low-latency environment) Strong knowledge of data structures, algorithms, and multithreading Experience with systems-level programming and performance optimization Reach out to isha@aaaglobal.co.uk
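Although the systems in this role are built in C++, the tail-latency measurement idea behind "optimize critical code paths" is language-agnostic; the sketch below shows it in Python only for brevity. The placeholder workload and percentile choices are illustrative, and a real harness would instrument the C++ hot path directly.

```python
import time

def measure_latency_ns(fn, iterations: int = 100_000):
    """Collect per-call latencies and report p50/p99/max in nanoseconds."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter_ns()
        fn()
        samples.append(time.perf_counter_ns() - start)
    samples.sort()
    p50 = samples[len(samples) // 2]
    p99 = samples[int(len(samples) * 0.99)]
    return p50, p99, samples[-1]

if __name__ == "__main__":
    # Placeholder workload standing in for an order-book update or strategy tick.
    book = {}
    def on_tick():
        book[42] = book.get(42, 0) + 1

    p50, p99, worst = measure_latency_ns(on_tick)
    print(f"p50={p50}ns  p99={p99}ns  max={worst}ns")
```

In low-latency work the p99 and worst-case numbers, not the average, are what drive optimisation decisions.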
Posted 2 days ago