8.0 years
0 Lacs
gurgaon
On-site
DESCRIPTION Sales, Marketing and Global Services (SMGS) AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest growing small- and mid-market accounts to enterprise-level customers including public sector. Do you like startups? Are you interested in Cloud Computing & Generative AI? Yes? We have a role you might find interesting. Startups are the large enterprises of the future. These young companies are founded by ambitious people who have a desire to build something meaningful and to challenge the status quo: to address underserved customers, or to challenge incumbents. They usually operate in an environment of scarcity: whether that’s capital, engineering resources, or experience. This is where you come in. The Startup Solutions Architecture team is dedicated to working with these early-stage startup companies as they build their businesses. We’re here to make sure that they can deploy the best, most scalable, and most secure architectures possible – and that they spend as little time and money as possible doing so. We are looking for technical builders who love the idea of working with early-stage startups to help them as they grow. In this role, you’ll work directly with a variety of interesting customers and help them make the best (and sometimes the most pragmatic) technical decisions along the way. You’ll have a chance to build enduring relationships with these companies and establish yourself as a trusted advisor. As well as spending time working directly with customers, you’ll also get plenty of time to “sharpen the saw” and keep your skills fresh. We have more than 175 services across a range of different categories, and it’s important that we can help startups take advantage of the right ones. You’ll also play an important role as an advocate with our product teams to make sure we are building the right products for the startups you work with. 
And for the customers you don’t get to work with on a 1:1 basis, you’ll get the chance to share your knowledge more broadly by working on technical content and presenting at events. A day in the life You’re surrounded by innovation. You’re empowered with a lot of ownership. Your growth is accelerated. The work is challenging. You have a voice here and are encouraged to use it. Your experience and career development are in your hands. We live our leadership principles every day. At Amazon, it's always "Day 1". Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Work/Life Balance We value work-life harmony. Achieving success at work should never require sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship and Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. 
That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a well-rounded professional.
BASIC QUALIFICATIONS
8+ years of experience in specific technology domain areas (e.g., software development, cloud computing, systems engineering, infrastructure, security, networking, data & analytics)
3+ years of experience in design, implementation, or consulting for applications and infrastructure
10+ years of experience in IT development or implementation/consulting in the software or Internet industries
PREFERRED QUALIFICATIONS
Experience developing and deploying large-scale machine learning or deep learning models and/or systems into production, including batch and real-time data processing
Experience scaling model training and inference using technologies like Slurm, ParallelCluster, and Amazon SageMaker
Hands-on experience benchmarking and optimizing the performance of models on accelerated computing (GPU, TPU, AI ASICs) clusters with high-speed networking
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 22 hours ago
8.0 years
0 Lacs
bengaluru
On-site
Posted 22 hours ago
0 years
0 Lacs
bengaluru
On-site
JOB DESCRIPTION Are you looking for an exciting opportunity to join a dynamic and growing team in a fast-paced and challenging area? This is a unique opportunity for you to work in our team to partner with the Business to provide a comprehensive view. As a Senior AI Reliability Engineer at JPMorgan Chase within the Technology and Operations division, you will join our dynamic team of innovators and technologists. Your mission will be to enhance the reliability and resilience of AI systems that revolutionize how the Bank services and advises clients. You will focus on ensuring the robustness and availability of AI models, deepening client engagements, and promoting process transformation. We seek team members passionate about leveraging advanced reliability engineering practices, AI observability, and incident response strategies to solve complex business challenges through high-quality, cloud-centric software delivery. Job Responsibilities: Develop and refine Service Level Objectives (including metrics like accuracy, fairness, latency, drift targets, TTFT (Time To First Token), and TPOT (Time Per Output Token)) for large language model serving and training systems, balancing availability/latency with development velocity. Design, implement, and continuously improve monitoring systems covering availability, latency, and other salient metrics. Collaborate in the design and implementation of high-availability language model serving infrastructure capable of handling the needs of high-traffic internal workloads. Champion site reliability culture and practices, providing technical leadership and influence across teams to foster a culture of reliability and resilience. Develop and manage automated failover and recovery systems for model serving deployments across multiple regions and cloud providers. Develop AI Incident Response playbooks for AI-specific failures like sudden drift or bias spikes, including automated rollbacks and AI circuit breakers. Lead incident response for critical AI services, ensuring rapid recovery and systematic improvements from each incident. Build and maintain cost optimization systems for large-scale AI infrastructure, ensuring efficient resource utilization without compromising performance. Engineer for scale and security, leveraging techniques like load balancing, caching, optimized GPU scheduling, and AI Gateways for managing traffic and security. Collaborate with ML engineers to ensure seamless integration and operation of AI infrastructure, bridging the gap between development and operations. Implement continuous evaluation, including pre-deployment, pre-release, and continuous post-deployment monitoring for drift and degradation. Required qualifications, capabilities, and skills: Demonstrated proficiency in reliability, scalability, performance, security, enterprise system architecture, toil reduction, and other site reliability best practices. Proficient knowledge and experience in observability, such as white- and black-box monitoring, service level objective alerting, and telemetry collection using tools such as Grafana, Dynatrace, Prometheus, Datadog, Splunk, and others. Proficient with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform. Proficient with containers and container orchestration (ECS, Kubernetes, Docker). Experience troubleshooting common networking technologies and issues. Understand the unique challenges of operating AI infrastructure, including model serving, batch inference, and training pipelines. Have proven experience implementing and maintaining SLO/SLA frameworks for business-critical services. Comfortable working with both traditional metrics (latency, availability) and AI-specific metrics (model performance, training convergence). Can effectively bridge the gap between ML engineers and infrastructure teams. Have excellent communication skills. 
Preferred qualifications, capabilities, and skills: Experience with AI-specific observability tools and platforms, such as OpenTelemetry and OpenInference. Familiarity with AI incident response strategies, including automated rollbacks and AI circuit breakers. Knowledge of AI-centric SLOs/SLAs, including metrics like accuracy, fairness, drift targets, TTFT (Time To First Token), and TPOT (Time Per Output Token). Expertise in engineering for scale and security, including load balancing, caching, optimized GPU scheduling, and AI Gateways. Experience with continuous evaluation processes, including pre-deployment, pre-release, and post-deployment monitoring for drift and degradation. Understanding of ML model deployment strategies and their reliability implications. Contributions to open-source infrastructure or ML tooling. Experience with chaos engineering and systematic resilience testing.
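The TTFT and TPOT metrics named in this listing fall out directly from per-token timestamps. A minimal sketch of the arithmetic (function names and the SLO targets are illustrative, not the employer's actual thresholds):

```python
def ttft_tpot(request_start: float, token_times: list[float]) -> tuple[float, float]:
    """Compute Time To First Token and Time Per Output Token (seconds)."""
    if not token_times:
        raise ValueError("no tokens emitted")
    ttft = token_times[0] - request_start
    if len(token_times) == 1:
        return ttft, 0.0
    # TPOT: average inter-token latency after the first token
    tpot = (token_times[-1] - token_times[0]) / (len(token_times) - 1)
    return ttft, tpot

def meets_slo(ttft: float, tpot: float, ttft_target=0.5, tpot_target=0.05) -> bool:
    """Illustrative SLO check: TTFT under 500 ms, TPOT under 50 ms."""
    return ttft <= ttft_target and tpot <= tpot_target

# Request started at t=0.0; first token at 0.3 s, then one token every 40 ms
times = [0.3 + 0.04 * i for i in range(11)]
ttft, tpot = ttft_tpot(0.0, times)
print(round(ttft, 3), round(tpot, 3))  # 0.3 0.04
```

In a production monitor these values would be aggregated into percentile distributions (e.g. p99 TTFT) rather than checked per request.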
Posted 22 hours ago
3.0 years
5 - 6 Lacs
india
Remote
Machine Learning Engineer (Mechanical Systems) A role within the Travel & Transportation sector focused on fleet operations, vehicle reliability, and operational efficiency. The position combines mechanical engineering expertise with applied machine learning to deliver predictive-maintenance, telematics analytics, and optimization solutions for vehicle fleets and ground operations. Fully remote opportunity based in India supporting product and operations teams to reduce downtime, cut costs, and improve passenger experience through data-driven engineering. Role & Responsibilities Design and implement end-to-end ML solutions for fleet telematics and condition-based monitoring—data ingestion ➜ feature engineering ➜ model training ➜ deployment. Analyze sensor and vehicle telemetry (accelerometers, vibration, temperature, CAN/OBD-II) to build anomaly-detection and predictive-maintenance models that reduce unscheduled downtime. Work with mechanical engineering data (vibration signatures, thermodynamics metrics, component wear patterns) to translate physics insights into robust ML features and digital twins. Deploy and maintain production models using MLOps best practices: CI/CD pipelines, model versioning, monitoring, and automated retraining triggers. Collaborate with operations, field engineers, and product managers to prioritize use-cases, validate models in real environments, and close the loop with feedback data. Create documentation, run knowledge transfers, and establish testing/validation protocols for ML-driven mechanical diagnostics. Skills & Qualifications Must-Have Bachelor's degree in Mechanical Engineering, Mechatronics, Computer Science, or equivalent practical experience. 3+ years experience building ML models for time-series or sensor data using Python and libraries such as scikit-learn, pandas, NumPy. Hands-on experience with signal processing, feature extraction from vibration/telemetry, and anomaly-detection techniques. 
Proven experience deploying models to production (containerisation, REST APIs, cloud or edge inference) and familiarity with MLOps workflows. Practical mechanical engineering knowledge—failure modes, vibration analysis, basic CAD interpretation—that enables cross-functional collaboration with field teams. Strong version control (Git), unit testing, and data quality/ETL skills; comfortable with SQL and time-series databases.
Preferred
Experience with deep-learning frameworks (TensorFlow or PyTorch) for sequence models or 1D convolutional architectures. Exposure to telematics platforms, CAN/OBD-II data, MQTT, and IoT ingestion pipelines. Familiarity with cloud platforms (AWS/GCP/Azure), Docker/Kubernetes, and monitoring tools (Prometheus/Grafana).
Benefits & Culture Highlights
Fully remote role with flexible hours to support distributed teams across India. Hands-on ownership of high-impact projects that directly reduce fleet costs and improve service reliability. Collaborative environment bridging field engineering and data science—opportunities for cross-skilling and mentorship. To apply: send a concise CV and portfolio or links to relevant projects demonstrating sensor-data ML, predictive maintenance, or deployed ML systems. Candidates who can show real-world impact (downtime reduction, false-positive reduction, cost savings) will be prioritized.
Skills: machine learning, predictive maintenance
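The condition-monitoring pipeline this role describes (windowed features over vibration telemetry, then an anomaly flag) can be sketched with the standard library alone. A simple 3-sigma rule on windowed RMS energy stands in here for the fuller anomaly-detection models the role calls for; the signal and fault injection are simulated:

```python
import math
import statistics

def window_rms(signal: list[float], window: int) -> list[float]:
    """Root-mean-square energy per non-overlapping window -- a basic vibration feature."""
    return [
        math.sqrt(sum(x * x for x in signal[i:i + window]) / window)
        for i in range(0, len(signal) - window + 1, window)
    ]

def flag_anomalies(features: list[float], k: float = 3.0) -> list[int]:
    """Indices of windows whose RMS exceeds mean + k * stdev (3-sigma by default)."""
    mu = statistics.fmean(features)
    sigma = statistics.stdev(features)
    return [i for i, f in enumerate(features) if f > mu + k * sigma]

# Healthy low-amplitude vibration with one high-energy burst (simulated bearing fault)
signal = [0.1 * math.sin(0.3 * t) for t in range(400)]
for t in range(300, 320):
    signal[t] += 1.5  # injected fault
rms = window_rms(signal, 20)
print(flag_anomalies(rms))  # [15] -- the window covering samples 300-319
```

A production system would replace the 3-sigma threshold with a trained detector (e.g. an isolation forest or autoencoder) and add frequency-domain features from an FFT.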
Posted 22 hours ago
0 years
0 Lacs
pune, maharashtra, india
Remote
neoBIM is a well-funded start-up software company revolutionizing the way architects design buildings with our innovative BIM (Building Information Modelling) software. As we continue to grow, we are building a small and talented team of developers to drive our software forward. Tasks We are looking for a highly skilled Generative AI Developer to join our AI team. The ideal candidate should have strong expertise in deep learning, large language models (LLMs), multimodal AI, and generative models (GANs, VAEs, Diffusion Models, or similar techniques). This role offers the opportunity to work on cutting-edge AI solutions, from training models to deploying AI-driven applications that redefine automation and intelligence. Develop, fine-tune, and optimize Generative AI models, including LLMs, GANs, VAEs, Diffusion Models, and Transformer-based architectures. Work with large-scale datasets and design self-supervised or semi-supervised learning pipelines. Implement multimodal AI systems that combine text, images, audio, and structured data. Optimize AI model inference for real-time applications and large-scale deployment. Build AI-driven applications for BIM (Building Information Modelling), content generation, and automation. Collaborate with data scientists, software engineers, and domain experts to integrate AI into production. Stay ahead of AI research trends and incorporate state-of-the-art methodologies. Deploy models using cloud-based ML pipelines (AWS/GCP/Azure) and edge computing solutions. Requirements Must-Have Skills Strong programming skills in Python (PyTorch, TensorFlow, JAX, or equivalent). Experience in training and fine-tuning Large Language Models (LLMs) like GPT, BERT, LLaMA, or Mixtral. Expertise in Generative AI techniques, including Diffusion Models (e.g., Stable Diffusion, DALL-E, Imagen), GANs, VAEs. Hands-on experience with transformer-based architectures (e.g., Vision Transformers, BERT, T5, GPT, etc.). 
Experience with MLOps frameworks for scaling AI applications (Docker, Kubernetes, MLflow, etc.). Proficiency in data preprocessing, feature engineering, and AI pipeline development. Strong background in mathematics, statistics, and optimization related to deep learning. Good-to-Have Skills Experience in NeRFs (Neural Radiance Fields) for 3D generative AI. Knowledge of AI for Architecture, Engineering, and Construction (AEC). Understanding of distributed computing (Ray, Spark, or Tensor Processing Units). Familiarity with AI model compression and inference optimization (ONNX, TensorRT, quantization techniques). Experience in cloud-based AI development (AWS/GCP/Azure). Benefits Work on high-impact AI projects at the cutting edge of Generative AI. Competitive salary with growth opportunities. Access to high-end computing resources for AI training & development. A collaborative, research-driven culture focused on innovation & real-world impact. Flexible work environment with remote options.
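The fine-tuning techniques listed above (LoRA in particular) rest on a simple idea: freeze the full weight matrix W and learn a low-rank update ΔW = (α/r)·B·A, so only r·(d+k) parameters train instead of d·k. A toy standard-library sketch of that forward-pass arithmetic (the matrix sizes and values are illustrative only):

```python
def matmul(a, b):
    """Naive matrix multiply for nested-list matrices."""
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def madd(a, b, scale=1.0):
    """Element-wise a + scale * b."""
    return [[x + scale * y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

d, k, r, alpha = 4, 4, 1, 2.0   # layer dims, LoRA rank, LoRA scaling
W = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(d)]  # frozen weights
B = [[1.0], [0.0], [0.0], [0.0]]   # d x r, trainable
A = [[0.0, 0.5, 0.0, 0.0]]         # r x k, trainable
delta = matmul(B, A)               # d x k low-rank update
W_eff = madd(W, delta, scale=alpha / r)

full_params = d * k                # 16 parameters in W
lora_params = r * (d + k)          # only 8 trainable in B and A
print(W_eff[0], full_params, lora_params)
```

In libraries such as Hugging Face PEFT, `r` and `lora_alpha` are exactly these quantities; the savings become dramatic when d and k are in the thousands.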
Posted 23 hours ago
10.0 years
0 Lacs
karnataka, india
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn’t changed — we’re here to stop breaches, and we’ve redefined modern security with the world’s most advanced AI-native platform. We work on large scale distributed systems, processing almost 3 trillion events per day, and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We’re also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We’re always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you. About The Role As an engineer in this team, you will play an integral role as we build out our ML Platform & GenAI Studio from the ground up. Since the launch of ChatGPT, the number of phishing attacks has increased by 138%, making the ML platform a critical capability for CrowdStrike in its fight against bad actors. For this mission we are building a team in Bangalore. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines that will be used for Data Preparation, Cataloguing, Feature Engineering, Model Training, and Model Serving that influence critical business decisions. You’ll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modelling attack paths for IT assets. 
Location: Bangalore. Candidates must be comfortable visiting the office once a week. What You’ll Do Help design, build and facilitate adoption of a modern ML platform including support for use cases like GenAI Understand current ML workflows, anticipate future needs and identify common patterns and exploit opportunities to templatize into repeatable components for model development, deployment, and monitoring Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation, training and inference pipelines Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines Champion software development best practices around building distributed systems Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment What You’ll Need B.S./M.S. in Computer Science or a related field and 10+ years of related experience; or M.S. with 8+ years of experience; 3+ years of experience developing and deploying machine learning solutions to production. Familiarity with typical machine learning workflows from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised / unsupervised approaches: how, why, and when labelled data is created and used 3+ years of experience with ML Platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, etc. Experience building data platform product(s) or features with (one of) Apache Spark, Flink or comparable tools Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.) Production experience with infrastructure-as-code tools such as Terraform, FluxCD Expert-level experience with Python; Java/Scala exposure is recommended. 
Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal CrowdStrike tools Expert-level experience with containerization frameworks Strong analytical and problem-solving skills, capable of working in a dynamic environment Exceptional interpersonal and communication skills; work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes.
Critical Skills Needed for Role: Distributed Systems Knowledge; Data/ML Platform Experience
Bonus Points (experience with the following is desirable): Go; Iceberg (highly desirable); Pinot or other time-series/OLAP-style database; Jenkins; Parquet; Protocol Buffers/gRPC
Benefits Of Working At CrowdStrike Remote-friendly and flexible work culture Market leader in compensation and equity awards Comprehensive physical and mental wellness programs Competitive vacation and holidays for recharge Paid parental and adoption leaves Professional development opportunities for all employees regardless of level or role Employee Networks, geographic neighborhood groups, and volunteer opportunities to build connections Vibrant office culture with world class amenities Great Place to Work Certified™ across the globe CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. 
The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs--on valid job requirements. If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
Posted 23 hours ago
10.0 years
0 Lacs
india
On-site
Company Description 👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in! Job Description REQUIREMENTS: Total experience 10+ years. Should have experience architecting and designing scalable, autonomous AI systems with a focus on Generative AI, agentic frameworks, and foundation models. Should be able to lead the evaluation and integration of open-source and proprietary LLMs (e.g., GPT, Claude, LLaMA, Mistral). Should be able to define architectures for autonomous agents, multimodal models (text, image, audio), Retrieval-Augmented Generation (RAG) systems, and natural language to SQL pipelines. Should be able to collaborate with product, research, and engineering teams to align AI initiatives with business strategy. Should have hands-on experience with frameworks such as Hugging Face Transformers, LangChain, OpenAI API, or similar. Deep understanding of agent orchestration frameworks (e.g., AutoGen, CrewAI, LangGraph), LLM fine-tuning (e.g., LoRA, PEFT), RAG pipelines, and vector databases (e.g., FAISS, Pinecone). Should be able to design solutions leveraging cloud-native architecture (AWS, GCP, Azure), with emphasis on GPU infrastructure. Must have experience developing modular, goal-driven AI systems capable of task planning, memory persistence, context tracking, and multi-agent collaboration. Should have experience building autonomous AI agents using frameworks such as AutoGPT, LangGraph, CrewAI, or ReAct. Strong understanding of task decomposition, memory management, and tool augmentation in agentic workflows. 
Should have familiarity with long-context memory techniques and multi-agent coordination strategies. Must have hands-on experience with agent simulations, LLM-driven planning, and dynamic tool selection. Should have knowledge of how to embed feedback, reasoning loops (e.g., Reflexion, Chain-of-Thought), and self-correction mechanisms into GenAI agents. Must have experience with human-in-the-loop (HITL) oversight and reinforcement learning for autonomous systems. Exhibit excellent communication, leadership, and collaboration skills. RESPONSIBILITIES: Understanding the client’s business use cases and technical requirements, and converting them into a technical design that elegantly meets the requirements. Must be able to establish best practices for fine-tuning, prompt engineering, model compression, and inference optimization. Should be able to implement robust MLOps practices for model versioning, deployment, monitoring, and governance. Address ethical AI concerns including bias, hallucination, and data privacy in generative applications. Provide technical leadership in building and deploying generative models (LLMs, diffusion models, transformers, etc.). Ensure the responsible deployment of autonomous agents through safety layering, feedback loops, and auditability mechanisms. Mapping decisions to requirements and translating them for developers. Identifying different solutions and being able to narrow down the best option that meets the clients’ requirements. Defining guidelines and benchmarks for NFR considerations during project implementation. Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers. Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed. 
Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it. Understanding and relating technology integration scenarios and applying these learnings in projects. Resolving issues raised during code review through exhaustive, systematic root-cause analysis, and being able to justify the decisions taken. Carrying out POCs to make sure that the suggested design/technologies meet the requirements. Qualifications Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Additional Information https://www.youtube.com/watch?v=QLpJ69AJ7As
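The agentic requirements above (task decomposition, tool augmentation, memory, bounded reasoning loops) can be pictured with a minimal, library-free sketch. The `stub_planner` below is a hypothetical stand-in for the LLM call that a framework such as AutoGen, LangGraph, or CrewAI would make; only the control flow is representative.

```python
# Toy sketch of a tool-augmented agent loop (ReAct-style control flow only).
# stub_planner is a hypothetical stand-in for an LLM deciding the next action.

from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "echo": lambda text: text,
}

def stub_planner(goal: str, history: list[str]) -> tuple[str, str]:
    """Decide the next (action, argument); 'finish' ends the loop."""
    if not history:                       # first step: route the goal to a tool
        return ("calculator", goal)
    return ("finish", history[-1])        # one observation suffices in this toy

def run_agent(goal: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):            # bounded loop guards against runaways
        action, arg = stub_planner(goal, history)
        if action == "finish":
            return arg
        observation = TOOLS[action](arg)  # tool augmentation
        history.append(observation)       # memory: feed the observation back
    return history[-1]

print(run_agent("2 + 3 * 4"))  # prints 14
```

Real agentic systems replace the stub with model-emitted actions and add persistent memory and self-correction; the bounded step budget and observation feedback shown here carry over unchanged.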
Posted 1 day ago
0 years
0 Lacs
india
Remote
Join Proximity Works, one of the world's most ambitious AI technology companies, shaping the future of Sports, Media, and Entertainment. Since 2019, Proximity Works has created and scaled AI-driven products used by 697 million daily users, generating $73.5 billion in enterprise value for our partners. With headquarters in San Francisco and offices in Los Angeles, Dubai, Mumbai, and Bangalore, we partner with some of the biggest global brands to solve complex problems with cutting-edge AI. We are looking for a Senior Data Scientist with deep expertise in large language models (LLMs), retrieval-augmented generation (RAG), and multimodal learning to shape the next generation of intelligent, scalable, and reliable search systems. Role Summary This is a hands-on applied science role at the frontier of AI. You will design, fine-tune, and optimize large-scale language and multimodal models, productionize retrieval-augmented pipelines, and define robust evaluation frameworks. You will work closely with engineering and product teams to build systems that combine language, vision, and retrieval modalities, delivering real-world, user-facing AI applications at scale. What You'll Do Design, fine-tune, and optimize LLMs for applied multimodal generation use cases. Build and productionize RAG pipelines that combine embedding-based search, metadata filtering, and LLM-driven re-ranking/summarization. Apply prompt engineering, RAG techniques, and model distillation to improve grounding, reduce hallucinations, and ensure output reliability. Define and implement evaluation metrics across semantic search (nDCG, Recall@K, MRR) and generation quality (grounding accuracy, hallucination rate). Optimize inference pipelines for latency-sensitive use cases with strategies like token budgeting, prompt compression, and sub-100ms response targets. Train and adapt models via transfer learning, LoRA/QLoRA, and checkpoint reloading, ensuring robust deployment in production environments. 
Collaborate with product and research teams to explore innovative multimodal integrations for user-facing applications. What Success Looks Like Deployment of production-ready LLM + RAG pipelines powering global-scale search and discovery applications. Demonstrable improvements in grounding accuracy and hallucination reduction across deployed systems. Consistent delivery of sub-100ms inference latency for generation workloads. Adoption of rigorous evaluation metrics that drive continuous model improvement. Effective cross-functional collaboration with engineering, product, and research teams. What You'll Need Strong background in NLP, machine learning, and multimodal AI. Proven hands-on experience in LLM fine-tuning, RAG, distillation, and evaluation of foundation models. Expertise in semantic search and retrieval pipelines (e.g., FAISS, Weaviate, Vespa, Pinecone). Demonstrated ability to deploy models at scale, including distributed inference setups. Solid understanding of evaluation frameworks for ranking, retrieval, and generation. Proficiency in Python, PyTorch/TensorFlow, and modern ML toolkits. Experience in multimodal AI (bridging text, vision, or speech with LLMs). Track record of shipping latency-sensitive AI products. Strong communication skills and the ability to collaborate with cross-functional global teams. Success Traits Research-to-production mindset – thrives in moving cutting-edge methods into real-world systems. High ownership – proactive in identifying performance bottlenecks and solving them. Analytical rigor – strong in defining and applying evaluation metrics. Curiosity & creativity – eager to push the boundaries of LLMs, RAG, and multimodal AI. Collaborative spirit – effective working across global engineering and product teams. Why Join Proximity Works Work directly on frontier AI problems shaping the future of search, sports, streaming, and media. Be part of a global-first, high-performance AI research and engineering culture. 
Competitive compensation aligned with global markets, with remote-first flexibility. Annual global off-sites with Proxonauts from San Francisco, Dubai, India, and beyond. High autonomy, direct accountability, and the opportunity to ship multimodal AI systems at global scale.
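The retrieval metrics this role names (Recall@K, MRR, nDCG) are standard and compact enough to sketch directly; a minimal pure-Python version, with no particular evaluation library assumed:

```python
import math

def recall_at_k(relevant: set, ranked: list, k: int) -> float:
    """Fraction of the relevant set that appears in the top-k results."""
    return len(relevant & set(ranked[:k])) / len(relevant)

def mrr(relevant: set, ranked: list) -> float:
    """Reciprocal rank of the first relevant hit (0.0 if none is found)."""
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0

def ndcg_at_k(gains: dict, ranked: list, k: int) -> float:
    """nDCG over graded relevance labels in `gains` (missing docs score 0)."""
    dcg = sum(gains.get(doc, 0) / math.log2(rank + 1)
              for rank, doc in enumerate(ranked[:k], start=1))
    ideal = sum(g / math.log2(rank + 1)
                for rank, g in enumerate(sorted(gains.values(), reverse=True)[:k],
                                         start=1))
    return dcg / ideal if ideal else 0.0

ranked = ["doc3", "doc1", "doc2", "doc9"]
print(recall_at_k({"doc1", "doc2"}, ranked, k=3))  # prints 1.0
print(mrr({"doc1", "doc2"}, ranked))               # prints 0.5
```

In practice these would run over held-out query sets; libraries such as scikit-learn offer equivalent implementations, but the definitions above are what the acronyms expand to.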
Posted 1 day ago
6.0 years
0 Lacs
india
Remote
Join Proximity Works, one of the world's most ambitious AI technology companies, shaping the future of Sports, Media, and Entertainment. Since 2019, Proximity Works has created and scaled AI-driven products used by 697 million daily users, generating $73.5 billion in enterprise value for our partners. Headquartered in San Francisco with offices in Los Angeles, Dubai, Mumbai, and Bangalore, we help global brands discover high-impact AI use cases, build transformative tech stacks, and scale to hundreds of millions of users. If you're excited about building high-performance backend systems at the frontier of AI, this role will give you the opportunity to make global impact. Role Summary We are seeking a Backend Engineer to design, build, and scale resilient microservices and APIs that power next-generation AI products. You will partner closely with ML engineers and data scientists to productionize LLMs, RAG pipelines, and multimodal models, ensuring inference is fast, cost-efficient, and production-grade. This is a hands-on role for someone passionate about distributed systems, performance optimization, and bringing cutting-edge AI to millions of users. What You'll Do Design and build scalable microservices that power Proximity's AI-driven search and discovery stack. Develop backend services and APIs to support LLM-powered applications. Collaborate with ML engineers and data scientists to integrate RAG pipelines, multimodal models, and inference workloads into production. Optimize inference pipelines for latency, throughput, and cost efficiency (e.g., batching, caching, token budgeting). Own end-to-end delivery of complex backend projects, from design to deployment and monitoring. Write high-quality, maintainable code with rigorous testing and fault-tolerant practices. Drive operational excellence through performance tuning, incident response, and root cause analysis. 
Work cross-functionally with Product Managers, Data Scientists, and global engineering teams to translate business needs into scalable technical solutions. What Success Looks Like Robust, resilient backend systems powering AI-driven applications for Proximity's global partners. Consistent reduction in inference latency and infrastructure costs. High availability and fault tolerance across production services. Rapid, collaborative feature delivery with product and ML teams. Clear documentation and monitoring practices that ensure operational smoothness. What You'll Need Bachelor's or Master's degree in Computer Science or a related field. 4–6 years of backend development experience, ideally with exposure to AI or large-scale data systems. Proficiency in Java, Golang, or Python with strong coding and system design fundamentals. Experience designing and scaling distributed systems at production scale. Exposure to LLM inference setups (e.g., vLLM, Hugging Face Inference, Triton). Strong debugging, profiling, and performance tuning skills for latency-sensitive applications. Knowledge of storage systems, query optimization, and caching strategies. Hands-on experience with AWS (preferred), Kafka, and CI/CD pipelines. Ability to work autonomously and deliver in fast-paced environments. Passion for mentoring engineers and leading by example. Curiosity about ad-tech and search systems, and how to optimize them for user and business outcomes. Success Traits Builder's mindset · High ownership · Analytical clarity · Collaborative spirit · Global mindset · Growth orientation Why Join Proximity Works Work directly on frontier AI problems with some of the world's largest sports, media, and entertainment brands. Be part of a global-first, high-performance engineering culture. Competitive compensation aligned with global markets, with remote-first flexibility. Annual global off-sites with Proxonauts from San Francisco, Dubai, India, and beyond. 
High autonomy, direct accountability, and the opportunity to ship AI systems at scale.
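Of the inference optimizations this role lists (batching, caching, token budgeting), micro-batching is the easiest to sketch. The snippet below shows only the control flow: the model call is a stub, and in a real deployment the batch would be handed to a runtime such as vLLM or Triton, usually behind an async server rather than this synchronous toy.

```python
# Minimal illustration of server-side micro-batching for LLM inference:
# requests accumulate until the batch fills or a deadline passes, then run
# through the model in one call, amortizing per-request overhead.

import time
from collections import deque

class MicroBatcher:
    def __init__(self, max_batch: int = 8, max_wait_s: float = 0.01):
        self.max_batch = max_batch
        self.max_wait_s = max_wait_s
        self.pending: deque[str] = deque()

    def submit(self, prompt: str) -> None:
        self.pending.append(prompt)

    def drain(self, model) -> list[str]:
        """Run one batched forward pass over whatever is queued."""
        deadline = time.monotonic() + self.max_wait_s
        while len(self.pending) < self.max_batch and time.monotonic() < deadline:
            time.sleep(0.001)             # wait briefly for more arrivals
        size = min(self.max_batch, len(self.pending))
        batch = [self.pending.popleft() for _ in range(size)]
        return model(batch)               # one call instead of len(batch) calls

batcher = MicroBatcher(max_batch=4)
for p in ["a", "b", "c"]:
    batcher.submit(p)
outputs = batcher.drain(lambda batch: [s.upper() for s in batch])
print(outputs)  # prints ['A', 'B', 'C']
```

The `max_batch`/`max_wait_s` trade-off is the core tuning decision: larger batches raise throughput, while the wait deadline caps the latency cost of filling them.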
Posted 1 day ago
6.0 years
0 Lacs
hyderabad, telangana, india
On-site
Job Description We have an exciting and rewarding opportunity for you to take your career to the next level. As an Applied AI ML Sr Associate at JPMorgan Chase within the Corporate Sector – AIML Data Platforms, you will lead a specialized technical area, driving impact across teams, technologies, and projects. In this role, you will leverage your deep knowledge of machine learning, software engineering, and product management to spearhead multiple complex ML projects and initiatives, serving as the primary decision-maker and a catalyst for innovation and solution delivery. You will be responsible for hiring, leading, and mentoring a team of Machine Learning and Software Engineers, focusing on best practices in ML engineering, with the goal of elevating team performance to produce high-quality, scalable ML solutions with operational excellence. You will engage deeply in technical aspects, reviewing code, mentoring engineers, troubleshooting production ML applications, and enabling new ideas through rapid prototyping. Your passion for parallel distributed computing, big data, cloud engineering, micro-services, automation, and operational excellence will be key. Job Responsibilities Architect and implement distributed AI/ML infrastructure, including inference, training, scheduling, orchestration, and storage. Integrate Generative AI and Classical AI within the ML Platform using state-of-the-art techniques. Implement, deliver, and support high-quality ML solutions in partnership with a team of ML Engineers. Collaborate with product teams to deliver tailored, AI/ML-driven technology solutions. Develop advanced monitoring and management tools for high reliability and scalability in AI/ML systems. Optimize AI/ML system performance by identifying and resolving inefficiencies and bottlenecks. Drive the adoption and execution of AI/ML Platform tools across various teams. 
Lead the entire AI/ML product life cycle through planning, execution, and future development by continuously adapting, developing new AI/ML products and methodologies, managing risks, and achieving business targets like cost, features, reusability, and reliability to support growth. Required Qualifications, Capabilities, And Skills Bachelor's degree or equivalent practical experience in a related field. 6+ years of experience in engineering management with a strong technical background in machine learning. Extensive hands-on experience with AI/ML frameworks (TensorFlow, PyTorch, JAX, scikit-learn). Deep expertise in Cloud Engineering (AWS, Azure, GCP) and Distributed Micro-service architecture. Experienced with Kubernetes ecosystem, including EKS, Helm, and custom operators. Background in High Performance Computing, ML Hardware Acceleration (e.g., GPU, TPU, RDMA), or ML for Systems. Strategic thinker with the ability to craft and drive a technical vision for maximum business impact. Preferred Qualifications, Capabilities, And Skills Strong coding skills and experience in developing large-scale AI/ML systems. Proven track record in contributing to and optimizing open-source ML frameworks. Understanding & experience of AI/ML Platforms, LLMs, GenAI, and AI Agents. About Us JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. 
We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. About The Team Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.
Posted 1 day ago
12.0 years
0 Lacs
greater hyderabad area
On-site
Principal Staff Verification Engineer (VLSI Verification + AV + AI Expertise) Founded by highly respected Silicon Valley veterans, with design centers established in Santa Clara, California; Hyderabad; and Bangalore. Our pay comprehensively beats "ALL" Semiconductor product players in the Indian market. Job Description – Staff Verification Engineer (VLSI Verification + AV + AI Expertise) Position: Staff Verification Engineer – VLSI Verification Lead Location: Hyderabad Experience: 12+ years in Functional Verification Key Protocol Experience: MIPI DSI, DisplayPort, HDMI Role Overview We are seeking a highly skilled Staff Verification Engineer with strong expertise in VLSI functional verification and a good understanding of AI model deployment for Audio/Video applications. The candidate will lead verification efforts for complex SoCs/IPs, while also collaborating with cross-functional teams on next-generation multimedia and AI-driven system use cases. Requirements Experience: 12+ years in functional verification; minimum 5+ years in the Multimedia (Display, Camera, Video, Graphics) domain. Domain Expertise: Strong knowledge in Display (pixel processing, composition, compression, MIPI DSI, DisplayPort, HDMI) and Bus/Interconnect (AHB, AXI). Multimedia technologies: Audio/Video codecs, Image Processing, SoC system use cases (Display, Camera, Video, Graphics). Good understanding of DSP, codecs (audio/video), and real-time streaming pipelines. AI accelerators: architecture understanding, verification, and deployment experience across NPUs, GPUs, and custom AI engines. SoC system-level verification with embedded RISC/DSP processors. AI/ML Skills: Experience with AI models (e.g., CNNs) and statistical modeling techniques. Exposure to audio frameworks, audio solutions, and embedded platforms. Hands-on in multimedia use-case verification and system-level scenarios. Strong exposure to MIPI DSI-2, CSI-2, MIPI D-PHY, C-PHY. 
Verification Expertise: Proven expertise in developing/maintaining SystemVerilog/UVM-based testbenches, UVCs, sequences, checkers, coverage models. Strong understanding of OOP concepts in verification. HVL: SystemVerilog (UVM), SystemC (preferred). HDL: Verilog, SystemVerilog. Leadership & Collaboration: Mentor and guide junior verification engineers; drive closure for IP and SoC-level deliverables. Strong written and verbal communication skills; ability to convey complex technical concepts. Proven ability to plan, prioritize, and execute effectively. Debugging & Architecture Knowledge: Excellent debug skills across SoC architecture, VIP integration, and verification flows. Responsibilities AI & Multimedia (AV) Responsibilities Develop, optimize, and deploy AI models for audio and video applications, with strong focus on inference efficiency and performance optimization across NPUs, GPUs, and CPUs. Perform model evaluation, quantization, and compression to enable fast and robust inference on embedded hardware. Collaborate with cross-functional R&D, systems, and integration teams for system use case verification and commercialization support. Evaluate system performance, debug, and optimize for robustness and efficiency. Participate in industry benchmarking and trend analysis; introduce state-of-the-art architectural and technical innovations. ASIC / SoC Verification Responsibilities Lead and contribute to feature, core, and subsystem verification during ASIC design and development phases through RTL and Gate-Level simulations. Collaborate with the design team to define verification requirements, ensuring functional, performance, and power correctness. Develop and execute comprehensive test plans and drive verification closure. Create and maintain SystemVerilog/UVM testbenches, assertions, and functional coverage models. Implement and enhance automation flows to improve verification efficiency. Participate in debug activities throughout the development cycle. 
Apply ASIC expertise to define, model, optimize, verify, and validate IP (block/SoC) development for high-performance, low-power products. Collaborate with software and hardware architecture teams to develop strategies meeting system-level requirements. Evaluate complete design flows from RTL through synthesis, place-and-route, timing, and power usage. Write detailed technical documentation for verification methodologies, flows, and deliverables. Contact: Uday Bhaskar Mulya Technologies "Mining the Knowledge Community" Email id : muday_bhaskar@yahoo.com
Posted 1 day ago
3.0 years
0 Lacs
gurugram, haryana, india
On-site
Roles and Responsibilities Build and maintain scalable, fault-tolerant data pipelines to support GenAI and analytics workloads across OCR, documents, and case data. Manage ingestion and transformation of semi-structured legal documents (PDF, Word, Excel) into structured formats. Enable RAG workflows by processing data into chunked, vectorized formats with metadata. Handle large-scale ingestion from multiple sources into cloud-native data lakes (S3, GCS), data warehouses (BigQuery, Snowflake), and PostgreSQL. Automate pipelines using orchestration tools like Airflow/Prefect, including retry logic, alerting, and metadata tracking. Collaborate with ML Engineers to ensure data availability, traceability, and performance for inference and training pipelines. Implement data validation and testing frameworks using Great Expectations or dbt. Integrate OCR pipelines and post-processing outputs for embedding and document search. Design infrastructure for streaming vs. batch data needs and optimize for cost, latency, and reliability. Qualifications Bachelor's or Master's degree in Computer Science, Data Engineering, or equivalent. 3+ years of experience in building distributed data pipelines and managing multi-source ingestion. Proficiency with Python, SQL, and data tools like Pandas and PySpark. Experience working with data orchestration tools (Airflow, Prefect) and file formats like Parquet, Avro, and JSON. Hands-on experience with cloud storage/data warehouse systems (S3, GCS, BigQuery, Redshift). Understanding of GenAI and vector database ingestion pipelines is a strong plus. Bonus: Experience with OCR tools (Tesseract, Google Document AI), PDF parsing libraries (PyMuPDF), and API-based document processors.
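The "chunked, vectorized formats with metadata" step that precedes embedding in a RAG ingestion pipeline can be sketched in a few lines. The chunk sizes and field names below are illustrative assumptions, not a fixed standard; production splitters (e.g., in LangChain) add token-aware and structure-aware variants.

```python
# Sketch of document chunking for RAG ingestion: split text into overlapping
# windows and attach the metadata (source, position) that retrieval later
# needs for filtering and citation. Sizes and field names are illustrative.

def chunk_document(text: str, source: str, chunk_size: int = 500, overlap: int = 50):
    chunks = []
    step = chunk_size - overlap           # overlap preserves context at chunk edges
    for i, start in enumerate(range(0, max(len(text), 1), step)):
        body = text[start:start + chunk_size]
        if not body:
            break
        chunks.append({
            "chunk_id": f"{source}#{i}",
            "text": body,                 # this field is what gets embedded
            "source": source,             # enables metadata filtering at query time
            "char_start": start,          # lets answers cite an exact location
        })
    return chunks

chunks = chunk_document("x" * 1200, source="case-042.pdf")
print([c["chunk_id"] for c in chunks])    # three chunks: case-042.pdf#0..#2
```

Each record would then be embedded and upserted into the vector store, with `source` and `char_start` carried along as filterable metadata.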
Posted 1 day ago
5.0 years
0 Lacs
bengaluru, karnataka, india
On-site
WEKA is architecting a new approach to the enterprise data stack built for the age of reasoning. NeuralMesh by WEKA sets the standard for agentic AI data infrastructure with a cloud and AI-native software solution that can be deployed anywhere. It transforms legacy data silos into data pipelines that dramatically increase GPU utilization and make AI model training and inference, machine learning, and other compute-intensive workloads run faster, work more efficiently, and consume less energy. WEKA is a pre-IPO, growth-stage company on a hyper-growth trajectory. We’ve raised $375M in capital with dozens of world-class venture capital and strategic investors. We help the world’s largest and most innovative enterprises and research organizations, including 12 of the Fortune 50, achieve discoveries, insights, and business outcomes faster and more sustainably. We’re passionate about solving our customers’ most complex data challenges to accelerate intelligent innovation and business value. If you share our passion, we invite you to join us on this exciting journey. What You'll Be Doing Develop Internal Tools: Create and maintain tools that assist in monitoring, diagnostics, and troubleshooting of WEKA's data platform. Enhance Serviceability: Implement features that improve the system's ease of maintenance, such as detailed logging, health checks, and alerting mechanisms. Collaborate with Teams: Work closely with Customer Success and R&D teams to understand their needs and translate them into effective tooling solutions. Automate Processes: Identify repetitive tasks and develop automation scripts or tools to streamline operations and reduce manual effort. Ensure Reliability: Design tools that contribute to the overall reliability and stability of the platform, facilitating quick issue resolution. 
Define Tooling Roadmap: Assess the existing tools portfolio, define strategies for effective packaging and consumption of the tools in question, and comprehensively outline the development roadmap. Build and maintain tools using transparent, scalable development practices with GitHub for version control, peer review, and issue tracking. Problem-Solving: Ability to diagnose issues and develop solutions that enhance system supportability. Operate as a self-starter, driving projects forward independently while managing priorities and delivering results with minimal oversight. Able to collaborate with part-time contributors and build consensus. Provide regular updates to management regarding the status of tools development. Participate in proactive initiatives that help ensure the best WEKA experience. Collaborate with cross-functional teams to address and resolve customer concerns. Collaborate with management to develop and implement strategies for preventing future critical issues. Requirements Bachelor's degree or equivalent work experience. Programming Proficiency: Strong skills in languages such as Python, Go, or JavaScript for tool development is a must. Experience working with DevOps tools and methodologies is required. 5+ years working in industry in DevOps/Scripting/Tools Development roles. Experience working with GitHub or equivalent source code management platforms. Technical background and experience with as many of the following technologies as possible: Networking, Storage, Linux, Public Clouds. Excellent communication and interpersonal skills. Ability to thrive in a fast-paced and dynamic environment with a focus on transparency. Strong sense of ownership and rigorous follow-up mentality. Proven ability to gather feedback and requirements from internal stakeholders, ensuring tools align with team needs through regular communication, collaboration, and iterative development. 
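The monitoring and health-check tooling described above typically reduces to a small pattern: run named probes, collect pass/fail plus detail, and surface failures for alerting. A minimal sketch in Python, where the probe names and the 10% free-disk threshold are made-up examples:

```python
# Minimal shape of an internal health-check tool: each probe returns
# (ok, detail); the runner aggregates results and never lets one crashing
# probe hide the others. Threshold and probe names are illustrative only.

import shutil

def check_disk(path: str = "/", min_free_frac: float = 0.10):
    """Probe: flag the filesystem when free space drops below a threshold."""
    usage = shutil.disk_usage(path)
    free_frac = usage.free / usage.total
    return free_frac >= min_free_frac, f"{free_frac:.0%} free"

def run_health_checks(probes: dict) -> dict:
    """Execute every probe and collect a structured report for alerting."""
    report = {}
    for name, probe in probes.items():
        try:
            ok, detail = probe()
            report[name] = {"ok": ok, "detail": detail}
        except Exception as exc:          # a crashing probe is itself a failure
            report[name] = {"ok": False, "detail": repr(exc)}
    return report

report = run_health_checks({"disk": check_disk, "always_ok": lambda: (True, "up")})
failing = [name for name, r in report.items() if not r["ok"]]
```

From here, `failing` is what an alerting hook (pager, Slack webhook, log line) would consume, and new probes are added by registering another callable.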
It's Nice If You Have
- Experience working as a Technical Support Engineer or Customer Success Engineer (highly preferred).
- Experience creating tools focused on technical analysis and supportability initiatives for Customer Success organizations in the IT industry (highly desirable).
- Proven ability to define tooling requirements and create roadmaps aligned with business priorities (a big plus).
- Experience with and/or understanding of AI terminologies and tools (a plus).

The WEKA Way
- We are Accountable: We take full ownership, always, even when things don’t go as planned. We lead with integrity, show up with responsibility and ownership, and hold ourselves and each other to the highest standards.
- We are Brave: We question the status quo, push boundaries, and take smart risks when needed. We welcome challenges and embrace debates as opportunities for growth, turning courage into fuel for innovation.
- We are Collaborative: True collaboration isn’t only about working together. It’s about lifting one another up to succeed collectively. We are team-oriented and communicate with empathy and respect. We challenge each other, practice positive conflict resolution, and are transparent about our goals and results. And together, we’re unstoppable.
- We are Customer Centric: Our customers are at the heart of everything we do. We actively listen and prioritize the success of our customers, and every decision we make is driven by how we can better serve, support, and empower them to succeed. When our customers win, we win.

Concerned that you don’t meet every qualification above? Studies have shown that women and people of color may be less likely to apply for jobs if they don’t meet every qualification specified. At WEKA, we are committed to building a diverse, inclusive and authentic workplace.
If you are excited about this position but are concerned that your past work experience doesn’t match up perfectly with the job description, we encourage you to apply anyway – you may be just the right candidate for this or other roles at WEKA. WEKA is an equal opportunity employer that prohibits discrimination and harassment of any kind. We provide equal opportunities to all employees and applicants for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.
Posted 1 day ago
6.0 years
0 Lacs
hyderābād
On-site
JOB DESCRIPTION We have an exciting and rewarding opportunity for you to take your career to the next level.

As an Applied AI ML Sr Associate at JPMorgan Chase within the Corporate Sector – AIML Data Platforms, you will lead a specialized technical area, driving impact across teams, technologies, and projects. In this role, you will leverage your deep knowledge of machine learning, software engineering, and product management to spearhead multiple complex ML projects and initiatives, serving as the primary decision-maker and a catalyst for innovation and solution delivery. You will be responsible for hiring, leading, and mentoring a team of Machine Learning and Software Engineers, focusing on best practices in ML engineering, with the goal of elevating team performance to produce high-quality, scalable ML solutions with operational excellence. You will engage deeply in technical aspects, reviewing code, mentoring engineers, troubleshooting production ML applications, and enabling new ideas through rapid prototyping. Your passion for parallel distributed computing, big data, cloud engineering, micro-services, automation, and operational excellence will be key.

Job Responsibilities
- Architect and implement distributed AI/ML infrastructure, including inference, training, scheduling, orchestration, and storage.
- Integrate Generative AI and Classical AI within the ML Platform using state-of-the-art techniques.
- Implement, deliver, and support high-quality ML solutions in partnership with a team of ML Engineers.
- Collaborate with product teams to deliver tailored, AI/ML-driven technology solutions.
- Develop advanced monitoring and management tools for high reliability and scalability in AI/ML systems.
- Optimize AI/ML system performance by identifying and resolving inefficiencies and bottlenecks.
- Drive the adoption and execution of AI/ML Platform tools across various teams.
- Lead the entire AI/ML product life cycle through planning, execution, and future development by continuously adapting, developing new AI/ML products and methodologies, managing risks, and achieving business targets like cost, features, reusability, and reliability to support growth.

Required qualifications, capabilities, and skills
- Bachelor's degree or equivalent practical experience in a related field.
- 6+ years of experience in engineering management with a strong technical background in machine learning.
- Extensive hands-on experience with AI/ML frameworks (TensorFlow, PyTorch, JAX, scikit-learn).
- Deep expertise in cloud engineering (AWS, Azure, GCP) and distributed micro-service architecture.
- Experience with the Kubernetes ecosystem, including EKS, Helm, and custom operators.
- Background in High Performance Computing, ML hardware acceleration (e.g., GPU, TPU, RDMA), or ML for Systems.
- Strategic thinker with the ability to craft and drive a technical vision for maximum business impact.

Preferred qualifications, capabilities, and skills
- Strong coding skills and experience in developing large-scale AI/ML systems.
- Proven track record in contributing to and optimizing open-source ML frameworks.
- Understanding of and experience with AI/ML platforms, LLMs, GenAI, and AI agents.

ABOUT US JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company.
We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. ABOUT THE TEAM Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.
Posted 1 day ago
5.0 years
0 Lacs
hyderabad, telangana, india
On-site
Job Description We are seeking a talented Computer Vision Engineer with strong expertise in microservice deployment architecture to join our team. In this role, you will be responsible for developing and deploying computer vision models to analyze retail surveillance footage for use cases such as theft detection, employee efficiency monitoring, and store traffic analysis. You will work on designing and implementing scalable, cloud-based microservices to deliver real-time and post-event analytics for retail.

Responsibilities
- Develop computer vision models: Build, train, and optimize deep learning models to analyze surveillance footage for detecting theft, monitoring employee productivity, tracking store busy hours, and other relevant use cases.
- Microservice architecture: Design and deploy scalable microservice-based solutions that allow seamless integration of computer vision models into cloud or on-premise environments.
- Data processing pipelines: Develop data pipelines to process real-time and batch video data streams, ensuring efficient extraction, transformation, and loading (ETL) of video data.
- Integrate with existing systems: Collaborate with backend and frontend engineers to integrate computer vision services with existing retail systems such as POS, inventory management, and employee scheduling.
- Performance optimization: Fine-tune models for high accuracy and real-time inference on edge devices or cloud infrastructure, optimizing for latency, power consumption, and resource constraints.
- Monitor and improve: Continuously monitor model performance in production environments, identify potential issues, and implement improvements to accuracy and efficiency.
- Security and privacy: Ensure compliance with industry standards for security and data privacy, particularly regarding the handling of video footage and sensitive data.

Skills & Requirements
- 5+ years of proven experience in computer vision, including object detection, action recognition, and multi-object tracking, preferably in retail or surveillance applications.
- Hands-on experience with microservices deployment on cloud platforms (e.g., AWS, GCP, Azure) using Docker, Kubernetes, or similar technologies.
- Experience with real-time video analytics, including working with large-scale video data and camera streams.
- Proficiency in programming languages like Python, C++, or Java.
- Expertise in deep learning frameworks (e.g., TensorFlow, PyTorch, Keras) for developing computer vision models.
- Strong understanding of microservice architecture, REST APIs, and serverless computing.
- Knowledge of database systems (SQL, NoSQL), message queues (Kafka, RabbitMQ), and container orchestration (Kubernetes).
- Familiarity with edge computing and hardware acceleration (e.g., GPUs, TPUs) for running inference on embedded devices.

Qualifications
- Experience with deploying models to edge devices (NVIDIA Jetson, Coral, etc.).
- Understanding of retail operations and common challenges in surveillance.
- Knowledge of data privacy regulations such as GDPR.
- Strong analytical and problem-solving skills.
- Ability to work independently and in cross-functional teams.
- Excellent communication skills to convey technical concepts to non-technical stakeholders. (ref:hirist.tech)
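As an illustration of the kind of real-time video-processing pipeline this role describes (not any employer's actual code), here is a minimal sketch of a bounded producer/consumer pipeline: a capture thread pushes frames onto a queue while a worker drains it and runs a stubbed detector. The names (`Frame`, `detect_objects`, `run_pipeline`) and the stand-in detection logic are all hypothetical.

```python
# Hedged sketch: a bounded frame-processing pipeline, with a stubbed detector
# standing in for a real object-detection model.
import queue
import threading
from dataclasses import dataclass


@dataclass
class Frame:
    camera_id: str
    index: int


def detect_objects(frame: Frame) -> list:
    # Stand-in for a real model call; alternates labels deterministically.
    return ["person"] if frame.index % 2 == 0 else []


def run_pipeline(num_frames: int) -> list:
    frames: queue.Queue = queue.Queue(maxsize=8)  # bounded queue applies backpressure
    results = []

    def producer():
        for i in range(num_frames):
            frames.put(Frame(camera_id="cam-1", index=i))
        frames.put(None)  # sentinel: end of stream

    def consumer():
        while True:
            frame = frames.get()
            if frame is None:
                break
            results.append((frame.index, detect_objects(frame)))

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results


if __name__ == "__main__":
    print(run_pipeline(4))
```

In a production system the stub would be replaced by a model server call and the queue by a broker such as Kafka or RabbitMQ, but the backpressure-and-sentinel structure is the same.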
Posted 2 days ago
1.0 - 3.0 years
0 Lacs
bengaluru, karnataka, india
Remote
AI is the strategic bet for Microsoft, and the Azure CoreAI Platform is right at the forefront. Our projects involve cutting-edge technologies and problems. By joining, you will gain knowledge and experience in generative AI, large language models, transformers, GPU optimization, AI applications, and more. The AI business is also growing rapidly, and our products directly serve a large and continuously growing number of customers. Within the AI Platform, the CoreAI team enables data scientists and developers to quickly and easily build, train, deploy, manage, and consume machine learning models. The Core AI MaaS&RP team is hiring Software Engineers. The team designs, builds, and operates the largest-scale engineering system in the industry for Large Language Models and GenAI services.

Key areas that we are working on
- Building and scaling our inferencing cloud across our flagship AOAI Service and the growing Model-as-a-Service model families.
- Expanding the GenAI model offerings to more model providers and the best and latest models, and providing a unified experience across AOAI and MAAS/MAAP for Azure AI customers.

Responsibilities
- Collaborate with appropriate stakeholders (e.g., project manager, technical lead) to determine user requirements.
- Lead discussions on the architecture of products/solutions and create proposals for architecture by testing design hypotheses.
- Lead by example within the team by producing extensible and maintainable code.
- Apply debugging tools and examine logs to verify assumptions through writing and developing code.
- Respond to, resolve, and integrate customer feedback with agility and dedication.
- Take ownership of key parts of the LLM platform, deliver end-to-end solutions alongside product stakeholders, and provide operational support.
Qualifications

Required:
- Bachelor's degree in Computer Science or a related technical discipline and 1-3 years of technical engineering experience with coding in languages including, but not limited to, Python, C#, Java, or JavaScript, or equivalent experience.

Preferred:
- Experience with Python, PyTorch, large language models, and generative AI.
- Experience with distributed systems design and implementation.
- Proficiency in Agile development practices and Continuous Integration/Continuous Deployment (CI/CD).
- Passion for generative AI, LLMs, model training, model inference, and machine learning.
- Experience working on large-scale projects or applications.
- Good communication skills and ability to collaborate with diverse remote teams.
- Quick learner with a passion for solving complex and exciting problems.
- Familiarity with Azure is a plus.

#IDCoreAIPlatformHiring

Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 2 days ago
4.0 - 5.0 years
0 Lacs
kochi, kerala, india
On-site
The GenAI Specialist at Beinex drives innovation by designing and implementing advanced generative AI models to meet client needs. Key responsibilities include fine-tuning LLMs, optimizing scalable ML pipelines, and integrating AI into client applications for seamless performance. The role involves collaborating with stakeholders, communicating complex AI concepts, and staying updated on industry trends to ensure cutting-edge solutions.

Qualifications
- Master’s degree in Data Science, Statistics, Engineering or a related field (applicants with a Bachelor’s degree and relevant work experience can also apply).

Responsibilities
- Develop next-gen generative AI models: Research, design, and fine-tune state-of-the-art models like Llama, GPT, Stable Diffusion, and VAEs to drive business impact.
- Enhance AI capabilities: Implement and optimize LLM fine-tuning, prompt engineering, and AI safety mechanisms to improve model efficiency and usability.
- Integrate AI into products/services: Work closely with business teams and engineers to embed generative AI solutions into real-world applications, ensuring seamless user experiences.
- Optimize for scale and performance: Build and refine scalable ML pipelines for efficient model deployment, inference, and real-time adaptation.
- Stay ahead of AI trends: Keep pace with emerging technologies, model architectures, and industry breakthroughs to drive continuous innovation.
- Communicate complex AI concepts to technical and non-technical audiences, ensuring alignment with business strategies.

Key Skills Requirements
- At least 4-5 years of experience in the data science domain with strong exposure to generative AI.
- Experience using machine and deep learning languages, preferably Python and R, to manipulate data and draw insights from large data sets.
- Hands-on experience with LLMs, multimodal AI, and diffusion models for applications in text, image, or speech generation.
- Strong proficiency in TensorFlow, PyTorch, Hugging Face Transformers, and OpenAI APIs, as well as custom fine-tuning of LLMs and AIOps toolkits.
- Familiarity with cloud platforms (AWS, GCP, Azure), model deployment strategies, and GPU optimization.
- Experience building data pipelines and data-centric applications using distributed storage platforms in a production setting.
- Ability to work with large-scale datasets, vector embeddings, and retrieval-based AI systems.
Posted 2 days ago
130.0 years
0 Lacs
bengaluru, karnataka, india
On-site
About Northern Trust Northern Trust, a Fortune 500 company, is a globally recognized, award-winning financial institution that has been in continuous operation since 1889. Northern Trust is proud to provide innovative financial services and guidance to the world’s most successful individuals, families, and institutions by remaining true to our enduring principles of service, expertise, and integrity. With more than 130 years of financial experience and over 22,000 partners, we serve the world’s most sophisticated clients using leading technology and exceptional service. The Sr Analyst, Non-Financial Risk Quantification, is a key member of the Risk Analytics Team, acting as an individual contributor in the development and maintenance of high-quality risk analytics for Non-Financial Risk models. The analyst will ideate, develop, and test quantitative modelling and measurement techniques; support other aspects of analytics, including automation and data engineering; and present ideas and results to the leadership team. The role will cover several aspects of the model lifecycle, including development, monitoring, recalibration, and implementation of risk models in R/Python.
Major Duties
- Research and evaluate quantitative approaches for model development: coding, testing, and validation.
- Assist team members in statistical model development, monitoring, implementation, and model validation for Operational Risk models (AMA capital estimation, stress testing, capital allocation, etc.).
- Work on improving efficiencies and on modelling activities that require enhancements for execution, standards, and maintenance of risk models.
- Pitch, develop, and test machine learning and AI models and compare their performance against the legacy statistical models.
- Examples of commonly used techniques on our team include OLS and logistic regression, time series regression, Monte Carlo simulation, decision trees, gradient boosting, and Bayesian inference.

Skills Required
- 2-3 years of experience.
- Sound knowledge of probability and statistics.
- Experience in data science projects (statistical modelling/machine learning).
- Programming in R/Python.
- Knowledge of and interest in banking and financial services.

Working With Us As a Northern Trust partner, greater achievements await. You will be part of a flexible and collaborative work culture in an organization where financial strength and stability is an asset that emboldens us to explore new ideas. Movement within the organization is encouraged, senior leaders are accessible, and you can take pride in working for a company committed to assisting the communities we serve! Join a workplace with a greater purpose. We’d love to learn more about how your interests and experience could be a fit with one of the world’s most admired and sustainable companies! Build your career with us and apply today. #MadeForGreater

Reasonable accommodation Northern Trust is committed to working with and providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation for any part of the employment process, please email our HR Service Center at MyHRHelp@ntrs.com.
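To make the Monte Carlo simulation technique named in the duties above concrete, here is a toy sketch (not Northern Trust's methodology) of a loss-distribution approach of the kind used in operational-risk capital estimation: event frequency is drawn from a Poisson distribution, severities from a lognormal, and capital is read off a high quantile of the simulated annual losses. All parameter values and function names are invented for illustration.

```python
# Hedged sketch: toy Monte Carlo loss-distribution model (Poisson frequency,
# lognormal severity), stdlib only. Parameters are made up.
import math
import random


def _poisson(rng: random.Random, lam: float) -> int:
    # Knuth's algorithm: multiply uniforms until the product drops below e^-lam.
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k


def simulate_annual_losses(n_years: int, freq_lambda: float,
                           sev_mu: float, sev_sigma: float,
                           seed: int = 42) -> list:
    rng = random.Random(seed)
    losses = []
    for _ in range(n_years):
        n_events = _poisson(rng, freq_lambda)
        total = sum(rng.lognormvariate(sev_mu, sev_sigma) for _ in range(n_events))
        losses.append(total)
    return losses


def quantile(xs: list, q: float) -> float:
    ys = sorted(xs)
    return ys[min(int(q * len(ys)), len(ys) - 1)]


if __name__ == "__main__":
    annual = simulate_annual_losses(10_000, freq_lambda=3.0,
                                    sev_mu=10.0, sev_sigma=1.2)
    var_999 = quantile(annual, 0.999)  # high quantile, AMA-capital style
    print(round(var_999))
```

A production model would fit the frequency and severity parameters to internal and external loss data and handle tail severity much more carefully, but the simulate-and-take-a-quantile structure is the core of the technique.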
We hope you’re excited about the role and the opportunity to work with us. We value an inclusive workplace and understand flexibility means different things to different people. Apply today and talk to us about your flexible working requirements and together we can achieve greater. About Our Bangalore Office The Northern Trust Bangalore office, established in 2005, is home to over 5,600 employees. In this stunning office space, we offer fantastic amenities, including our Arrival Hub – Jungle, the GameZone, and the Employee Experience Zone, that appeal to both clients and employees. Learn more.
Posted 2 days ago
5.0 years
0 Lacs
hyderabad, telangana, india
Remote
Senior Data Scientist

Location: India (Remote)
Open Positions: 3
Experience: 5+ years in applied data science with production deployments

Technical Requirements

Core Data Science & Statistics
- Statistical foundation: Demonstrated experience with experimental design, A/B testing, causal inference, and time series analysis. Must be able to explain when and why to use different statistical tests.
- Machine learning: Hands-on experience building and deploying models for regression, classification, clustering, and forecasting. Should have optimized hyperparameters, handled class imbalance, and dealt with feature drift in production.
- Deep learning: Practical experience with neural networks beyond tutorials: custom architectures, loss functions, and optimization strategies for specific business problems.

Programming & Engineering
- Python proficiency: Advanced usage of the core data science stack (Pandas, NumPy, scikit-learn). Experience with at least one deep learning framework (PyTorch preferred, TensorFlow acceptable).
- Production code: Experience writing maintainable, tested code. Understanding of version control, code reviews, and collaborative development practices.
- Performance optimization: Experience with profiling, vectorization, and handling datasets that don't fit in memory.

LLM Experience (Good to Have)
- API integration: Experience integrating with OpenAI, Anthropic, or other LLM APIs in production applications.
- Prompt engineering: Systematic approach to prompt design, few-shot learning, and prompt optimization for specific tasks.
- Fine-tuning & customization: Experience with parameter-efficient fine-tuning (LoRA, QLoRA) or custom training for domain-specific applications.

Responsibilities
- Collect, clean, and analyze large datasets to derive insights and build predictive solutions.
- Design and deploy ML/DL models that solve business problems (customer insights, forecasting, optimization).
- Collaborate with cross-functional teams to integrate AI solutions into business workflows.
- Build dashboards, reports, and visualizations for business stakeholders.
- Contribute to projects involving LLM-based applications (chatbots, summarization, assistants) as needed.
- Stay updated with industry trends and contribute to knowledge sharing within the team.
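The A/B-testing requirement above can be illustrated with a from-scratch two-proportion z-test, the standard readout for a conversion-rate experiment (control vs. treatment). This is a hedged sketch using the pooled-proportion formulation; in practice a library such as statsmodels would normally be used, and the counts here are invented.

```python
# Hedged sketch: two-proportion z-test for an A/B experiment, stdlib only.
import math


def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of equal rates.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value: P(|Z| >= |z|) = erfc(|z| / sqrt(2)) for standard normal Z.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value


if __name__ == "__main__":
    # Hypothetical experiment: 2.0% vs. 2.6% conversion on 10,000 users per arm.
    z, p = two_proportion_ztest(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
    print(f"z={z:.2f}, p={p:.4f}")
```

The "when and why" the listing asks about is the interesting part: this test assumes large samples and independent observations; for small counts an exact test (e.g., Fisher's) is the better choice.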
Posted 2 days ago
7.0 years
0 Lacs
chennai, tamil nadu, india
On-site
Job Requirements Role: Python Developer Department/Function: Information Technology

Job Purpose We are looking for a highly skilled Python Developer with solid experience in Artificial Intelligence (AI), Machine Learning (ML), and Generative AI. The ideal candidate will have hands-on experience working with GPT-4, transformer models, and deep learning frameworks, and a strong understanding of model fine-tuning, deployment, and inference.

Roles And Responsibilities
- Design, develop, and maintain Python applications focused on AI/ML and generative AI.
- Build and fine-tune transformer-based models (e.g., GPT, BERT, T5) for various NLP and generative tasks.
- Work with large-scale datasets for training and evaluation.
- Implement model inference pipelines and scalable APIs using FastAPI, Flask, or similar.
- Collaborate with data scientists and ML engineers to build end-to-end AI solutions.
- Stay current with recent research and developments in the field of generative AI and ML.

Technical Responsibilities
- Strong proficiency in Python and relevant libraries (NumPy, Pandas, scikit-learn, etc.).
- 3–7+ years of experience in AI/ML development.
- Hands-on experience with transformer-based models, especially GPT-4, LLMs, or diffusion models.
- Familiarity with Hugging Face Transformers, the OpenAI API, or similar frameworks.
- Experience with TensorFlow, PyTorch, or JAX.
- Experience deploying models using Docker, Kubernetes, or cloud platforms (AWS, GCP, Azure).
- Strong problem-solving and algorithmic thinking.
- Familiarity with prompt engineering, fine-tuning, and reinforcement learning from human feedback (RLHF) is a plus.

Added Advantage
- Contributions to open-source AI/ML projects.
- Experience with vector databases (e.g., FAISS, Pinecone, Weaviate).
- Experience building AI chatbots, copilots, or creative content generators.
Knowledge of MLOps and model monitoring Educational Qualifications Graduation: Bachelor of Science (B.Sc) / Bachelor of Technology (B.Tech) / Bachelor of Computer Applications (BCA) Post-Graduation: Master of Science (M.Sc) /Master of Technology (M.Tech) / Master of Computer Applications (MCA)
Posted 2 days ago
4.0 - 6.0 years
0 Lacs
new delhi, delhi, india
On-site
About KnowDis KnowDis Data Science is a leading machine learning company headquartered in New Delhi. We provide bespoke machine-learning solutions to clients across the e-commerce, healthcare, and finance domains.

Overview: We are looking for an MLOps Engineer with a strong background in operationalizing AI solutions at scale. The ideal candidate will have expertise in developing and managing machine learning lifecycle frameworks and pipelines and integrating these solutions with client systems.

Key Responsibilities:
- Infrastructure Management: Build scalable and robust infrastructure for ML models, ensuring seamless production integration.
- CI/CD Expertise: Develop and maintain CI/CD pipelines with a focus on ML model deployment.
- Model Deployment and Monitoring: Deploy ML models using TensorFlow Serving, PyTorch Serving, Triton Inference Server, or TensorRT, and monitor their performance in production.
- Collaboration: Work closely with data scientists and software engineers to transition ML models from research to production.
- Security and Compliance: Uphold security protocols and ensure regulatory compliance in ML systems.

Skills and Experience Required:
- Proficiency in Docker and Kubernetes for containerization and orchestration.
- Experience with CI/CD pipeline development and maintenance.
- Experience in deploying ML models using TensorFlow Serving, PyTorch Serving, Triton Inference Server, and TensorRT.
- Experience with cloud platforms like AWS, Azure, and GCP.
- Strong problem-solving, communication, and teamwork skills.

Qualifications:
- Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field.
- 4-6 years of experience in ML project management, with a recent focus on MLOps.

Additional Competencies:
- AI technologies deployment, data engineering, IT performance, scalability testing, and security practices.

Selection Process: Interested candidates are required to apply through the listing on Jigya. Only applications received through this posting will be evaluated further. Shortlisted candidates may be required to appear for an online assessment and a screening interview administered by Jigya. Candidates selected after the screening test will be interviewed by KnowDis.
Posted 2 days ago
0 years
1 - 2 Lacs
sān
On-site
As a software engineer, you’ll play a crucial role in building the first capable, habit-forming voice interface that scales to a billion users. Members of our technical staff are responsible for prototyping and designing new features of our voice interface, building infrastructure to handle <500ms LLM inference for millions of requests from everywhere around the world, and scaling the personalization of our speech models and LLMs.

What are we looking for?
- Previous founding or startup experience
- Fluency in TypeScript, React, and Python
- User-focused design intuition and design taste
- Attention to detail and eagerness to learn
- Aptitude and clarity of thought
- Creativity, excellence in engineering, and code velocity

We consider all qualified applicants without regard to legally protected characteristics and provide reasonable accommodations upon request.

Job Type: Full-time
Pay: ₹150,000.00 - ₹240,000.00 per year
Work Location: In person
Speak with the employer: +91 9008078505
Posted 2 days ago
5.0 years
0 Lacs
hyderābād
On-site
About Celestial AI As Generative AI continues to advance, the performance drivers for data center infrastructure are shifting from systems-on-chip (SOCs) to systems of chips. In the era of Accelerated Computing, data center bottlenecks are no longer limited to compute performance, but rather the system's interconnect bandwidth, memory bandwidth, and memory capacity. Celestial AI's Photonic Fabric™ is the next-generation interconnect technology that delivers a tenfold increase in performance and energy efficiency compared to competing solutions. The Photonic Fabric™ is available to our customers in multiple technology offerings, including optical interface chiplets, optical interposers, and Optical Multi-chip Interconnect Bridges (OMIB). This allows customers to easily incorporate high bandwidth, low power, and low latency optical interfaces into their AI accelerators and GPUs. The technology is fully compatible with both protocol and physical layers, including standard 2.5D packaging processes. This seamless integration enables XPUs to utilize optical interconnects for both compute-to-compute and compute-to-memory fabrics, achieving bandwidths in the tens of terabits per second with nanosecond latencies. This innovation empowers hyperscalers to enhance the efficiency and cost-effectiveness of AI processing by optimizing the XPUs required for training and inference, while significantly reducing the TCO2 impact. To bolster customer collaborations, Celestial AI is developing a Photonic Fabric ecosystem consisting of tier-1 partnerships that include custom silicon/ASIC design, system integrators, HBM memory, assembly, and packaging suppliers. ABOUT THE ROLE Celestial AI is looking for a highly motivated and detail-oriented Software Quality Assurance (SQA) Engineer to join our team. As an SQA Engineer, you will play a critical role in ensuring the quality of our software products. 
You will be responsible for designing, developing, and executing test plans and test cases, identifying and reporting defects, and working closely with developers to ensure that our software meets the highest standards.

ESSENTIAL DUTIES AND RESPONSIBILITIES
- Test Case Design & Execution: Design, document, and execute detailed test cases for firmware components, drivers, communication protocols, and system-level interactions with hardware.
- Hardware-Firmware Integration Testing: Lead and perform testing at the hardware-firmware interface, ensuring seamless and correct interaction between embedded software and physical components (e.g., sensors, actuators, external memory, peripherals like SPI, I2C, UART).
- Automation Development: Design, develop, and maintain automated test scripts and test harnesses using scripting languages (e.g., Python, Bash) and specialized tools to enhance test coverage and efficiency, particularly for regression testing.
- Defect Management: Identify, document, track, and verify resolution of software defects using bug tracking systems. Provide clear and concise bug reports with steps to reproduce and relevant logs.
- Root Cause Analysis: Collaborate with firmware developers to perform in-depth root cause analysis of defects, often involving debugging on embedded targets using JTAG/SWD, oscilloscopes, logic analyzers, and other hardware debugging tools.
- Performance & Resource Analysis: Monitor and analyze firmware performance metrics (CPU usage, memory footprint, power consumption, boot time, latency) and validate against specified requirements.
- Regression & Release Qualification: Own the regression testing process and contribute significantly to the final release qualification of firmware builds.
- Process Improvement: Champion and contribute to the continuous improvement of firmware development and quality assurance processes, methodologies, and best practices.
QUALIFICATIONS

- Bachelor's degree in Electrical Engineering, Computer Engineering, Computer Science, or a related technical field.
- 5 years of experience in Software Quality Assurance, with a minimum of 2 years directly focused on firmware or embedded software testing.
- Strong understanding of embedded systems concepts, including microcontrollers/microprocessors, real-time operating systems (RTOS), interrupts, memory management, and common peripheral interfaces (GPIO, I2C, SPI, UART, ADC, DAC, timers).
- Proficiency in C/C++ for embedded development, with the ability to read, understand, and debug firmware code.
- Experience with scripting languages for test automation (e.g., Python, Bash).
- Hands-on experience with hardware debugging tools such as JTAG/SWD debuggers, oscilloscopes, logic analyzers, and multimeters.
- Familiarity with version control systems (e.g., Git) and bug tracking tools (e.g., Jira, Azure DevOps).
- Experience with test management tools (e.g., TestRail, Zephyr).
- Excellent problem-solving skills, with a methodical and analytical approach to identifying and isolating defects.

PREFERRED QUALIFICATIONS

- Experience with continuous integration/continuous deployment (CI/CD) pipelines for embedded systems.
- Knowledge of networking protocols (TCP/IP).
- Experience with Hardware-in-the-Loop (HIL) testing, simulation, or emulation environments.

LOCATION: Hyderabad, India or Singapore

We offer great benefits (health, vision, dental, and life insurance) and a collaborative, continuous-learning work environment, where you will get the chance to work with smart and dedicated people developing the next-generation architecture for high-performance computing.

Celestial AI Inc. is proud to be an equal opportunity workplace and is an affirmative action employer.

#LI-Onsite
Posted 2 days ago
With the rapid growth of technology and data-driven decision making, the demand for professionals with expertise in inference is on the rise in India. Inference jobs involve using statistical methods to draw conclusions from data and make predictions based on available information. From data analysts to machine learning engineers, there are various roles in India that require inference skills.
India's major tech cities are known for their thriving technology industries and are actively hiring professionals with expertise in inference.
The average salary range for inference professionals in India varies based on experience level. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.
In the field of inference, a typical career path may start as a Data Analyst or Junior Data Scientist, progress to a Data Scientist or Machine Learning Engineer, and eventually lead to roles like Senior Data Scientist or Principal Data Scientist. With experience and expertise, professionals can also move into leadership positions such as Data Science Manager or Chief Data Scientist.
In addition to expertise in inference, professionals in India may benefit from having skills in programming languages such as Python or R, knowledge of machine learning algorithms, experience with data visualization tools like Tableau or Power BI, and strong communication and problem-solving abilities.
As you explore opportunities in the inference job market in India, remember to prepare thoroughly by honing your skills, gaining practical experience, and staying updated with industry trends. With dedication and confidence, you can embark on a rewarding career in this field. Good luck!