8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
WEKA is architecting a new approach to the enterprise data stack built for the age of reasoning. NeuralMesh by WEKA sets the standard for agentic AI data infrastructure with a cloud and AI-native software solution that can be deployed anywhere. It transforms legacy data silos into data pipelines that dramatically increase GPU utilization and make AI model training and inference, machine learning, and other compute-intensive workloads run faster, work more efficiently, and consume less energy. WEKA is a pre-IPO, growth-stage company on a hyper-growth trajectory. We’ve raised $375M in capital with dozens of world-class venture capital and strategic investors. We help the world’s largest and most innovative enterprises and research organizations, including 12 of the Fortune 50, achieve discoveries, insights, and business outcomes faster and more sustainably. We’re passionate about solving our customers’ most complex data challenges to accelerate intelligent innovation and business value. If you share our passion, we invite you to join us on this exciting journey.
What You’ll Be Doing
Our Automation Engineers are part of a team of highly skilled Python programmers who are responsible for creating test coverage for all system functionalities and configurations. This is a challenging but crucial engineering task, since testing a storage feature can be more complex than its actual implementation. Together with the team, you will be responsible for the product’s reliability and stability, which is a core requirement of any enterprise storage product and is vital to its success.
As a Senior Software Engineer, Automation, You’ll: Create detailed, well-structured test plans and test cases; Implement automated distributed tests in Python; Gain an in-depth understanding of a complex, clustered system and be able to accurately analyze failures in those systems; Collaborate with the R&D team to identify and analyze problems, as well as verify and test solutions; and Become a storage expert who understands the terminology, protocols, configurations, architecture, and practicalities at the frontier of storage technology.
Requirements
Extensive Python programming experience
8+ years of experience as an Automation QA Engineer
Storage/Networking/Distributed Programming background
An "under the hood" understanding of Python programming that can be used to implement a scalable distributed test environment
Thorough experience as an Automation QA Engineer on a complex clustered system.
The WEKA Way
We are Accountable: We take full ownership, always – even when things don’t go as planned. We lead with integrity, show up with responsibility & ownership, and hold ourselves and each other to the highest standards.
We are Brave: We question the status quo, push boundaries, and take smart risks when needed. We welcome challenges and embrace debates as opportunities for growth, turning courage into fuel for innovation.
We are Collaborative: True collaboration isn’t only about working together. It’s about lifting one another up to succeed collectively. We are team-oriented and communicate with empathy and respect. We challenge each other and practice positive conflict resolution. We are transparent about our goals and results. And together, we’re unstoppable.
We are Customer Centric: Our customers are at the heart of everything we do.
We actively listen and prioritize the success of our customers, and every decision we make is driven by how we can better serve, support, and empower them to succeed. When our customers win, we win. Concerned that you don’t meet every qualification above? Studies have shown that women and people of color may be less likely to apply for jobs if they don’t meet every qualification specified. At WEKA, we are committed to building a diverse, inclusive and authentic workplace. If you are excited about this position but are concerned that your past work experience doesn’t match up perfectly with the job description, we encourage you to apply anyway – you may be just the right candidate for this or other roles at WEKA. WEKA is an equal opportunity employer that prohibits discrimination and harassment of any kind. We provide equal opportunities to all employees and applicants for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.
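Purely as a rough illustration of the kind of distributed Python test automation this role describes (not WEKA's actual framework), a health check might fan out across cluster nodes in parallel. The node names and the reachability check below are placeholders:

```python
# Illustrative sketch only: a minimal pytest-style check that exercises several
# cluster nodes in parallel. Hostnames are hypothetical placeholders; WEKA's real
# automation framework is not described in the posting.
import concurrent.futures
import socket

NODES = ["node-1", "node-2", "node-3"]  # hypothetical cluster nodes


def node_is_reachable(host: str, port: int = 22, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the node succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def test_all_nodes_reachable():
    # Fan the check out across nodes, as a distributed test harness would.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = dict(zip(NODES, pool.map(node_is_reachable, NODES)))
    unreachable = [name for name, ok in results.items() if not ok]
    assert not unreachable, f"Unreachable nodes: {unreachable}"
```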
Posted 1 week ago
2.0 years
3 - 15 Lacs
India
On-site
Required Skills & Qualifications: - Bachelor's or Master's degree in Computer Science, Machine Learning, or related field. - 2+ years of experience in building and deploying ML/AI models in production. - Strong hands-on experience with generative AI frameworks (e.g., LangChain, Hugging Face, OpenAI, Rasa, or similar). - Solid understanding of transformer architectures, embeddings, and LLM fine-tuning techniques. - Proficiency in Python and frameworks like PyTorch or TensorFlow. - Experience with voice-based AI: ASR, TTS engines (e.g., Google, ElevenLabs, Coqui), and voice orchestration tools. - Familiarity with vector databases, prompt engineering, and real-time inference techniques. - Exposure to APIs/integrations for communication platforms: WhatsApp Cloud API, Meta APIs, Instagram DM API, etc. Job Type: Full-time Pay: ₹355,466.18 - ₹1,582,506.93 per year Schedule: Day shift
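For candidates gauging the stack, here is a minimal sketch of loading a small open model with the Hugging Face transformers pipeline, one of the frameworks named above. The model choice and prompt are arbitrary examples, not part of the posting:

```python
# Rough illustration only: text generation with the Hugging Face `transformers`
# pipeline. The model (distilgpt2) and prompt are arbitrary examples.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
reply = generator("Customer: Where is my order?\nAgent:", max_new_tokens=40)
print(reply[0]["generated_text"])
```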
Posted 1 week ago
1.0 years
4 - 6 Lacs
Coimbatore
Remote
At 360Watts, we’re rethinking what it means to own and control solar energy at home. We’re building an intelligent rooftop solar system that doesn’t just sit on your roof — it monitors, automates, and adapts, giving users full visibility and control over their energy flow. Are you a systems thinker who can build the bridge between physical and digital? A solar system with an IoT layer for automation — modular, powered by AI/ML capabilities, upgradable from basic to advanced automation as the user's needs evolve, and remote-controlled by users through a smart-home energy management app (EMS).
>> Responsibilities
Lead the design, testing, and iteration of the end-to-end IoT system architecture layer, powered by edge MCUs and hybrid data flows to the cloud and smart-home control hub
Develop real-time firmware to read sensors, control relays, and implement safe, OTA-updatable logic on MCUs with simple to complex inference capabilities (such as ESP32-S3, Raspberry Pi CM4, Jetson Nano/Xavier NX), and maintain firmware modularity for upgrades
Define IoT use cases, data workflows, and communication protocol stacks (MODBUS RTU/TCP, MQTT), and their integration with the inverter, battery system, and cloud EMS
Guide the hardware intern through embedded prototyping from breadboard to PCB: wiring, testing, and debugging alongside the solar design engineer
Collaborate with the solar engineer and EMS software lead on rapid prototyping, field testing, and user-centric automation logic
Drive field deployment readiness — from pilot configuration to relay switching stability, inverter integration, and offline fallback modes
>> Must have
Systems-oriented Embedded C/C++
Edge (AI/ML) architecture & modular firmware design
Real-world firmware with control logic + sensor/relay integration
Protocol stack implementation (MODBUS RTU/TCP, MQTT)
OTA, structured data, and embedded fault handling
System debugging and field-readiness
>> Background
Bachelor’s or Master's degree in Electrical Engineering, Electronics & Communication, Embedded Systems, or a related field
1-3 years of work experience
Professional English proficiency
>> Job details
Salary depending on skill, between Rs. 30k–50k per month
Option for equity (ESOP) after 9-12 months
Start date = 15.08.2025 (or) 01.09.2025
Probation = 3 months
If you are excited, please apply soon.
Job Type: Full-time Pay: ₹35,000.00 - ₹50,000.00 per month Benefits: Flexible schedule Paid time off Schedule: Monday to Friday Weekend availability Supplemental Pay: Performance bonus Ability to commute/relocate: Coimbatore, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Experience: Embedded software: 1 year (Required) Language: English (Preferred) Work Location: In person Speak with the employer +91 9087610051 Application Deadline: 27/07/2025 Expected Start Date: 01/09/2025
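As a hedged sketch of the MQTT telemetry path named in this posting (assuming the paho-mqtt client library; the broker address, topic, and readings are hypothetical, and the MODBUS side is not shown):

```python
# Minimal sketch, assuming paho-mqtt and a hypothetical EMS broker/topic.
# The sensor readings are invented example values.
import json
import time

import paho.mqtt.publish as publish

BROKER = "ems.example.local"                 # hypothetical cloud/home EMS broker
TOPIC = "site/roof-1/inverter/telemetry"     # hypothetical topic layout

payload = json.dumps({
    "ts": int(time.time()),
    "pv_power_w": 3120,          # example PV output reading
    "battery_soc_pct": 87,       # example battery state of charge
    "relay_state": "closed",
})
publish.single(TOPIC, payload, hostname=BROKER, qos=1)  # push one telemetry sample
```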
Posted 1 week ago
0 years
0 Lacs
Gāndhīnagar
On-site
We seek strong candidates in any field of mathematics and statistics, as well as in any interdisciplinary areas where innovative and principled use of statistics and/or probability is of vital importance. Candidates must have a Ph.D. in Statistics or Probability theory from a reputed institution, and a good research record and background. The following sub-areas are currently of special interest: Optimization Techniques, Mathematical Modeling and Simulation, Probability and Random Processes, Causal Inference, High-dimensional Statistics, Statistics of Networks, Bayesian Inference, Bayesian Computation, Missing Data Analysis, Imputation Methodology, Applied Stochastic Processes, Statistical Modelling of Spatio-temporal Data, and Analysis of Complex Observational Data.
Minimum Eligibility Criteria (all disciplines except design area candidates): (i) Ph.D. with a first class or equivalent in the preceding degree and an excellent academic record throughout; and (ii) a strong research record with publications in reputed journals and conferences.
Associate Professor: A minimum of six years of post-Ph.D. teaching/research/professional experience, of which at least three years should be at the level of Assistant Professor at higher educational institutions. A strong research record as evidenced by publications, external research grants/projects, and experience in doctoral supervision is expected.
Application Submission Process: Prospective candidates should send an email to dean_faculty@daiict.ac.in with the Subject "Faculty position in Disciplines/Areas (e.g. Computer Science, Humanities & Social Sciences)". Please attach the following to your email: (1) CV with details about your education starting from the 12th standard board exams (mention marks/CGPA, year of passing, specialization if any), work experience, and publications. Please provide names of three references who may be contacted for a letter of reference in support of your candidature. (2) A research statement giving research background, research outcomes, and future research plans. (3) A teaching statement giving teaching methodology, teaching experience, foundation/core courses you would like to teach, and elective courses you would like to teach.
Faculty will be responsible for conducting independent research within their respective fields and teaching both undergraduate and postgraduate courses. Candidates with interdisciplinary expertise are strongly encouraged to apply. They will play an important role in contributing to the Institute’s mission through their teaching, research, and participation in various institutional activities. We encourage candidates to visit the Institute website for more information about the courses and research groups, in particular the Faculty page, to get a sense of the faculty profile.
Posted 1 week ago
0 years
0 Lacs
Gāndhīnagar
On-site
We seek strong candidates in any field of mathematics and statistics, as well as in any interdisciplinary areas where innovative and principled use of statistics and/or probability is of vital importance. Candidates must have a Ph.D. in Statistics or Probability theory from a reputed institution, and a good research record and background. The following sub-areas are currently of special interest: Optimization Techniques, Mathematical Modeling and Simulation, Probability and Random Processes, Causal Inference, High-dimensional Statistics, Statistics of Networks, Bayesian Inference, Bayesian Computation, Missing Data Analysis, Imputation Methodology, Applied Stochastic Processes, Statistical Modelling of Spatio-temporal Data, and Analysis of Complex Observational Data.
Minimum Eligibility Criteria (all disciplines except design area candidates): (i) Ph.D. with a first class or equivalent in the preceding degree and an excellent academic record throughout; and (ii) a strong research record with publications in reputed journals and conferences.
Assistant Professor: Ph.D. with strong research capabilities and a strong passion for teaching at undergraduate and postgraduate levels. Postdoctoral experience is preferred.
Application Submission Process: Prospective candidates should send an email to dean_faculty@daiict.ac.in with the Subject "Faculty position in Disciplines/Areas (e.g. Computer Science, Humanities & Social Sciences)". Please attach the following to your email: (1) CV with details about your education starting from the 12th standard board exams (mention marks/CGPA, year of passing, specialization if any), work experience, and publications. Please provide names of three references who may be contacted for a letter of reference in support of your candidature. (2) A research statement giving research background, research outcomes, and future research plans. (3) A teaching statement giving teaching methodology, teaching experience, foundation/core courses you would like to teach, and elective courses you would like to teach.
Faculty will be responsible for conducting independent research within their respective fields and teaching both undergraduate and postgraduate courses. Candidates with interdisciplinary expertise are strongly encouraged to apply. They will play an important role in contributing to the Institute’s mission through their teaching, research, and participation in various institutional activities. We encourage candidates to visit the Institute website for more information about the courses and research groups, in particular the Faculty page, to get a sense of the faculty profile.
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Software Engineering
General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.
Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field.
Job Location: Hyderabad
More Details Below:
About the team: Join the growing team at Qualcomm focused on advancing the state of the art in Machine Learning. The team uses Qualcomm chips’ extensive heterogeneous computing capabilities to allow inference of trained neural networks on-device without a need for connection to the cloud. Our inference engine is designed to help developers run neural network models trained in a variety of frameworks on Snapdragon platforms at blazing speeds while still sipping the smallest amount of power. See your work directly impact billions of devices around the world.
Job Title: CMake Build System Engineer, Staff
Job Summary: We are seeking a skilled and detail-oriented CMake Build System Engineer to join our team. In this role, you will be responsible for designing, maintaining, and optimizing CMake-based build systems for complex software projects that support cross-compilation, real-time operating systems (RTOS), and hardware-specific toolchains. You will work closely with developers, DevOps, and QA teams to ensure efficient and reliable builds across multiple platforms.
Key Responsibilities: Design, implement, and maintain robust CMake build scripts for cross-platform software projects targeting microcontrollers and SoCs. Maintain and improve build scripts, tools, and infrastructure - Refactor and modernize existing build systems to improve performance, maintainability, and scalability. Optimize build performance - Improve the speed and efficiency of the build process by optimizing CMake configurations and build strategies. Support cross-compilation workflows using custom toolchains and hardware abstraction layers. Integrate third-party libraries and manage dependencies using CMake best practices. Collaborate with development teams to support CI/CD pipelines and automate build processes. Troubleshoot and resolve build-related issues across various environments (Linux, Windows, macOS) and embedded platforms (ARM Cortex-M/R/A, RISC-V, etc.). Ensure compatibility across various operating systems (Linux, Windows, macOS). Document build processes and provide training/support to other engineers as needed.
Minimum Qualifications: Bachelor's degree in Engineering, Computer Science, or related field and 10+ years of Systems Engineering or related work experience. OR Master's degree in Engineering, Computer Science, or related field and 9+ years of Systems Engineering or related work experience.
Required Qualifications: Strong experience with CMake in large-scale C++ or multi-language projects.
Understanding of native build systems (like Make, Ninja) and how CMake interacts with them. Proficiency in C++, Python, or other scripting languages used in build automation. Solid understanding of software build systems, compilers, linkers, and embedded toolchains (e.g., GCC for ARM, IAR, Keil, Clang). Experience with cross-compilation, toolchains (e.g. GCC, LLVM), and multi-platform builds (x86, ARM, RISC-V, etc.). Familiarity with CI/CD tools such as Jenkins, GitHub Actions, GitLab CI, or similar. Knowledge of software development best practices, including version control, testing, and code review.
Preferred Qualifications: Experience with Conan, vcpkg, or other C++ package managers. Knowledge of embedded systems or real-time operating systems (RTOS). Familiarity with Docker and containerized build environments. Contributions to open-source CMake projects or tools.
Soft Skills: Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Self-motivated and able to work independently or as part of a team.
Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries).
Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.
To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
3075194
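As a hedged illustration of the "scripting languages used in build automation" requirement above (not Qualcomm tooling), a small Python wrapper might configure and build a CMake project against a cross-compilation toolchain file. The paths, toolchain file, and Ninja generator are assumptions:

```python
# Illustrative build-automation helper only: configure and build a CMake project
# for a cross-compilation target via a toolchain file. Assumes cmake and ninja
# are installed; the project layout and toolchain file name are hypothetical.
import subprocess
from pathlib import Path

SOURCE_DIR = Path("firmware")                       # hypothetical project root
BUILD_DIR = SOURCE_DIR / "build-arm"
TOOLCHAIN = SOURCE_DIR / "cmake" / "arm-gcc.cmake"  # hypothetical toolchain file


def configure_and_build() -> None:
    BUILD_DIR.mkdir(parents=True, exist_ok=True)
    # Configure step: point CMake at the source, build dir, and toolchain file.
    subprocess.run(
        ["cmake", "-S", str(SOURCE_DIR), "-B", str(BUILD_DIR),
         "-G", "Ninja",
         f"-DCMAKE_TOOLCHAIN_FILE={TOOLCHAIN}",
         "-DCMAKE_BUILD_TYPE=Release"],
        check=True,
    )
    # Build step: drive the underlying generator through CMake.
    subprocess.run(["cmake", "--build", str(BUILD_DIR), "--parallel"], check=True)


if __name__ == "__main__":
    configure_and_build()
```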
Posted 1 week ago
6.0 years
0 Lacs
India
Remote
Location: Remote / Hybrid Experience: 2–6 years (or strong project/internship experience) Employment Type: Full-Time Department: AI & Software Systems Key Responsibilities Design and maintain end-to-end MLOps pipelines: from data ingestion to model deployment and monitoring. Containerize ML models and services using Docker for scalable deployment. Develop and deploy APIs using FastAPI to serve real-time inference for object detection, segmentation, and mapping tasks. Automate workflows using CI/CD tools like GitHub Actions or Jenkins. Manage cloud infrastructure on AWS: EC2, S3, Lambda, SageMaker, CloudWatch, etc. Collaborate with AI and GIS teams to integrate ML outputs into mapping dashboards. Implement model versioning using DVC/Git, and maintain structured experiment tracking using MLflow or Weights & Biases. Ensure secure, scalable, and cost-efficient model hosting and API access. Required Skills Programming: Python (must), Bash/Shell scripting ML Frameworks: PyTorch, TensorFlow, OpenCV MLOps Tools: MLflow, DVC, GitHub Actions, Docker (must), Kubernetes (preferred) Cloud Platforms: AWS (EC2, S3, SageMaker, IAM, Lambda) API Development: FastAPI (must), Flask (optional) Data Handling: NumPy, Pandas, GDAL, Rasterio Monitoring: Prometheus, Grafana, AWS CloudWatch Preferred Experience Hands-on with AI/ML models for image segmentation, object detection (YOLOv8, U-Net, Mask R-CNN). Experience with geospatial datasets (satellite imagery, drone footage, LiDAR). Familiarity with PostGIS, QGIS, or spatial database management. Exposure to DevOps principles and container orchestration (Kubernetes/EKS). Soft Skills Problem-solving mindset with a system design approach. Clear communication across AI, software, and domain teams. Ownership of the full AI deployment lifecycle. Education Bachelor’s or Master’s in Computer Science, Data Science, AI, or equivalent. Certifications in AWS, MLOps, or Docker/Kubernetes (bonus).
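Below is a minimal sketch of the FastAPI inference service pattern this posting describes. The model is stubbed with a placeholder function, and the route and schema names are illustrative only:

```python
# Minimal sketch of a FastAPI inference endpoint. The "model" is a placeholder;
# a real service would load a trained detection/segmentation model from a
# registry or object storage before serving requests.
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="inference-api")


class Features(BaseModel):
    values: List[float]


def fake_model_predict(values: List[float]) -> float:
    # Placeholder for real model inference.
    return sum(values) / max(len(values), 1)


@app.post("/predict")
def predict(features: Features) -> dict:
    score = fake_model_predict(features.values)
    return {"score": score}

# Run locally (assuming uvicorn is installed):
#   uvicorn app:app --host 0.0.0.0 --port 8000
```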
Posted 1 week ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Location: Gurgaon
Engagement: Full-Time
Company: OneOrg.ai – AI Superbrain for SMEs
The Opportunity
OneOrg.ai is building the AI Superbrain for SMEs — a GenAI-powered platform that unifies knowledge across Sales, HR, Compliance, Ops, and Procurement. We’re building a thinking layer that helps CEOs and CXOs ask better questions, make sharper decisions, and unlock their teams' potential. We’re looking for a Full-Time CTO — a hands-on technical leader who can sharpen our stack, guide key decisions, and help us scale intelligently.
What You’ll Lead
Evaluate and evolve our GenAI architecture (LLM orchestration, RAG, vector DBs, APIs)
Guide decisions on model tuning, inference setup, and toolchain optimization
Shape the tech roadmap as we scale into use cases like procurement intelligence, pharma sales, and CXO copilots
Be the technical braintrust to support a lean internal dev team
Ensure best practices on data privacy, latency, modularity, and enterprise-grade security
What You’ll Do
Strategize product development and its tech stack
Lead client conversations for a 0-1 scale
Lead product-tech huddles with the founder + tech team
Review architecture, model choices, and product scalability
Guide productization of AI use cases (structured actions, chat interfaces, pipelines)
Advise on hiring, infra, and dev vendor contracts (as needed)
Be the technical conscience — challenging our decisions when needed and accelerating them when it matters
Ideal Profile
7–12 years in hands-on backend & AI product engineering
Strong grasp of LLMs, RAG, and tools like Hugging Face, LangChain, vLLM, LlamaIndex, or Mixtrail
Experience deploying or scaling AI/ML systems (open-source or hosted)
Familiarity with CRM/ERP-like B2B tools a plus
Thrives in 0→1 stage environments with limited structure but high ownership
Bonus: You’ve advised or supported early-stage GenAI startups, or have built and scaled product solutions for SMBs
How to Express Interest
Email vivek.slaria@originbluy.com
Subject: Fractional CTO – OneOrg.ai | Your Name
No CVs. Just send: A note on why this excites you. A link to your GitHub, portfolio, or any recent product you helped build.
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Netskope Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter@Netskope. About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data QE team is responsible for the quality of data, data services, and data components across our cloud and hybrid cloud environments. We develop tools, create fully automated regression suites and conduct performance tests for distributed data components at cloud scale. If you thrive on solving difficult problems, complex test scenarios, and developing high-performance QE tooling and automation, we would love to discuss our career opportunities with you. We are currently seeking a highly skilled Staff / Sr. Staff SDET to lead initiatives at the intersection of AI/ML, cloud security, data engineering , and network security . This role involves testing the product, data efficacy, building CI/CD pipelines, and automating tools and tests to deliver high quality AI-driven solutions to detect and mitigate cybersecurity threats in cloud environments. You will collaborate with cross-functional teams to deliver secure, efficient, innovative solutions addressing complex security challenges. What’s In It For You You will be part of a growing team of renowned industry experts in the exciting space of AI/ML, Data Engineering and cloud / network security Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting problems, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Leading the test documentation, automation and deployment tooling of the microservice based architecture. Developing, executing, and maintaining manual and automated test cases for AI and infrastructure components. Designing and implementing test plans for backward compatibility, REST APIs and integration testing. Writing and maintaining test documentation to ensure transparency and repeatability of test processes. Automating testing using Python, Java or equivalent programming language to improve efficiency and coverage. Troubleshooting, debugging, and optimizing test environments in Kubernetes-based ecosystems. Collaborating with cross-functional teams, including data scientists, AI engineers, and DevOps, to ensure software quality. Staying up-to-date with the latest advancements in AI testing methodologies and tooling. 
Required Skills And Experience
8+ years of experience
Strong knowledge of automation and building automation frameworks
Excellent knowledge of and coding skills with Python, Java, etc.
Good knowledge of microservice architecture and distributed systems, and familiarity with Kubernetes (k8s) and Docker
Strong knowledge of building CI/CD pipelines with Jenkins or GitHub Actions
Experience with cloud platforms like AWS, Azure or GCP
Experience testing data pipelines for large-scale data processing and testing data efficacy or data correctness
Familiarity with RAG systems, vector databases such as Pinecone and PGVector, and LLMs to validate data pipelines and inference results
Familiarity with AI models or conceptual knowledge of AI models and prompt engineering
Familiarity with relational and non-relational databases, like ClickHouse and BigQuery
A proven ability to lead cross-functional teams and mentor more junior engineers
Excellent written and verbal communication skills and the ability to present complex technical concepts to stakeholders
Education
BSCS or equivalent required; MSCS or equivalent strongly preferred
Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.
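As an illustrative, self-contained example of the style of automated REST API regression check this role involves (the endpoint URL and expected fields are hypothetical, not Netskope APIs):

```python
# Illustrative only: a pytest-style REST API regression check. The endpoint and
# response schema are hypothetical placeholders, not Netskope services.
import requests

BASE_URL = "https://api.example.internal"  # hypothetical service under test


def test_health_endpoint_returns_ok():
    resp = requests.get(f"{BASE_URL}/v1/health", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    # Backward-compatibility style assertion: required keys must stay present.
    for key in ("status", "version"):
        assert key in body, f"missing expected field: {key}"
    assert body["status"] == "ok"
```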
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
We run a recruitment agency and help our clients hire top 1% talent. Our client partners with the world’s leading AI research labs to support them with expert talent that helps train cutting-edge language models used in real-world applications. Position: Statistics PhDs – AI Trainer Type: Hourly Contract (Remote) Compensation: $50–$70/hour (based on expertise) Location: Remote, asynchronous, flexible hours Duration: 10–20 hrs/week (extendable to 40 hrs/week) | Minimum 1–2 months Responsibilities Develop advanced statistical problems and tasks to evaluate AI model performance. Assess LLM-generated outputs for correctness, logical rigor, and statistical accuracy. Provide structured feedback to improve model understanding of probability, inference, regression, and related concepts. Collaborate with AI researchers via private Slack channels to refine datasets and benchmarks. Contribute to the creation of domain-specific evaluation sets aligned with modern statistical practices. Requirements PhD in Statistics (or closely related field) from a top university. Undergraduate or graduate education completed Exceptional written and verbal communication skills with a strong command of English. High attention to detail and analytical precision. Ability to work independently in a remote and asynchronous environment. Application & Screening Process Submit your resume and complete a short form. A 20–30 minute screening process involving a quick interview and evaluation of your expertise. Applications reviewed on a rolling basis; early applications encouraged.
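As a small, hypothetical example of the kind of check a reviewer might script when grading an LLM's statistical claim — here, recomputing a two-sample t-test with SciPy; the data are made up:

```python
# Hypothetical grading aid: recompute a two-sample t-test to verify a model's
# claimed p-value. The samples below are invented for illustration.
from scipy import stats

group_a = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3]
group_b = [12.9, 13.1, 12.7, 13.4, 12.8, 13.0]

t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A reviewer would compare this p-value against the model's stated answer and
# check that the conclusion (reject / fail to reject at alpha = 0.05) matches.
```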
Posted 2 weeks ago
20.0 years
0 Lacs
India
Remote
Company Description Svitla Systems, Inc. is a global digital solutions company with over 20 years of experience, crafting more than 5,000 transformative solutions for clients worldwide. Our mission is to leverage digital, cloud, data, and intelligent technologies to create sustainable solutions for our clients, enhancing their growth and competitive edge. With a diverse team of over 1,000 technology professionals, Svitla serves a range of clients from innovative startups to Fortune 500 companies across 20+ industries. Svitla operates from 10 delivery centers globally, specializing in areas like cloud migration, data analytics, web and mobile development, and more. We are proud to be a WBENC-certified business and one of the largest, fastest-growing women-owned IT companies in the US. Role Description This is a fully remote, full-time, long-term contractual position with one of our clients who is building the next generation of secure, real-time proctoring solutions for high-stakes exams. We’re looking for a Senior ML/AI Engineer to architect, implement, and maintain Azure-based AI models that power speech-to-text, computer vision, identity verification, and intelligent chat features during exam sessions. Responsibilities - Implement real-time speech-to-text transcription and audio-quality analysis using Azure AI Speech. - Build prohibited-item detection, OCR, and face-analysis pipelines with Azure AI Vision. - Integrate Azure Bot Service for rule-based, intelligent chat support. - Collaborate with our DevOps Engineer on CI/CD and infrastructure-as-code for AI model deployment. - Train, evaluate, and deploy object-detection models (e.g., screen-reflection, background faces, ID checks) using Azure Custom Vision. - Develop and maintain skeletal-tracking models (OpenPose/MediaPipe) for gaze-anomaly detection. - Fine-tune Azure Face API for ID-to-headshot matching at session start and continuous identity validation. - Expose inference results via REST APIs in partnership with backend developers to drive real-time proctor dashboards and post-session reports. - Monitor model performance, manage versioning/retraining workflows, and optimize accuracy for edge-case scenarios. Qualifications - Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or related field. - 5+ years of professional ML/AI experience, with at least 2 years working on production-grade Azure Cognitive Services. - Strong Python skills plus 3+ years with TensorFlow or PyTorch. - Hands-on experience (1–2 years) with: - Azure AI Speech (speech-to-text, audio analysis) - Azure AI Vision (object detection, OCR, face analysis) - Azure Custom Vision model training and deployment - Azure Face API fine-tuning and biometric matching - Azure Bot Service integration - Solid understanding of CI/CD practices and tools (Azure DevOps, Docker, Kubernetes) with 2+ years of collaboration on AI model deployments. - 2+ years building and consuming RESTful or gRPC APIs for AI inference. - Proven track record of monitoring and optimizing model performance in production. Good to haves - 1+ year with skeletal-tracking frameworks (OpenPose, MediaPipe). - Familiarity with Azure ML Studio, ML pipelines, or MLflow for model versioning and retraining. - Experience with edge-deployment frameworks (TensorFlow Lite, ONNX Runtime). - Background in security and compliance for biometric data (GDPR, PCI-DSS). - Azure AI Engineer Associate or Azure Data Scientist Associate certification. 
Additional Information
- The role is a Fully Remote, Full-Time, Long-Term, Contractual position
- The hiring process includes an initial screening by the recruitment team, an HR motivation interview, an internal tech screening, a client technical interview and finally the client management interview
- The salary range for this position is 50-70 LPA (INR)
- The position needs to be filled on priority; only candidates who have an official notice period (or remaining time on their notice period) of <=30 days will be screened
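As a deliberately simplified stand-in for the "background faces" style of proctoring check described above, the sketch below uses OpenCV's bundled Haar cascade rather than the Azure Vision and Custom Vision services named in the posting; the frame path is a placeholder:

```python
# Simplified illustration only: flag frames containing more than one face, a
# common proctoring heuristic. Uses OpenCV's bundled Haar cascade as a stand-in
# for the Azure Vision pipelines in the posting; "frame.jpg" is a placeholder.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

frame = cv2.imread("frame.jpg")                        # hypothetical webcam frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) > 1:
    print(f"Anomaly: {len(faces)} faces detected in frame")
else:
    print("Single face detected: OK")
```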
Posted 2 weeks ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You’ll Work On 1. Deep Learning & Computer Vision Train models for image classification: binary/multi-class using CNNs, EfficientNet, or custom backbones. Implement object detection using YOLOv5, Faster R-CNN, SSD; tune NMS and anchor boxes for medical contexts. Work with semantic segmentation models (UNet, DeepLabV3+) for region-level diagnostics (e.g., cell, lesion, or nucleus boundaries). Apply instance segmentation (e.g., Mask R-CNN) for microscopy image cell separation. Use super-resolution and denoising networks (SRCNN, Real-ESRGAN) to enhance low-quality inputs. Develop temporal comparison pipelines for changes across image sequences (e.g., disease progression). Leverage data augmentation libraries (Albumentations, imgaug) for low-data domains. 2. Vision-Language Models (VLMs) Fine-tune CLIP, BLIP, LLaVA, GPT-4V to generate explanations, labels, or descriptions from images. Build image captioning models (Show-Attend-Tell, Transformer-based) using paired datasets. Train or use VQA pipelines for image-question-answer triples. Align text and image embeddings with contrastive loss (InfoNCE), cosine similarity, or projection heads. Design prompt-based pipelines for zero-shot visual understanding. Evaluate using metrics like BLEU, CIDEr, SPICE, Recall@K, etc. 3. Model Training, Evaluation & Interpretation Use PyTorch (core), with support from HuggingFace, torchvision, timm, Lightning. Track model performance with TensorBoard, Weights & Biases, MLflow. Implement cross-validation, early stopping, LR schedulers, warm restarts. Visualize model internals using GradCAM, SHAP, Attention rollout, etc. Evaluate metrics: • Classification: Accuracy, ROC-AUC, F1 • Segmentation: IoU, Dice Coefficient • Detection: mAP • Captioning/VQA: BLEU, METEOR 4. Optimization & Deployment Convert models to ONNX, TorchScript, or TFLite for portable inference. Apply quantization-aware training, post-training quantization, and pruning. Optimize for low-power inference using TensorRT or OpenVINO. Build multi-threaded or asynchronous pipelines for batched inference. 5. Edge & Real-Time Systems Deploy models on Jetson Nano/Xavier, Coral TPU. Handle real-time camera inputs using OpenCV, GStreamer and apply streaming inference. Handle multiple camera/image feeds for simultaneous diagnostics. 6. Regulatory-Ready AI Development Maintain model lineage, performance logs, and validation trails for 21 CFR Part 11 and ISO 13485 readiness. Contribute to validation reports, IQ/OQ/PQ, and reproducibility documentation. Write SOPs and datasheets to support clinical validation of AI components. 7. DevOps, CI/CD & MLOps Use Azure Boards + DevOps Pipelines (YAML) to: Track sprints • Assign tasks • Maintain epics & user stories • Trigger auto-validation pipelines (lint, unit tests, inference validation) on code push • Integrate MLflow or custom logs for model lifecycle tracking. • Use GitHub Actions for cross-platform model validation across environments. 8. Bonus Skills (Preferred but Not Mandatory) Experience in microscopy or pathology data (TIFF, NDPI, DICOM formats). Knowledge of OCR + CV hybrid pipelines for slide/dataset annotation. Experience with streamlit, Gradio, or Flask for AI UX prototyping. Understanding of active learning or semi-supervised learning in low-label settings. Exposure to research publishing, IP filing, or open-source contributions. 9. 
Required Background 4–6 years in applied deep learning (post academia) Strong foundation in: Python + PyTorch CV workflows (classification, detection, segmentation) Transformer architectures & attention VLMs or multimodal learning Bachelor’s or Master’s degree in CS, AI, EE, Biomedical Engg, or related field 10. How to Apply Send the following to info@sciverse.co.in Subject: Application – AI Research Engineer (4–8 Yrs, CV + VLM) Include: • Your updated CV • GitHub / Portfolio • Short write-up on a model or pipeline you built and why you’re proud of it OR apply directly via LinkedIn — but email applications get faster visibility. Let’s build AI that sees, understands, and impacts lives.
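One of the segmentation metrics listed above (the Dice coefficient) written out as a short PyTorch function, as an illustrative reference implementation rather than project code:

```python
# Illustrative reference implementation of the Dice coefficient for binary
# segmentation masks (one of the metrics listed above). Not project code.
import torch


def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """pred and target are binary masks of identical shape (0/1 values)."""
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


# Toy example: a 4-pixel mask and a 3-pixel mask overlapping on three pixels.
a = torch.zeros(4, 4); a[0, :2] = 1; a[1, :2] = 1
b = torch.zeros(4, 4); b[0, :2] = 1; b[1, 0] = 1
print(f"Dice = {dice_coefficient(a, b).item():.3f}")  # 2*3 / (4+3) ≈ 0.857
```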
Posted 2 weeks ago
2.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position: Machine Learning Scientist Location: Onsite(Mumbai) Employment Type: Full-Time Experience Required: 2 to 5 years About Nurdd Nurdd is an advanced marketing attribution platform purpose-built for the influencer economy. We help brands move beyond vanity metrics to truly understand which creators, content, and channels drive conversions. Our attribution engine leverages machine learning and real-time data to power smarter campaign decisions and direct brand-creator collaboration. As we scale our AI infrastructure, we are looking for a Machine Learning Scientist to develop models that enhance performance visibility, detect patterns in creator behavior, and drive predictive insights for marketing outcomes. Role Overview As a Machine Learning Scientist at Nurdd, you will play a key role in building the intelligence that powers our platform. You'll work closely with cross-functional teams to design and deploy ML models that enhance product functionality, drive decision-making, and deliver measurable impact. This role requires a strong foundation in applied machine learning and the ability to translate complex data into actionable insights. Responsibilities Develop machine learning models for creator scoring, performance attribution, and predictive analytics Work with data from multiple sources (social platforms, internal app events, web analytics, etc.) Collaborate with engineering teams to productionize ML pipelines and integrate outputs via APIs Build experimentation frameworks for A/B testing and model validation Continuously evaluate and improve model performance, scalability, and reliability Stay current with research in machine learning, marketing science, and attribution modeling Requirements 2 to 5 years of experience in applied machine learning or data science roles Strong foundation in supervised/unsupervised learning, classification, regression, and clustering Proficiency in Python and libraries such as scikit-learn, TensorFlow, or PyTorch Experience working with large-scale, noisy, or unstructured datasets Familiarity with ML Ops workflows and deployment on cloud platforms (AWS, GCP, etc.) Understanding of experimentation, model explainability, and performance metrics Excellent communication skills for cross-functional collaboration Preferred Skills Experience with recommendation systems or ranking models Familiarity with influencer marketing, ad tech, or consumer behavior modeling Knowledge of time-series analysis and causal inference techniques Exposure to Spark, Airflow, or similar data pipeline tools Why Work With Us Build AI models that power real-time decisions for brands and creators Solve meaningful problems at the intersection of marketing, data, and human behavior Work in a lean, fast-paced environment with high technical ownership Be part of a mission-driven product shaping the future of performance marketing
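A toy sketch of the A/B experimentation work mentioned above — comparing conversion counts for two creator campaigns with a chi-square test; the numbers are invented and this is not Nurdd's attribution model:

```python
# Toy illustration of an A/B conversion comparison with invented numbers.
from scipy.stats import chi2_contingency

# Rows: campaign A, campaign B. Columns: converted, did not convert.
table = [
    [120, 2380],   # creator campaign A: 120 conversions out of 2,500 clicks
    [165, 2335],   # creator campaign B: 165 conversions out of 2,500 clicks
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference in conversion rate is significant at alpha = 0.05")
else:
    print("No significant difference detected")
```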
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
We are seeking a high-impact AI/ML Engineer to lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You'll be part of a fast-paced, outcome-oriented AI & Analytics team, working alongside data scientists, engineers, and product leaders to transform business use cases into real-time, scalable AI systems. This role demands strong technical leadership, a product mindset, and hands-on expertise in Computer Vision, Audio Intelligence, and Deep Learning.
Responsibilities
Architect, develop, and deploy ML models for multimodal problems, including vision (image/video), audio (speech/sound), and NLP tasks. Own the complete ML lifecycle: data ingestion, model development, experimentation, evaluation, deployment, and monitoring. Leverage transfer learning, foundation models, or self-supervised approaches where suitable. Design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow. Collaborate with MLOps, data engineering, and DevOps to productionize models using Docker, Kubernetes, or serverless infrastructure. Continuously monitor model performance and implement retraining workflows to ensure accuracy over time. Stay ahead of the curve on cutting-edge AI research (e.g., generative AI, video understanding, audio embeddings) and incorporate innovations into production systems. Write clean, well-documented, and reusable code to support agile experimentation and long-term platform maintainability.
Qualifications
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. 5-8 years of experience in AI/ML Engineering, with at least 3 years in applied deep learning.
Technical Skills
Languages: Expert in Python; good knowledge of R or Java is a plus.
ML/DL Frameworks: Proficient with PyTorch, TensorFlow, Scikit-learn, ONNX.
Computer Vision: Image classification, object detection, OCR, segmentation, tracking (YOLO, Detectron2, OpenCV, MediaPipe).
Audio AI: Speech recognition (ASR), sound classification, audio embedding models (Wav2Vec2, Whisper, etc.).
Data Engineering: Strong with Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data.
NLP/LLMs: Working knowledge of Transformers, BERT/LLaMA, and the Hugging Face ecosystem is preferred.
Cloud & MLOps: Experience with AWS/GCP/Azure, MLflow, SageMaker, Vertex AI, or Azure ML.
Deployment & Infrastructure: Experience with Docker, Kubernetes, REST APIs, serverless ML inference.
CI/CD & Version Control: Git, DVC, ML pipelines, Jenkins, Airflow, etc.
Soft Skills & Competencies
Strong analytical and systems thinking; able to break down business problems into ML components. Excellent communication skills; able to explain models, results, and decisions to non-technical stakeholders. Proven ability to work cross-functionally with designers, engineers, product managers, and analysts. Demonstrated bias for action, rapid experimentation, and iterative delivery of impact. (ref:hirist.tech)
Posted 2 weeks ago
1.0 years
0 Lacs
Cuttack, Odisha, India
On-site
Company Description
AIRA is India’s first emotionally intelligent AI voice companion designed to support young people through loneliness, stress, and emotional burnout. We’re building a real-time, voice-based relationship experience using LLMs, TTS, and STT. AIRA is not just a chatbot; she’s a presence. This is your chance to join the founding tech team of a high-impact emotional AI startup where your code directly touches hearts.
Role Description
We're hiring a Kotlin Android Developer to join the core team of AIRA – India’s First Emotionally Intelligent AI Companion App. This role is strictly focused on Android development with Kotlin, and requires strong hands-on experience in AI/ML integration – including speech-to-text (STT), text-to-speech (TTS), and on-device inference. You'll work closely on building the mobile MVP from scratch and must be passionate about emotional AI, scalable architecture, and clean, efficient code. Please do not apply if you're looking for web or UI/UX roles – this is a focused mobile-AI position only.
Qualifications
Proven experience in Android app development using Kotlin (minimum 1 year preferred; personal or freelance projects acceptable). Strong understanding of AI/ML integration in mobile apps (e.g., Whisper, TTS engines, LLM APIs, TensorFlow Lite, ONNX). Familiarity with REST APIs, local storage, Jetpack components, and multi-threading in Android. Prior work with voice-based apps, chatbots, or AI companions is a major plus. Ability to work independently and deliver production-ready code under tight timelines. Basic knowledge of Git, GitHub, and agile workflows. A passion for emotional AI, clean UI/UX, and scalable performance on low-end Android devices.
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Career Renew is recruiting for one of its clients a Senior Full Stack Engineer - AI. This is a fully remote role, and candidates can be based anywhere as long as they can overlap 4 hours with EST. We are a consumer AI studio which has shipped and grown multiple products in the past to millions of users. We are currently looking for a Senior Engineering Lead to join our team for our next platform around AI Agents.
Requirements:
You have experience in end-to-end application development - full stack development, front end, backend, infrastructure and ops across multiple stacks (TypeScript, JavaScript, MERN)
You have shipped products and worked with product teams on design and UX
You are hands-on and write robust code
You have some understanding of AI agent systems
You have good working proficiency in English, especially written
You have extraordinary problem-solving skills, particularly in debugging complex multi-agent interactions and emergent behaviors
Bonus:
You are a graduate of a top-tier university
You have worked on a well-known product
Strong proficiency in programming languages such as JavaScript, Go, and Rust
Demonstrable skills in both backend and frontend development at large scale, with technologies such as React, Tailwind, Node, Redis, MongoDB, and Docker
Strong proficiency in AWS cloud infrastructure at scale, with demonstrable skills in cost management
Strong proficiency in AI/LLM inference and AI model fine-tuning
You have deep experience building and scaling AI agent infrastructure, including: multi-agent orchestration systems and workflow management; agent reasoning frameworks, decision trees, and planning algorithms; tool integration architectures and function calling systems; agent memory management, context persistence, and state handling; real-time agent coordination and communication protocols
You have experience in AI training, fine-tuning, inference, and deployment - personal projects count
You have built or contributed to open-source agent frameworks, LLM orchestration tools, or autonomous system libraries
You have experience with reinforcement learning, particularly in multi-agent environments
You understand the theoretical foundations of agent architectures (ReAct, Chain-of-Thought, Tree of Thoughts, etc.)
You have a network of highly talented engineers and can grow into a leadership role
You are strongly opinionated about the future of technology. You are not just an executor but have an independent vision
Benefits & Compensation Package:
Health Care Plan (Medical, Dental & Vision)
Paid Time Off (Vacation, Sick & Public Holidays)
Family Leave (Maternity, Paternity)
Work From Home
Stock Option Plan
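As a heavily compressed, stubbed sketch of the ReAct-style loop the posting alludes to (reason, pick a tool, observe, repeat) — the "model" here is a hard-coded stand-in in Python rather than a real LLM call, and the tool set is invented:

```python
# Simplified ReAct-style loop with a hard-coded stand-in for the LLM. Real agent
# infrastructure would call a model API, parse its tool choice, and manage
# memory/state; this only shows the reason -> act -> observe shape.
from typing import Callable, Dict, List

TOOLS: Dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy tool for demo only
    "echo": lambda text: text,
}


def fake_llm(question: str, observations: List[str]) -> dict:
    """Stand-in for a model call: decide on a tool or return a final answer."""
    if not observations:
        return {"thought": "I should compute this.", "tool": "calculator", "input": "21 * 2"}
    return {"thought": "I have the result.", "final": f"The answer is {observations[-1]}."}


def run_agent(question: str, max_steps: int = 3) -> str:
    observations: List[str] = []
    for _ in range(max_steps):
        step = fake_llm(question, observations)   # reason
        if "final" in step:
            return step["final"]
        result = TOOLS[step["tool"]](step["input"])  # act
        observations.append(result)                  # observe
    return "Stopped without a final answer."


print(run_agent("What is 21 times 2?"))
```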
Posted 2 weeks ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Description Sales, Marketing and Global Services (SMGS) AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest growing small- and mid-market accounts to enterprise-level customers including public sector. Do you like startups? Are you interested in Cloud Computing & Generative AI? Yes? We have a role you might find interesting. Startups are the large enterprises of the future. These young companies are founded by ambitious people who have a desire to build something meaningful and to challenge the status quo. To address underserved customers, or to challenge incumbents. They usually operate in an environment of scarcity: whether that’s capital, engineering resource, or experience. This is where you come in. The Startup Solutions Architecture team is dedicated to working with these early stage startup companies as they build their businesses. We’re here to make sure that they can deploy the best, most scalable, and most secure architectures possible – and that they spend as little time and money as possible doing so. We are looking for technical builders who love the idea of working with early stage startups to help them as they grow. In this role, you’ll work directly with a variety of interesting customers and help them make the best (and sometimes the most pragmatic) technical decisions along the way. You’ll have a chance to build enduring relationships with these companies and establish yourself as a trusted advisor. As well as spending time working directly with customers, you’ll also get plenty of time to “sharpen the saw” and keep your skills fresh. We have more than 175 services across a range of different categories and it’s important that we can help startups take advantages of the right ones. You’ll also play an important role as an advocate with our product teams to make sure we are building the right products for the startups you work with. And for the customers you don’t get to work with on a 1:1 basis you’ll get the chance to share your knowledge more broadly by working on technical content and presenting at events. A day in the life You’re surrounded by innovation. You’re empowered with a lot of ownership. Your growth is accelerated. The work is challenging. You have a voice here and are encouraged to use it. Your experience and career development is in your hands. We live our leadership principles every day. At Amazon, it's always "Day 1". Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. 
Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship and Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Basic Qualifications 8+ years of specific technology domain areas (e.g. software development, cloud computing, systems engineering, infrastructure, security, networking, data & analytics) experience 3+ years of design, implementation, or consulting in applications and infrastructures experience 10+ years of IT development or implementation/consulting in the software or Internet industries experience Preferred Qualifications Experience in developing and deploying large scale machine learning or deep learning models and/or systems into production, including batch and real-time data processing Experience scaling model training and inference using technologies like Slurm, ParallelCluster, Amazon SageMaker Hands-on experience benchmarking and optimizing performance of models on accelerated computing (GPU, TPU, AI ASICs) clusters with high-speed networking. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - AWS India - Delhi Job ID: A3017037
Posted 2 weeks ago
3.0 years
0 Lacs
India
On-site
At Yosemite Crew (https://github.com/YosemiteCrew/Yosemite-Crew), we're creating an open, modern, and compassionate Practice Management System (PMS) designed with and for pet businesses, pet owners and developers. We're not just building software, we're building a community-powered platform that brings transparency, usability, and trust back into the tools vets rely on every day. We’re hiring a Frontend Developer to lead the development of mobile apps that integrate deeply with AI systems — from LLMs to on-device inference and intelligent user interactions. You’ll own the mobile layer end-to-end using Expo and Bare Workflow, while also collaborating with the AI/backend team to embed and optimize smart features into the app.
Key Responsibilities:
Build and maintain high-quality React Native apps (Expo + Bare) for iOS and Android
Integrate AI models, APIs, and services into mobile apps (e.g., OpenAI, Hugging Face, custom endpoints)
Design intelligent UI/UX flows based on AI-generated content or actions
Collaborate with AI engineers to run model inference efficiently (cloud or on-device)
Connect mobile apps with backend services via REST, WebSockets, or GraphQL
Handle the app lifecycle: development, debugging, testing, deployment to App Stores
Required Skills & Qualifications:
1–3 years experience with React Native in both Expo and Bare workflows
Solid grasp of JavaScript and TypeScript
Experience integrating AI APIs (e.g., OpenAI, Replicate, custom LLM endpoints)
Ability to design and render dynamic UI based on AI responses
Familiarity with OTA updates (EAS Update, CodePush) and deployment pipelines
Comfortable with mobile app store requirements, permissions, push notifications
Understanding of mobile-specific challenges around AI latency, streaming, and user experience
Nice to Have:
Backend development experience (Node.js, Express, API creation)
Experience working with on-device inference (e.g., TensorFlow Lite, Core ML)
Familiarity with vector stores, RAG, or LLM orchestration tools (e.g., LangChain)
Native module development in Swift, Kotlin, or bridging native SDKs
Working knowledge of real-time communication (sockets, Firebase, etc.)
Contributions to open source projects and prior startup experience
Why join Yosemite Crew?
Help build something vets actually love using, with ownership and long-term impact. Work with a small, values-driven team that truly listens. Shape a mission-led, community-first culture from day one. Get access to an expanding network of pet startups, tech leaders, and creators giving back to the industry.
Posted 2 weeks ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Siemens Digital Industries Software is a leading provider of solutions for the design, simulation, and manufacture of products across many different industries. Formula 1 cars, skyscrapers, ships, space exploration vehicles, and many of the objects we see in our daily lives are being conceived and manufactured using our Product Lifecycle Management (PLM) software. We are seeking AI Backend Engineers to play a pivotal role in building our Agentic Workflow Service and Retrieval-Augmented Generation (RAG) Service. In this hybrid role, you'll leverage your expertise in both backend development and machine learning to create robust, scalable AI-powered systems using AWS Kubernetes, Amazon Bedrock models, AWS Strands Framework, and LangChain / LangGraph. Understanding of and expertise in: Design and implement core backend services and APIs for agentic framework and RAG systems LLM-based applications using Amazon Bedrock models RAG systems with advanced retrieval mechanisms and vector database integration Implement agentic workflows using technologies such as AWS Strands Framework, LangChain / LangGraph Design and develop microservices that efficiently integrate AI capabilities Create scalable data processing pipelines for training data and document ingestion Optimize model performance, inference latency, and overall system efficiency Implement evaluation metrics and monitoring for AI components Write clean, maintainable, and well-tested code with comprehensive documentation Collaborate with multiple cross-functional team members including DevOps, product, and frontend engineers Stay current with the latest advancements in LLMs and AI agent architectures Minimum Experience Requirements 6+ years of total software engineering experience Backend development experience with strong Python programming skills Experience in ML/AI engineering, particularly with LLMs and generative AI applications Experience with microservices architecture, API design, and asynchronous programming Demonstrated experience building RAG systems and working with vector databases LangChain/LangGraph or similar LLM orchestration frameworks Strong knowledge of AWS services, particularly Bedrock, Lambda, and container services Experience with containerization technologies and Kubernetes Understanding of ML model deployment, serving, and monitoring in production environments Knowledge of prompt engineering and LLM fine-tuning techniques Excellent problem-solving abilities and system design skills Strong communication skills and ability to explain complex technical concepts Experience in Kubernetes, AWS Serverless Experience in working with Databases (SQL, NoSQL) and data structures Ability to learn new technologies quickly Preferred Qualifications: Must have AWS certifications - Associate Architect / Developer / Data Engineer / AI Track Must have familiarity with streaming architectures and real-time data processing Must have experience with ML experiment tracking and model versioning Must have understanding of ML/AI ethics and responsible AI development Experience with AWS Strands Framework Knowledge of semantic search and embedding models Contributions to open-source ML/AI projects We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. 
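To ground the RAG responsibilities in something concrete, here is a minimal, hedged sketch of retrieval plus generation against Amazon Bedrock; the model IDs, payload shapes, and brute-force cosine retriever are assumptions to verify against the Bedrock documentation, and the production service described above would sit behind LangChain/LangGraph or the Strands framework with a real vector database rather than an in-memory list.

```python
# Illustration only: model IDs, request/response shapes, and the in-memory retriever
# are assumptions to check against the Bedrock documentation before use.
import json

import boto3
import numpy as np

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> np.ndarray:
    """Embed text with a Titan embedding model (assumed model ID and payload)."""
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return np.asarray(json.loads(resp["body"].read())["embedding"])

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Brute-force cosine-similarity retrieval standing in for a real vector database."""
    doc_vecs = np.stack([embed(d) for d in corpus])
    q = embed(query)
    scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str, corpus: list[str]) -> str:
    """Ground the generation step on the retrieved context (assumed Claude model ID)."""
    context = "\n\n".join(retrieve(query, corpus))
    resp = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{
            "role": "user",
            "content": [{"text": f"Answer using only this context:\n{context}\n\nQuestion: {query}"}],
        }],
    )
    return resp["output"]["message"]["content"][0]["text"]
```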
We are Siemens A collection of over 377,000 minds building the future, one day at a time in over 200 countries. We're dedicated to equality, and we welcome applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and creativity and help us shape tomorrow! We offer a comprehensive reward package which includes a competitive basic salary, bonus scheme, generous holiday allowance, pension, and private healthcare. Siemens Software. ‘Transform the everyday’, #SWSaaS
Posted 2 weeks ago
0.0 - 1.0 years
0 - 0 Lacs
Coimbatore, Tamil Nadu
Remote
At 360Watts, we’re rethinking what it means to own and control solar energy at home. We’re building an intelligent rooftop solar system that doesn’t just sit on your roof: it monitors, automates, and adapts, giving users full visibility and control over their energy flow. Are you a systems thinker who can build the bridge between physical and digital? A solar system with an IoT layer for automation that is modular and powered by AI/ML capabilities. Upgradable to users’ needs from basic to advanced automation. Remote-controlled by users via a smart-home energy management system (EMS) app. >> Responsibilities Lead the design, testing and iteration of the end-to-end IoT system architecture layer, powered by edge MCUs and hybrid data flows to the cloud and smart-home control hub Develop real-time firmware to read sensors, control relays, and implement safe, OTA-updatable logic on MCUs with simple to complex inference capabilities (such as ESP32-S3, Raspberry Pi CM4, Jetson Nano/Xavier NX), and maintain firmware modularity for upgrades Define IoT use cases, data workflows, and communication protocol stacks (MODBUS RTU/TCP, MQTT) for integration with the inverter, battery system and cloud EMS Guide the hardware intern through embedded prototyping from breadboard to PCB: wiring, testing and debugging alongside the solar design engineer Collaborate with the solar engineer and EMS software lead on rapid prototyping, field testing, and user-centric automation logic Drive field deployment readiness, from pilot configuration to relay switching stability, inverter integration, and offline fallback modes >> Must have Systems-oriented Embedded C/C++ Edge (AI/ML) architecture & modular firmware design Real-world firmware with control logic + sensor/relay integration Protocol stack implementation (MODBUS RTU/TCP, MQTT) OTA, structured data, and embedded fault handling System debugging and field-readiness >> Background Bachelor’s or Master’s degree in Electrical Engineering, Electronics & Communication, Embedded Systems, or a related field 1-3 years of work experience Professional English language >> Job details Salary depending on skill, between Rs. 30k–50k per month Option for equity (ESOP) after 9-12 months Start date = 15.08.2025 (or) 01.09.2025 Probation = 3 months If you are excited, please apply soon. Job Type: Full-time Pay: ₹35,000.00 - ₹50,000.00 per month Benefits: Flexible schedule Paid time off Schedule: Monday to Friday Weekend availability Supplemental Pay: Performance bonus Ability to commute/relocate: Coimbatore, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Experience: Embedded software: 1 year (Required) Language: English (Preferred) Work Location: In person Speak with the employer +91 9087610051 Application Deadline: 27/07/2025 Expected Start Date: 01/09/2025
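To make the MODBUS-to-MQTT data flow above concrete, here is a minimal hub-side sketch in Python that decodes inverter holding registers into a JSON telemetry sample; the register map, scaling, and topic are hypothetical, the transport calls are stubbed, and the MCU firmware itself would be Embedded C/C++ as the must-haves state.

```python
# Hub-side illustration only: the register map, scaling, and topic are hypothetical.
import json
import random
import time

# (register_address, scale_divisor) per field; values are invented for the demo.
REGISTER_MAP = {
    "pv_power_w":      (0x0010, 1),
    "battery_soc_pct": (0x0020, 1),
    "grid_voltage_v":  (0x0030, 10),
}

def read_holding_register(address: int) -> int:
    """Stand-in for a MODBUS RTU/TCP read (e.g. via pymodbus); returns one raw word."""
    return random.randint(0, 5000)  # simulated register value for the demo

def build_telemetry() -> str:
    """Decode raw 16-bit registers into scaled engineering units and serialize as JSON."""
    sample = {"ts": int(time.time())}
    for field, (address, divisor) in REGISTER_MAP.items():
        sample[field] = (read_holding_register(address) & 0xFFFF) / divisor
    return json.dumps(sample)

def publish(topic: str, payload: str) -> None:
    """Stand-in for an MQTT publish (e.g. via paho-mqtt) toward the cloud EMS."""
    print(f"{topic} <- {payload}")

if __name__ == "__main__":
    # One poll/publish cycle; a real gateway would loop, batch, and buffer locally
    # when the uplink is offline (the posting's "offline fallback modes").
    publish("site/inverter/telemetry", build_telemetry())
```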
Posted 2 weeks ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description Are you excited about delighting millions of customers by driving the most relevant marketing initiatives? Do you thrive in a fast-moving, large-scale environment that values data-driven decision making and sound scientific practices? Amazon is seeking a Data Scientist. This team is focused on driving key priorities: (a) core shopping that elevates the shopping CX for all shoppers in all lifecycle stages, (b) developing ways to accelerate lifecycle progression and building foundational capabilities to address shopper needs, and (c) alternate shopping models. We are looking for a Data Scientist to join our efforts to support the next generation of analytics systems for measuring consumer behavior using machine learning and econometrics at big data scale at Amazon. You will apply machine learning and statistical algorithms across multiple platforms to harness enormous volumes of online data at scale, define customer-facing products, and measure customer responses to various marketing initiatives. The Data Scientist will be a technical player in a team working to build custom science solutions to drive new customers, engage existing customers and drive marketing efficiencies by leveraging approaches that optimize Amazon’s systems using cutting-edge quantitative techniques. The right candidate needs to be fluent in: Data warehousing and EMR (Hive, Pig, R, Python). Feature extraction, feature engineering and feature selection. Machine learning, causal inference, statistical algorithms and recommenders. Model evaluation, validation and deployment. Experimental design and testing. Basic Qualifications 8+ years of experience in a data scientist or similar role involving data extraction, analysis, statistical modeling and communication 7+ years of experience with data querying languages (e.g., SQL), scripting languages (e.g., Python) or statistical/mathematical software (e.g., R, SAS, Matlab) Experience with statistical models, e.g., multinomial logistic regression Preferred Qualifications - Experience processing, filtering, and presenting large quantities (100K to millions of rows) of data - Experience with statistical analysis, data modeling, machine learning, optimization, regression modeling and forecasting, time series analysis, data mining, and demand modeling - Experience applying various machine learning techniques, and understanding the key parameters that affect their performance - Excellent written and verbal communication skills, with a strong ability to interact, communicate, present, and influence within multiple levels of the organization - Experience in an operational environment developing, fast-prototyping, piloting and launching analytic products - Experience in writing academic-style papers presenting both the methodologies used and the results of data science projects - Ability to develop experimental and analytic plans for data modeling processes, use strong baselines, and accurately understand cause-and-effect relations - Experience in creating data-driven visualizations to describe an end-to-end system - Effective communication with colleagues from computer science, operations research and business backgrounds - Ability to work on a diverse team or with a diverse range of coworkers Our inclusive culture empowers Amazonians to deliver the best results for our customers.
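As a small, hedged illustration of the "multinomial logistic regression" item in the basic qualifications, the scikit-learn sketch below fits a three-class model on synthetic data; the feature count and class structure are invented stand-ins, not Amazon's data or methodology.

```python
# Synthetic stand-in only; real features would come from the warehouse via SQL/EMR.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, n_informative=8,
                           n_classes=3, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=7)

# The default lbfgs solver fits a true multinomial (softmax) model for the 3-class target.
model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```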
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Karnataka Job ID: A2778125
Posted 2 weeks ago
4.0 years
1 - 3 Lacs
Hyderābād
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary: We are seeking an experienced AI Engineer with a minimum of 4 years of experience in developing machine learning models and at least 3 years of experience deploying AI solutions into production. The ideal candidate will be proficient in Python, TensorFlow or PyTorch, and experienced with MLOps tools and cloud platforms. As an AI Engineer in the retail home improvement space, you'll help build intelligent systems that enhance the customer experience, optimize inventory, and drive smarter business decisions. Responsibilities: - Design, develop, and deploy AI and machine learning solutions tailored to retail challenges, such as personalized product recommendations, dynamic pricing, and demand forecasting. - Collaborate with data scientists, product managers, engineers, and retail analysts to develop AI-driven features that improve customer experience and operational efficiency. - Build and manage data pipelines that support large-scale training and inference workloads using structured and semi-structured retail data. - Develop and optimize deep learning models using TensorFlow or PyTorch for applications like visual product search, customer segmentation, and chatbot automation. - Integrate AI models into customer-facing platforms (e.g., mobile apps, websites) and backend retail systems (e.g., inventory management, logistics). - Monitor model performance post-deployment and implement continuous improvement strategies based on business KPIs and real-time data. - Contribute to model governance, testing, and documentation to ensure models are fair, explainable, and secure.
- Stay informed about AI trends in the retail and e-commerce industry to help the team stay competitive and innovative. Mandatory skill sets: ‘Must have’ knowledge, skills and experiences · AI Engineer - TensorFlow, Python, PyTorch, scikit-learn, NLP, deep learning, supervised learning, MLOps, CI/CD, API development with FastAPI, ML system integrations, understanding of Jenkins, GitHub Actions and Airflow, Docker and Kubernetes, model design, development, deployment and maintenance Preferred skill sets: ‘Good to have’ knowledge, skills and experiences · Experience with front-end applications such as Streamlit Years of experience required: 4 to 12 years Education qualification: BE, B.Tech, ME, M.Tech, MBA, MCA (60% and above) Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering, Master of Business Administration Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Python (Programming Language), PyTorch, TensorFlow Optional Skills Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date
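Since the mandatory skill set calls out API development with FastAPI for serving models, here is a minimal, hedged sketch of an inference endpoint; the route, request schema, and joblib artefact path are hypothetical, and model_dump() assumes Pydantic v2 (use .dict() on v1).

```python
# Hypothetical service: the route, request schema, and artefact path are illustrative.
from contextlib import asynccontextmanager

import joblib
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

class DemandRequest(BaseModel):
    store_id: str
    sku: str
    price: float
    promo_flag: int

models: dict = {}

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Load the trained artefact once at startup instead of on every request.
    models["demand"] = joblib.load("models/demand_forecaster.joblib")
    yield
    models.clear()

app = FastAPI(title="retail-inference", lifespan=lifespan)

@app.post("/v1/demand-forecast")
def predict(req: DemandRequest) -> dict:
    features = pd.DataFrame([req.model_dump()])
    forecast = float(models["demand"].predict(features)[0])
    return {"sku": req.sku, "forecast_units": forecast}
```

Run it with, for example, `uvicorn main:app --reload` (assuming the file is saved as main.py) and POST a JSON body matching DemandRequest; CI/CD, Docker, and Kubernetes then wrap this service in the way the mandatory skill set describes.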
Posted 2 weeks ago
6.0 years
2 Lacs
Gurgaon
On-site
Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title and Summary Senior Data Scientist Job Title - Senior Data Scientist – Data & Analytics Our Purpose We work to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. We cultivate a culture of inclusion for all employees that respects their individual strengths, views, and experiences. We believe that our differences enable us to be a better team – one that makes better decisions, drives innovation and delivers better business results. Who is Mastercard? Mastercard is a global technology company in the payments industry. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all. Our Team As consumer preference for digital payments continues to grow, ensuring a seamless and secure consumer experience is top of mind. The Optimization Solutions team focuses on tracking digital performance across all products and regions, understanding the factors influencing performance and the broader industry landscape. This includes delivering data-driven insights and business recommendations, engaging directly with key external stakeholders on implementing optimization solutions (new and existing), and partnering across the organization to drive alignment and ensure action is taken. Are you excited about data assets and the value they bring to an organization? Are you an evangelist for data-driven decision-making? Are you motivated to be part of a team that builds large-scale analytical capabilities supporting end users across 6 continents? Do you want to be the go-to resource for data science & analytics in the company? The Role You will be part of the AI Centre of Excellence in Core Products at Mastercard, working hands-on on ML and AI projects. You will be the technical lead for identifying and solving merchant localization across various global markets. In this role, you will build new ML models to detect merchant localization and scale existing models for recurring inference. You will work closely with multiple internal business groups across Mastercard.
You are also responsible for creating design documents, including data models, data flow diagrams, and system architecture diagrams. All about You A major in Computer Science, Data Science, Analytics, Mathematics, Statistics, or a related engineering field, or equivalent work experience 6+ years of experience using Python and SQL, with knowledge of distributed data systems such as data warehouses 4+ years of experience building, deploying and maintaining ML models Demonstrated success interacting with stakeholders to understand technical needs and ensuring analyses and solutions meet their needs effectively. Able to work in a fast-paced, deadline-driven environment as part of a team and as an individual contributor. Ability to easily move between business, analytical, and technical teams and articulate solution requirements for each group. Experience with enterprise business intelligence/data platforms (e.g., Tableau, Power BI, Streamlit) will be a plus. Experience with cloud-based (SaaS) solutions, ETL processes or API integrations will be a plus. Experience with cloud data platforms (Azure, AWS, Databricks) will be a plus. Additional Competencies Excellent English, quantitative, technical, and communication (oral/written) skills Analytical/Problem Solving Strong attention to detail and quality Creativity/Innovation Self-motivated, operates with a sense of urgency Project Management/Risk Mitigation Able to prioritize and perform multiple tasks simultaneously Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach; and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
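To illustrate the "scale existing models for recurring inference" part of the role, here is a minimal, hedged sketch of a chunked batch-scoring pass; the feature names, artefact path, and score column are assumptions, and the persisted model is assumed to be a full sklearn Pipeline that handles its own categorical encoding.

```python
# Sketch of a recurring batch-scoring job; names and paths are placeholders.
from typing import Iterable, Iterator

import joblib
import pandas as pd

FEATURES = ["txn_count_90d", "avg_ticket_size", "cross_border_share", "mcc_code"]

def score_chunks(model_path: str,
                 chunks: Iterable[pd.DataFrame]) -> Iterator[pd.DataFrame]:
    """Score merchant feature chunks with a persisted model, streaming the results.

    `chunks` would typically come from pd.read_sql(..., chunksize=...) against the
    feature store, so memory use stays flat as the merchant universe grows.
    """
    model = joblib.load(model_path)
    for chunk in chunks:
        scored = chunk[["merchant_id"]].copy()
        scored["localization_score"] = model.predict_proba(chunk[FEATURES])[:, 1]
        yield scored  # downstream: write to a scores table for the business teams

if __name__ == "__main__":
    demo = pd.DataFrame({"merchant_id": ["m1", "m2"],
                         "txn_count_90d": [120, 15],
                         "avg_ticket_size": [42.5, 310.0],
                         "cross_border_share": [0.05, 0.4],
                         "mcc_code": ["5812", "5411"]})
    for out in score_chunks("models/localization_clf.joblib", [demo]):
        print(out)
```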
Posted 2 weeks ago
6.0 - 8.0 years
2 - 5 Lacs
Gurgaon
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Azure Cloud and Python Developer for ML - Senior 1/2 EY GDS Consulting digital engineering, is seeking an experienced Azure Cloud and Python Developer for ML to join our Emerging Technologies team in DET, GDS. This role presents an exciting opportunity to contribute to innovative projects and be a key player in shaping our technological advancements. The opportunity We are seeking an experienced Azure Cloud and Python Developer with 6-8 years of hands-on experience in machine learning (ML) development. This role involves developing and deploying ML models on the Azure cloud platform, designing efficient data pipelines, and collaborating with data scientists and stakeholders to deliver technical solutions. Your key responsibilities Develop and deploy machine learning models on Azure cloud platform using Python programming language, ensuring scalability and efficiency. Design and implement scalable and efficient data pipelines for model training and inference, optimizing data processing workflows. Collaborate closely with data scientists and business stakeholders to understand requirements, translate them into technical solutions, and deliver high-quality ML solutions. Implement best practices for ML development, including version control using tools like Git, testing methodologies, and documentation to ensure reproducibility and maintainability. Design and optimize ML algorithms and data structures for performance and accuracy, leveraging Azure cloud services and Python libraries such as TensorFlow, PyTorch, or scikit-learn. Monitor and evaluate model performance, conduct experiments, and iterate on models to improve predictive accuracy and business outcomes. Work on feature engineering, data preprocessing, and feature selection techniques to enhance model performance and interpretability. Collaborate with DevOps teams to deploy ML models into production environments, ensuring seamless integration and continuous monitoring. Stay updated with the latest advancements in ML, Azure cloud services, and Python programming, and apply them to enhance ML capabilities and efficiency. Provide technical guidance and mentorship to junior developers and data scientists, fostering a culture of continuous learning and innovation. Skills and attributes Soft Skills Bachelor's or master's degree in computer science, data science, or related field, with a strong foundation in ML algorithms, statistics, and programming concepts. Minimum 6-8 years of hands-on experience in developing and deploying ML models on Azure cloud platform using Python programming language. Expertise in designing and implementing scalable data pipelines for ML model training and inference, utilizing Azure Data Factory, Azure Databricks, or similar tools. Proficiency in Python programming language, including libraries such as TensorFlow, PyTorch, scikit-learn, pandas, and NumPy for ML model development and data manipulation. Strong understanding of ML model evaluation metrics, feature engineering techniques, and data preprocessing methods for structured and unstructured data. 
Experience with cloud-native technologies and services, including Azure Machine Learning, Azure Kubernetes Service (AKS), Azure Functions, and Azure Storage. Familiarity with DevOps practices, CI/CD pipelines, and containerization tools like Docker for ML model deployment and automation. Excellent problem-solving skills, analytical thinking, and attention to detail, with the ability to troubleshoot and debug complex ML algorithms and systems. Effective communication skills, both verbal and written, with the ability to explain technical concepts to non-technical stakeholders and collaborate in cross-functional teams. Proactive and self-motivated attitude, with a passion for learning new technologies and staying updated with industry trends in ML, cloud computing, and software development. Strong organizational skills and the ability to manage multiple projects, prioritize tasks, and deliver results within project timelines and specifications. Business acumen and understanding of the impact of ML solutions on business operations and decision-making processes, with a focus on delivering value and driving business outcomes. Collaboration and teamwork skills, with the ability to work effectively in a global, diverse, and distributed team environment, fostering a culture of innovation and continuous improvement. To qualify for the role, you must have A bachelor's or master's degree in computer science, data science, or related field, along with a minimum of 6-8 years of experience in ML development and Azure cloud platform expertise. Strong communication skills and consulting experience are highly desirable for this position. Ideally, you’ll also have Analytical ability to manage complex ML projects and prioritize tasks efficiently. Experience operating independently or with minimal supervision, demonstrating strong problem-solving skills. Familiarity with other cloud platforms and technologies such as AWS, Google Cloud Platform (GCP), or Kubernetes is a plus. What working at EY offers At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
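One way to keep training and inference consistent, as the pipeline responsibilities above require, is to bundle preprocessing and the estimator into a single scikit-learn Pipeline that can later be registered and deployed through Azure ML; the column names, target, and estimator below are placeholders, not EY's stack.

```python
# Placeholder columns and target; the point is train/inference parity via one artefact.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

NUMERIC = ["tenure_months", "monthly_usage"]
CATEGORICAL = ["region", "plan_type"]

def build_pipeline() -> Pipeline:
    # Preprocessing travels with the model, so the deployed endpoint sees raw columns.
    pre = ColumnTransformer([
        ("num", StandardScaler(), NUMERIC),
        ("cat", OneHotEncoder(handle_unknown="ignore"), CATEGORICAL),
    ])
    return Pipeline([("pre", pre), ("model", GradientBoostingRegressor())])

def train(df: pd.DataFrame, target: str = "monthly_spend") -> Pipeline:
    X, y = df[NUMERIC + CATEGORICAL], df[target]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    pipe = build_pipeline().fit(X_tr, y_tr)
    print("MAE:", mean_absolute_error(y_te, pipe.predict(X_te)))
    return pipe

if __name__ == "__main__":
    # Tiny synthetic demo so the sketch runs end to end.
    demo = pd.DataFrame({
        "tenure_months": [3, 14, 27, 8, 40, 22, 5, 31],
        "monthly_usage": [120.0, 80.5, 200.1, 60.0, 310.7, 150.2, 90.3, 220.9],
        "region": ["north", "south", "north", "east", "west", "south", "east", "west"],
        "plan_type": ["basic", "pro", "pro", "basic", "pro", "basic", "pro", "basic"],
        "monthly_spend": [10.0, 25.5, 40.2, 8.0, 60.3, 30.1, 12.4, 45.0],
    })
    train(demo)
```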
Posted 2 weeks ago
4.0 years
6 - 10 Lacs
India
On-site
Location : On-site / Indore Experience : 4–8 years Responsibilities : Set up edge-side services for video analytics (driver monitoring, face recognition, ADAS). Integrate Jetson Orin/Nano or similar boards with camera, mic, speaker, and sensors. Optimize pipelines for multi-camera inference, power efficiency, and low latency. Enable OBD/CAN communication and diagnostics protocols. Deploy and monitor system updates (OTA, Docker, Mender, etc.). Requirements : Strong Python/C++ with edge AI toolkits (DeepStream, TensorRT, OpenCV, PyTorch). Experience with Jetson platform, CAN tools, and sensors (IMU, GPS, light, baro). Familiar with protocols like MQTT, gRPC, and secure edge-cloud sync. Job Type: Full-time Pay: ₹51,068.06 - ₹91,380.10 per month Benefits: Paid sick time Paid time off Schedule: Monday to Friday Work Location: In person Speak with the employer +91 7880122103
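As a hedged sketch of the edge-side loop this role implies (capture, accelerated inference, event publication), the Python below stubs the detector and the uplink; on a Jetson the detector would typically be a DeepStream or TensorRT engine and the publish step MQTT or gRPC, and the topic and frame-rate budget are assumptions.

```python
# Sketch only: the detector stub, topic name, and frame-rate budget are assumptions.
import json
import time

import cv2

def run_detector(frame) -> list:
    """Stand-in for accelerated inference (TensorRT/DeepStream); returns detections."""
    return []  # e.g. [{"label": "drowsy", "score": 0.93, "bbox": [x, y, w, h]}]

def publish(topic: str, payload: str) -> None:
    """Stand-in for MQTT or gRPC publication toward the cloud backend."""
    print(f"{topic} <- {payload}")

def main(camera_index: int = 0, fps_budget: float = 10.0) -> None:
    cap = cv2.VideoCapture(camera_index)
    period = 1.0 / fps_budget
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            events = run_detector(frame)
            if events:
                publish("vehicle/dms/events",
                        json.dumps({"ts": time.time(), "events": events}))
            time.sleep(period)  # crude pacing to respect the power/latency budget
    finally:
        cap.release()

if __name__ == "__main__":
    main()
```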
Posted 2 weeks ago