0 years
0 Lacs
Gurugram, Haryana, India
On-site
AI Engineering Intern (SDE) - Founding Tech Interns | Opportunity of a Lifetime

Location: Gurgaon (In-Office)
Duration: 3-6 months (flexible based on academic schedule)
Start Date: Immediate openings
Open to: Tier 1 college students graduating in 2025 and 2026
Compensation: Stipend + pre-placement offer potential

About Us - Darwix AI
Darwix AI is on a mission to solve a problem no one's cracked yet - building real-time, multilingual conversational intelligence for omnichannel enterprise sales teams using the power of Generative AI. We're building India's answer to Gong + Refract + Harvey AI - trained on 1M+ hours of sales conversations, and packed with industry-first features like live agent coaching, speech-to-text in 11 Indic languages, and autonomous sales enablement nudges. We've got global clients, insane velocity, and a team of ex-operators from IIMs, IITs, and top-tier AI labs.

Why This Internship is Unlike Anything Else
- Work on a once-in-a-decade problem - pushing the boundaries of GenAI + speech + edge compute.
- Ship real products used by enterprise teams across India & the Middle East.
- Experiment freely - train models, optimize pipelines, fine-tune LLMs, or build scrapers that work in 5 languages.
- Move fast, learn faster - direct mentorship from the founding engineering and AI team.
- Proof-of-excellence opportunity - stand out in every future job, B-school, or YC application.

What You'll Do
- Build and optimize core components of our real-time agent assist engine (Python + FastAPI + Kafka + Redis).
- Train, evaluate, and integrate Whisper, wav2vec, or custom STT models on diverse datasets.
- Work on LLM/RAG pipelines, prompt engineering, or vector DB integrations.
- Develop internal tools to analyze, visualize, and scale insights from conversations across languages.
- Optimize for latency, reliability, and multilingual accuracy in dynamic customer environments.

Who You Are
- Pursuing a B.Tech/B.E. or dual degree from IITs, IIITs, BITS, NIT Trichy/Warangal/Surathkal, or other top-tier institutes.
- Comfortable with Python, REST APIs, and database operations. Bonus: familiarity with FastAPI, LangChain, or Hugging Face.
- Passionate about AI/ML, especially NLP, GenAI, ASR, or multimodal systems.
- Always curious, always shipping, always pushing yourself beyond the brief.
- Looking for an internship that actually matters - not one where you're just fixing CSS.

Tech You'll Touch
- Python, FastAPI, Kafka, Redis, MongoDB, Postgres
- Whisper, Deepgram, Wav2Vec, Hugging Face Transformers
- OpenAI, Anthropic, Gemini APIs
- LangChain, FAISS, Pinecone, LlamaIndex
- Docker, GitHub Actions, Linux environments

What's in it for You
- A pre-placement offer for the best performers.
- A chance to be a founding engineer post-graduation.
- Exposure to the VC ecosystem, client demos, and GTM strategies.
- Stipend + access to the tools/courses/compute resources you need to thrive.

Ready to Build the Future?
If you're one of those rare folks who can combine deep tech with deep curiosity, this is your call to adventure. Join us in building something that's never been done before.

Apply now at careers@cur8.in
Attach your CV + GitHub/portfolio + a line on why this excites you. Bonus points if you share a project you've built or an AI problem you're obsessed with.

Darwix AI | GenAI for Revenue Teams | Built from India for the World
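To make the stack above concrete, here is a minimal sketch (illustrative only, not Darwix AI's actual code; the model size and endpoint path are assumptions) of a FastAPI endpoint that runs Whisper speech-to-text on an uploaded audio file:

```python
# Minimal Whisper-over-FastAPI sketch. Requires ffmpeg on the host and
# `pip install openai-whisper fastapi uvicorn`.
import tempfile

import whisper
from fastapi import FastAPI, UploadFile

app = FastAPI()
model = whisper.load_model("base")  # small model; trades accuracy for speed

@app.post("/transcribe")
async def transcribe(file: UploadFile):
    # Whisper's transcribe() wants a file path, so spool the upload to disk.
    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp:
        tmp.write(await file.read())
        path = tmp.name
    result = model.transcribe(path)  # language is auto-detected by default
    return {"text": result["text"], "language": result.get("language")}
```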
Posted 5 days ago
4.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
We are seeking a highly skilled Product Data Engineer with expertise in building, maintaining, and optimizing data pipelines using Python scripting. The ideal candidate will have experience working in a Linux environment, managing large-scale data ingestion, processing files in S3, and balancing disk space and warehouse storage efficiently. This role will be responsible for ensuring seamless data movement across systems while maintaining performance, scalability, and reliability.

Key Responsibilities:
- ETL Pipeline Development: Design, develop, and maintain efficient ETL workflows using Python to extract, transform, and load data into structured data warehouses.
- Data Pipeline Optimization: Monitor and optimize data pipeline performance, ensuring scalability and reliability in handling large data volumes.
- Linux Server Management: Work in a Linux-based environment, executing command-line operations, managing processes, and troubleshooting system performance issues.
- File Handling & Storage Management: Efficiently manage data files in Amazon S3, ensuring proper storage organization, retrieval, and archiving of data.
- Disk Space & Warehouse Balancing: Proactively monitor and manage disk space usage, preventing storage bottlenecks and ensuring warehouse efficiency.
- Error Handling & Logging: Implement robust error-handling mechanisms and logging systems to monitor data pipeline health.
- Automation & Scheduling: Automate ETL processes using cron jobs, Airflow, or other workflow orchestration tools.
- Data Quality & Validation: Ensure data integrity and consistency by implementing validation checks and reconciliation processes.
- Security & Compliance: Follow best practices in data security, access control, and compliance while handling sensitive data.
- Collaboration with Teams: Work closely with data engineers, analysts, and product teams to align data processing with business needs.

Skills Required:
- Proficiency in Python: Strong hands-on experience in writing Python scripts for ETL processes.
- Linux Expertise: Experience working with Linux servers, command-line operations, and system performance tuning.
- Cloud Storage Management: Hands-on experience with Amazon S3, including handling file storage, retrieval, and lifecycle policies.
- Data Pipeline Management: Experience with ETL frameworks, data pipeline automation, and workflow scheduling (e.g., Apache Airflow, Luigi, or Prefect).
- SQL & Database Handling: Strong SQL skills for data extraction, transformation, and loading into relational databases and data warehouses.
- Disk Space & Storage Optimization: Ability to manage disk space efficiently, balancing usage across different systems.
- Error Handling & Debugging: Strong problem-solving skills to troubleshoot ETL failures, debug logs, and resolve data inconsistencies.

Nice to Have:
- Experience with cloud data warehouses (e.g., Snowflake, Redshift, BigQuery).
- Knowledge of message queues (Kafka, RabbitMQ) for data streaming.
- Familiarity with containerization tools (Docker, Kubernetes) for deployment.
- Exposure to infrastructure automation tools (Terraform, Ansible).

Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, or a related field.
- 4+ years of experience in ETL development, data pipeline management, or backend data engineering.
- Strong analytical mindset and ability to handle large-scale data processing efficiently.
- Ability to work independently in a fast-paced, product-driven environment.
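As a rough illustration of the S3-to-warehouse loop this role describes, here is a minimal Python ETL sketch; the bucket, table, and column names are hypothetical, and a real pipeline would add the logging, retries, and validation the posting calls for:

```python
# Extract a CSV from S3, apply a basic quality filter, load it into Postgres.
# Assumes `pip install boto3 psycopg2-binary`; all names are hypothetical.
import csv
import io

import boto3
import psycopg2

s3 = boto3.client("s3")

def run_etl(bucket: str, key: str, dsn: str) -> int:
    # Extract: stream the raw file out of S3 without touching local disk.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(body)))

    # Transform: drop rows that fail a minimal validation check.
    clean = [r for r in rows if r.get("order_id") and r.get("amount")]

    # Load: batch-insert into the warehouse table inside one transaction.
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO orders (order_id, amount) VALUES (%s, %s)",
            [(r["order_id"], r["amount"]) for r in clean],
        )
    return len(clean)
```

In production a function like this would typically be triggered by cron or an Airflow DAG, per the scheduling tools the posting names.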
Posted 5 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Role Overview
We are looking for a highly skilled quantitative trading specialist to set up and execute our mid-frequency and low-frequency trading desk. The candidate will be responsible for end-to-end implementation, strategy development, execution systems, risk management, and infrastructure deployment.

Key Responsibilities
- Infrastructure Setup: Design, implement, and maintain robust trading infrastructure, including data servers, execution servers, and connectivity to brokers and exchanges.
- Real-Time Data Management: Develop and maintain real-time market data feeds via WebSocket APIs, managing latency and ensuring reliability.
- Strategy Development Framework: Establish best practices and tools for strategy development, backtesting, forward testing, and deployment.
- Execution System Development: Write robust execution code ensuring low latency, reliability, proper risk handling, and error management.
- Risk Management: Implement real-time risk monitoring systems and controls, including setting position limits, managing market risks, and compliance with regulatory requirements.
- Monitoring and Alerting: Set up comprehensive monitoring dashboards, alerting mechanisms, and logging systems using tools like Prometheus, Grafana, and the ELK stack.
- Team Coordination: Coordinate with quantitative researchers, developers, DevOps engineers, and analysts to ensure seamless operations.
- Documentation and Compliance: Ensure thorough documentation of systems, processes, and risk procedures, and maintain compliance with SEBI/NSE/BSE regulatory guidelines.

Required Skills And Qualifications
- Expert knowledge of quantitative trading strategies and market microstructure.
- Strong proficiency in Python; familiarity with C++/Rust for latency-critical components.
- Extensive experience in WebSocket API integration, real-time data handling (Kafka, Redis), and database management (PostgreSQL, TimescaleDB, MongoDB).
- Proficiency with CI/CD workflows, GitLab/GitHub, Docker, Kubernetes, and cloud services (AWS/GCP).
- Experience in implementing robust risk management frameworks and understanding regulatory compliance in Indian markets.
- Strong analytical skills, problem-solving abilities, and attention to detail.

Preferred Experience
- Prior experience establishing or managing a quant trading desk in mid- to low-frequency trading environments.
- Background in trading Indian equity, futures, and options markets.

Reporting
The role will report directly to senior management and will collaborate closely with the trading, technology, and risk management teams.

Skills: market microstructure, regulatory compliance, rust, docker, trading, real-time data handling, postgresql, kafka, cloud services, websocket api integration, gitlab/github, gcp, ci/cd workflows, redis, python, risk management frameworks, aws, mongodb, timescaledb, kubernetes, risk management, quantitative trading strategies, c++
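For a sense of the real-time data work described above, here is a minimal hedged sketch of a resilient WebSocket tick consumer in Python; the feed URL, subscribe message, and tick fields are placeholders rather than any specific broker's API:

```python
# Reconnecting WebSocket market-data consumer. `pip install websockets`.
import asyncio
import json

import websockets

FEED_URL = "wss://example-broker.invalid/marketdata"  # placeholder endpoint

def handle_tick(tick: dict) -> None:
    # Placeholder: a real desk would push into Kafka/Redis and feed
    # position and risk checks from here.
    print(tick.get("symbol"), tick.get("ltp"))

async def consume_ticks() -> None:
    # Feeds drop in practice, so wrap the session in a reconnect loop.
    while True:
        try:
            async with websockets.connect(FEED_URL, ping_interval=20) as ws:
                await ws.send(json.dumps({"action": "subscribe", "symbols": ["NIFTY"]}))
                async for raw in ws:
                    handle_tick(json.loads(raw))
        except (websockets.ConnectionClosed, OSError):
            await asyncio.sleep(1)  # brief backoff, then reconnect

if __name__ == "__main__":
    asyncio.run(consume_ticks())
```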
Posted 5 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About tsworks:
tsworks is a leading technology innovator, providing transformative products and services designed for the digital-first world. Our mission is to provide domain expertise, innovative solutions and thought leadership to drive exceptional user and customer experiences. Demonstrating this commitment, we have a proven track record of championing digital transformation for industries such as Banking, Travel and Hospitality, and Retail (including e-commerce and omnichannel), as well as Distribution and Supply Chain, delivering impactful solutions that drive efficiency and growth. We take pride in fostering a workplace where your skills, ideas, and attitude shape meaningful customer engagements.

About This Role:
tsworks Technologies India Private Limited is seeking driven and motivated Senior Data Engineers to join its Digital Services Team. You will get hands-on experience with projects employing industry-leading technologies. This would initially be focused on the operational readiness and maintenance of existing applications and would transition into a build-and-maintenance role in the long run.

Requirements
Position: Data Engineer II
Experience: 3 to 10+ Years
Location: Bangalore, India

Mandatory Required Qualification
- Strong proficiency in Azure services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Storage, etc.
- Expertise in DevOps and CI/CD implementation
- Good knowledge in SQL
- Excellent communication skills

In This Role, You Will
- Design, implement, and manage scalable and efficient data architecture on the Azure cloud platform.
- Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL) processes.
- Perform complex data transformations and processing using Azure Data Factory, Azure Databricks, Snowflake's data processing capabilities, or other relevant tools.
- Develop and maintain data models within Snowflake and related tools to support reporting, analytics, and business intelligence needs.
- Collaborate with cross-functional teams to understand data requirements and design appropriate data integration solutions.
- Integrate data from various sources, both internal and external, ensuring data quality and consistency.
- Ensure data models are designed for scalability, reusability, and flexibility.
- Implement data quality checks, validations, and monitoring processes to ensure data accuracy and integrity across Azure and Snowflake environments.
- Adhere to data governance standards and best practices to maintain data security and compliance.
- Handle performance optimization in ADF and Snowflake platforms.
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs and deliver actionable insights.
- Provide guidance and mentorship to junior team members to enhance their technical skills.
- Maintain comprehensive documentation for data pipelines, processes, and architecture within both Azure and Snowflake environments, including best practices, standards, and procedures.

Skills & Knowledge
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience in Information Technology, designing, developing and executing solutions.
- 3+ years of hands-on experience in designing and executing data solutions on Azure cloud platforms as a Data Engineer.
- Strong proficiency in Azure services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Storage, etc.
- Familiarity with the Snowflake data platform would be an added advantage.
- Hands-on experience in data modelling, batch and real-time pipelines, using Python, Java or JavaScript, and experience working with RESTful APIs are required.
- Expertise in DevOps and CI/CD implementation.
- Hands-on experience with SQL and NoSQL databases.
- Hands-on experience in data modelling, implementation, and management of OLTP and OLAP systems.
- Experience with data modelling concepts and practices.
- Familiarity with data quality, governance, and security best practices.
- Knowledge of big data technologies such as Hadoop, Spark, or Kafka.
- Familiarity with machine learning concepts and integration of ML pipelines into data workflows.
- Hands-on experience working in an Agile setting.
- Is self-driven, naturally curious, and able to adapt to a fast-paced work environment.
- Can articulate, create, and maintain technical and non-technical documentation.
- Public cloud certifications are desired.
Posted 5 days ago
9.0 - 14.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Greetings from TCS. TCS is hiring for AI Engineer.

Experience: 9-14 years
Relevant Experience: 9-14 years
Work Location: PAN India

Job Description - Must Have
- 9-14 years of IT experience.
- Strong programming skills in Python.
- Expertise in machine learning is a must: in-depth knowledge of various machine learning models and techniques, including deep learning, supervised and unsupervised learning, natural language processing, and reinforcement learning.
- Expertise in data analysis and visualization to extract insights from large datasets and present them visually.
- Good knowledge of data mining, statistical methods, data wrangling, and visualization tools like Power BI, Tableau and matplotlib.
- Hands-on skills in Data Manipulation Language.
- Expertise in various machine learning frameworks: TensorFlow, Scikit-Learn and PyTorch.

Good to Have
- Gen AI certification.
- Experience in containers (Docker), Kubernetes, Kafka (or another messaging platform), Apache Camel, RabbitMQ, ActiveMQ, storage / RDBMS and NoSQL databases, etc.
Posted 5 days ago
6.0 years
0 Lacs
India
Remote
Position Overview
We seek a talented frontend-first Full-Stack Engineer with 6+ years of experience to join our dynamic software consulting team. As a Full-Stack Engineer, you will be responsible for developing and implementing high-quality software solutions for our clients, working on both front-end and back-end aspects of projects.

Primary Skill Sets
- Frontend: React.js, TypeScript, TanStack Query, React render flow, Next.js
- Backend: Node.js, Express.js, NestJS, Sequelize ORM, Server-Sent Events (SSE), WebSockets, Event Emitter
- Stylesheets: MUI, Tailwind

Secondary Skill Sets
- Messaging Systems: Apache Kafka, AWS SQS, RabbitMQ
- Containerization & Orchestration: Docker, Kubernetes (bonus)
- Databases & Caching: Redis, Elasticsearch, MySQL, PostgreSQL

Bonus Experience
- Proven experience in building agentic UX to enable intelligent, goal-driven user interactions.
- Hands-on expertise in designing and implementing complex workflow management systems.
- Developed Business Process Management (BPM) platforms and dynamic application builders for configurable enterprise solutions.
Posted 5 days ago
1.0 years
0 Lacs
India
Remote
About the job

AI Fullstack Engineer (Mid-Level)
Location: Remote (Brasilia Time UTC-3 or Dubai Time GMT+3 preferred) | Team: Tech team for Product Engineering
Seniority: 1-3 yrs production experience | Stack: Python / TypeScript / React + LLMs

About Us - Reimagining Venture Building with AI at the Core
We're Mundos, the world's first AI-native venture builder architecting the next generation of intelligent, high-impact businesses. Unlike traditional incubators or studios, we embed advanced AI capabilities from day zero, transforming how ventures are conceived, built, and scaled globally.

We operate at the convergence of visionary strategy and technical execution: identifying opportunities not visible to the naked eye, then rapidly materializing them through our proprietary AI venture-building methodology and fast-paced engineering muscle. Working alongside forward-thinking partners across MENA and LATAM, we're not just implementing AI; we're fundamentally rethinking business models around AI's capabilities. While others talk about AI transformation, we're already shipping it: moving with startup velocity while maintaining institutional-grade discipline, quality, and seamless user experiences.

Our globally distributed team unites serial entrepreneurs, AI researchers, and seasoned operators who share one trait: the ability to translate cutting-edge AI capabilities into tangible business impact. We're seeking a versatile software engineer who thrives in high-velocity environments, ships production-ready code across the full stack, and is eager to help architect the future of AI-powered applications and grow within an AI-powered engineering team.

What You'll Build
- Architect & Build: Create robust RESTful/GraphQL APIs that power both internal tools and customer-facing applications in our venture portfolio
- AI Integration: Implement and optimize RAG pipelines, vector DB integrations, PostgreSQL, Redis, external APIs and LLM orchestration layers that deliver intelligence, not just responses
- Full-Stack Mastery: Own feature development from back-end logic to polished React UIs (TypeScript/JavaScript), balancing technical elegance with business velocity
- Team Collaboration: Work directly with our founding AI engineer and senior engineering leadership while mentoring junior talent - we grow together, and deliver fast iterations
- Agile Execution: Drive from sprint planning to deployment, with ownership across the entire development lifecycle.
  Write clean LLDs and participate in sprint planning, code reviews, and deployment automation.
- Infrastructure Evolution: Deploy and manage services using Docker and cloud infrastructure (AWS/GCP)

Your Toolkit
- Production Impact: 1-3 years building software that real users depend on (not just internships or side projects)
- Technical Foundation: Solid understanding of API design principles, database architecture and schema design, and error handling
- Data Expertise: Experience with PostgreSQL/MySQL and performance optimization with key-value stores like Redis
- Modern Architecture: Hands-on with event-driven systems, message queues (Kafka/RabbitMQ), or serverless functions
- AI Fluency: Working knowledge of LLM integration using both closed and open-source models - you understand prompts and parameters, not just APIs
- Frontend Proficiency: Comfort with React hooks, state management solutions (Redux/Zustand), and component libraries that deliver pixel-perfect experiences
- Cloud-Native Thinking: Familiarity with containerization, CI/CD pipelines, and infrastructure-as-code approaches (GCP, AWS or Azure)
- Ownership Mindset: You don't just build it - you own it, monitor it, and continuously improve and iterate on it

Bonus Points
- AI Engineering Experience: Built or contributed to RAG pipelines, AI agents, LangGraph implementations, or LlamaIndex applications
- AI-Adjacent Projects: Developed chatbots, NLP tools, data pipelines, recommendation systems, or other ML-enhanced applications
- Venture Building Spirit: Experience in fast-paced environments where you wear multiple hats and contribute beyond your job description

Why You'll Enjoy This Role
- Engineering Excellence: We prioritize robust, maintainable code over glossy demos; real engineering for real business impact
- True Ownership: You won't just be implementing specs; you'll help shape our technical direction and architecture
- Remote-First Culture: Work where and when you're most productive, with async-first communication and results-oriented leadership
- Velocity Without Chaos: We move quickly but deliberately, with proper planning and a sustainable pace

Why Join Mundos
- Venture Building DNA: Your code doesn't just ship features; it builds entire businesses that can scale independently
- Small team, huge canvas: your code lands in production within days, not quarters
- Global Impact: Work on ventures that span multiple markets, cultures, and business models
- Exponential Learning: Exposure to multiple ventures means accelerated growth across domains and technologies
- Founder-Level Opportunities: Early team members grow into leadership roles as our ventures mature
- Competitive Compensation: USD salary, equity (ESOPs) in our venture ecosystem, flexible remote work, and a clearly defined growth trajectory

Note: This is a contract-based opportunity that can be extended to a full-time hire.

How to Apply
Send your resume (required), a thoughtful cover letter (required) and GitHub profile (required) to anish.yog10@gmail.com. Tell us in two sentences about a feature you shipped that made users smile (required). Incomplete applications will not be considered. We value attention to detail as much as technical skill.

Join us in building the next generation of AI-native ventures - where technical excellence meets entrepreneurial vision to solve meaningful problems at global scale.
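The "AI Integration" bullet above centers on RAG pipelines and vector DB integrations. The following is a minimal sketch of the retrieval step only; the documents, model choice, and query are illustrative, and the generation step (stuffing retrieved passages into an LLM prompt) is omitted:

```python
# Embed documents, index them, and retrieve the best matches for a question.
# Assumes `pip install faiss-cpu sentence-transformers`.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Refunds are processed within 5 business days.",
    "Premium plans include priority support.",
    "Invoices can be downloaded from the billing page.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode(docs, normalize_embeddings=True)

# Inner product over normalized vectors is cosine similarity.
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(np.asarray(vectors, dtype="float32"))

query = model.encode(["how long do refunds take?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=2)

# The top passages would become the context block of the LLM prompt.
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.2f}  {docs[i]}")
```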
Posted 5 days ago
10.0 years
0 Lacs
India
Remote
Job Title: Senior Backend Engineer - Python & Microservices
Location: Remote
Experience Required: 8-10+ years

About the Role:
We're looking for a Senior Backend Engineer (Python & Microservices) to join a high-impact engineering team focused on building scalable internal tools and enterprise SaaS platforms. You'll play a key role in designing cloud-native services, leading microservices architecture, and collaborating closely with cross-functional teams in a fully remote environment.

Responsibilities:
- Design and build scalable microservices using Python (Flask, FastAPI, Django)
- Develop production-grade RESTful APIs and background job systems
- Architect modular systems and drive microservice decomposition
- Manage SQL & NoSQL data models (PostgreSQL, MongoDB, DynamoDB, ClickHouse)
- Implement distributed data pipelines using Kafka, RabbitMQ, and SQS
- Apply best practices in rate limiting, security, performance optimisation, logging, and observability (Grafana, Datadog, CloudWatch)
- Deploy services in cloud environments (AWS preferred, Azure/GCP acceptable) using Docker, Kubernetes, and EKS
- Contribute to CI/CD and Infrastructure as Code (Jenkins, Terraform, GitHub Actions)

Requirements:
- 8-10+ years of hands-on backend development experience
- Strong proficiency in Python (Flask, FastAPI, Django, etc.)
- Solid experience with microservices and containerised environments (Docker, Kubernetes, EKS)
- Expertise in REST API design, rate limiting, and performance tuning
- Familiarity with SQL & NoSQL (PostgreSQL, MongoDB, DynamoDB, ClickHouse)
- Experience with cloud platforms (AWS preferred; Azure/GCP also considered)
- CI/CD and IaC knowledge (GitHub Actions, Jenkins, Terraform)
- Exposure to distributed systems and event-based architectures (Kafka, SQS)
- Excellent written and verbal communication skills

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science or a related field
- Certifications in Cloud Architecture or System Design
- Experience integrating with tools like Zendesk, Openfire, or similar chat/ticketing platforms
Posted 5 days ago
3.0 years
0 Lacs
India
Remote
WSO2 Engineers

Work Location: WFH (preferred locations: Bengaluru, Chennai, Hyderabad). Candidates can work from home but should be based in one of these locations, and should be available to come to the office there if required.

About The Role
We are seeking skilled and motivated WSO2 Engineers to join our Integration Team. You will be responsible for designing, developing, deploying, and supporting enterprise integration solutions using WSO2's suite of products, including API Manager, Enterprise Integrator, and Identity Server. This is a hands-on technical role with the opportunity to work on high-impact projects across various business domains.

Key Responsibilities
- Develop and maintain integration solutions using WSO2 API Manager, Enterprise Integrator (ESB), and Identity Server.
- Build and expose REST/SOAP APIs and integrate them with backend services and third-party systems.
- Work closely with architects and business analysts to understand integration requirements and deliver effective solutions.
- Configure security policies (OAuth2, JWT, SAML, etc.) on WSO2 components.
- Troubleshoot and resolve issues related to performance, functionality, and connectivity in integration flows.
- Contribute to automation, CI/CD pipelines, and monitoring for integration components.
- Write technical documentation and maintain clear records of configuration and development artifacts.
- Stay current with WSO2 product updates and integration trends.

Requirements

Must-Have:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of hands-on experience with WSO2 API Manager, Enterprise Integrator, and/or Identity Server.
- Strong understanding of REST/SOAP services, XML, JSON, XSLT, and integration protocols (HTTP, JMS, FTP, etc.).
- Experience with OAuth2, JWT, and basic API security concepts.
- Familiarity with middleware, message brokering, and transformation logic.
- Strong debugging and problem-solving skills.
- Experience with tools like Git, Maven, and Jenkins.

Nice-to-Have
- Experience with Docker, Kubernetes, and cloud platforms (AWS, Azure, GCP).
- Knowledge of event-driven architecture (Kafka, RabbitMQ).
- Exposure to DevOps practices and scripting (Shell, Python).
- WSO2 certification (e.g., WSO2 Certified Integration Developer).

Skills: maven, rabbitmq, integration, enterprise integrator, oauth2, enterprise, docker, git, xml, jms, kafka, jenkins, wso2 api manager, jwt, rest, ftp, http, gcp, soap, json, azure, shell, identity server, python, aws, api, components, xslt, kubernetes, wso2
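WSO2 enforces OAuth2/JWT policies through configuration rather than custom code, but for intuition, here is a minimal Python sketch of the token validation such a policy performs; the secret, algorithm, and audience are placeholders:

```python
# JWT validation sketch using PyJWT (`pip install PyJWT`).
import jwt

SECRET = "replace-me"  # in practice, a public key fetched from the issuer's JWKS

def validate(token: str) -> dict:
    # Raises jwt.InvalidTokenError subclasses on bad signature, expiry,
    # or audience, so callers can map failures to 401 responses.
    return jwt.decode(
        token,
        SECRET,
        algorithms=["HS256"],
        audience="api://orders",          # hypothetical audience claim
        options={"require": ["exp", "aud"]},
    )
```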
Posted 5 days ago
3.0 years
0 Lacs
India
Remote
About The Company
Teikametrics' AI-powered Marketplace Optimization platform helps sellers and brand owners maximize their potential on the world's most valuable marketplaces. Founded in 2015, Teikametrics uses proprietary AI technology to maximize profitability in a simple SaaS interface. Teikametrics optimizes more than $8 billion in GMV across thousands of sellers around the world, with brands including Munchkin, mDesign, Clarks, Nutribullet, Conair, Nutrafol, and Solo Stove trusting Teikametrics to unlock the full potential of their selling and advertising on Amazon, Walmart and other marketplaces. Teikametrics continues to grow exponentially, with teams spanning 3+ countries. We are financially strong, continuously meeting or exceeding revenue targets, and we invest heavily in strengthening the foundation of our organization.

About The Role
Teikametrics is seeking a Backend Software Engineer to design, develop, and maintain tools that empower users to process and visualize analytical data at scale. This role focuses on building high-performance, scalable systems using a modern tech stack, including Java, Spring Boot, Kafka, Postgres, and AWS services. The position offers a unique opportunity to work on cutting-edge cloud-based solutions that drive actionable insights for businesses optimizing their operations.

How You'll Spend Your Time
- Develop scalable software solutions that align with customer needs, enhancing performance, functionality, and adaptability to growth in user demand, data, and feature expansion.
- Continuously monitor and optimize application performance, addressing any potential bottlenecks or inefficiencies.
- Implement data validation and quality checks to ensure accuracy and consistency.
- Collaborate with product managers, UX designers, and other stakeholders to understand product requirements and deliver solutions that meet or exceed expectations.
- Document technical designs, workflows, and best practices to facilitate knowledge sharing and maintain system documentation.

Who You Are
- 3-5 years as a software developer, specifically in web applications.
- Hands-on experience with HTTP, web services, and the overall web application development lifecycle.
- Proficiency in Java with the Spring/Spring Boot framework.
- Competency with SQL and RDBMS for efficient database interaction. Exposure to NoSQL databases is preferred.
- Experience with Docker and Kubernetes, including Dockerization (nice to have).
- Ability to set up reusable, testable and performant components, allowing for rapid development and well-organized code.
- Strong design sensibilities and informed opinions on usability and design for web applications.
- Passion for working with a small team of world-class developers, solving challenging problems.
- A desire to work in a collaborative environment focusing on continuous learning; participating in mentoring, tech talks, documentation, code review, and some pair programming.

WE'VE GOT YOU COVERED
- Every Teikametrics employee is eligible for company equity
- Remote work: flexibility to work from home or from our offices + remote working allowance
- Broadband reimbursement
- Group medical insurance: coverage of INR 7,50,000 per annum for a family
- Crèche benefit
- Training and development allowance

Press Reference About Teika
Teikametrics' Marketplace Optimization Platform, Flywheel 2.0, Adds AI-Powered Automation to Maximize Advertising Performance Across Marketplaces

The job description is representative of typical duties and responsibilities for the position and is not all-inclusive.
Other duties and responsibilities may be assigned in accordance with business needs. We are proud to be an equal opportunity employer. A background check will be conducted after a conditional offer of employment is extended.
Posted 5 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Simpleenergy
Simpleenergy specializes in the manufacture of smart electric two-wheelers. We are a team of 300+ engineers coming together to make smart, supercharging, and affordable two-wheelers. The company was founded in 2019 and is based in Bangalore, India. Our mission is to build the future of mobility that is electric and connected. We at Simple Energy are working towards accelerating this by making two-wheelers more accessible, affordable, secure and comfortable, and we embrace the responsibility to lead the change that will make our world better, safer and more equitable for all.

Job description: Data Engineer
Location: Yelahanka, Bangalore

About The Gig
We're on the lookout for a Data Engineer who loves building scalable data pipelines and can dance with Kafka and Flink like they're on their playlist. If Spark is your old buddy, even better - but it's not a deal-breaker.

What You'll Do
- Design, build, and maintain real-time and batch data pipelines using Apache Kafka and Apache Flink.
- Ensure high-throughput, low-latency, and fault-tolerant data ingestion for telemetry, analytics, and system monitoring.
- Work closely with backend and product teams to define event contracts and data models.
- Maintain schema consistency and versioning across high-volume event streams.
- Optimize Flink jobs for memory, throughput, and latency.
- If you know a little Spark, help out with batch processing and offline analytics too (we won't complain).
- Ensure data quality, lineage, and observability for everything that flows through your pipelines.

What You Bring
- 3+ years of experience as a data/backend engineer working with real-time or streaming systems.
- Hands-on experience with Kafka (topics, partitions, consumers, etc.).
- Experience writing production-grade Flink jobs (DataStream API preferred).
- Good fundamentals in distributed systems, partitioning strategies, and stateful processing.
- Comfortable with any one programming language: Java, Scala, or Python.
- Basic working knowledge of Spark is a plus (optional, but nice to have).
- Comfortable working in a cloud-native environment (GCP or AWS).

Bonus Points
- Experience with Protobuf/Avro schemas and a schema registry.
- Exposure to time-series data (we live and breathe CAN signals).
- Interest in vehicle data, IoT, or edge computing.

Why Simple Energy?
- You'll build pipelines that move billions of records a day from electric vehicles across India.
- You'll be part of a lean, fast-moving team where decisions happen fast and learning is constant.
- Your code will directly impact how we track, monitor, and improve our vehicles on the road.
- Zero fluff. Full impact.

Skills: scala, cloud-native environments, time-series data, data quality, java, avro, batch data pipelines, pipelines, apache flink, data ingestion, flink, kafka, data lineage, distributed systems, gcp, python, real-time data pipelines, aws, data, protobuf, apache kafka
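To ground the Kafka side of the pipelines described above, here is a minimal consumer sketch in Python; the topic, group id, and payload fields are assumptions, and the stateful stream processing would live in a Flink DataStream job rather than in this consumer:

```python
# Telemetry consumer sketch using kafka-python (`pip install kafka-python`).
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "vehicle-telemetry",                     # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="telemetry-ingest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=False,  # commit only after the event lands downstream
)

for msg in consumer:
    event = msg.value
    # Partition-aware consumption; dedup/windowing belongs in the Flink job.
    print(msg.partition, msg.offset, event.get("vehicle_id"), event.get("speed"))
    consumer.commit()
```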
Posted 5 days ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework using Python or Scala and Big Data technologies, for various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop / AWS ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on Cloud Data Platforms on AWS.
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers like Kafka.

Preferred Technical And Professional Experience
- Certification in AWS and Databricks, or Cloudera Spark Certified developers.
Posted 5 days ago
2.0 years
0 Lacs
Panchkula, Haryana
On-site
Job Title: Java Developer
Experience: Minimum 2 years
Location: On-site
Employment Type: Full-time

Job Overview:
We are seeking a skilled Java Developer to join our team in building scalable and high-performance applications for the fleet management industry. The ideal candidate should have at least 2 years of experience in Java development, with expertise in Spring Boot, microservices, Kafka Streams, and AWS.

Key Responsibilities:
- Develop, deploy, and maintain microservices using Spring Boot.
- Design and implement Kafka Streams for real-time data processing.
- Optimize and manage PostgreSQL databases.
- Work with AWS services for cloud-based deployments and scalability.
- Collaborate with cross-functional teams to design, develop, and test features.
- Ensure system performance, reliability, and security best practices.
- Troubleshoot and resolve technical issues efficiently.

Required Skills & Qualifications:
- 2+ years of experience in Java development.
- Strong knowledge of Spring Boot and microservices architecture.
- Experience with Kafka Streams and real-time data processing.
- Proficiency in PostgreSQL, including writing optimized queries.
- Hands-on experience with AWS services (EC2, S3, Lambda, etc.).
- Familiarity with CI/CD pipelines and containerization (Docker/Kubernetes).
- Strong problem-solving and debugging skills.
- Good understanding of RESTful APIs and event-driven architectures.

Job Type: Full-time
Pay: ₹30,000.00 - ₹80,000.00 per month
Benefits: Cell phone reimbursement, paid sick time
Schedule: Day shift, Monday to Friday
Supplemental Pay: Yearly bonus
Experience: 5G: 2 years (Preferred)
Location: Panchkula, Haryana (Preferred)
Work Location: In person
Posted 5 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: AI Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2-6 Years
Level: Senior Level

About Darwix AI
Darwix AI is one of India's fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction - across voice, video, and chat - in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry.
This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

Role Overview
As the AI Engineer, you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously.

Key Responsibilities

1. AI Architecture & Model Development
- Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval.
- Build, fine-tune, and deploy STT models (Whisper, Wav2Vec 2.0) and diarization systems for speaker separation.
- Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models.

2. Real-Time Voice AI System Development
- Design low-latency pipelines for capturing and processing audio in real time across multilingual environments.
- Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching.
- Develop asynchronous, event-driven architectures for voice processing and decision-making.

3. RAG & Knowledge Graph Pipelines
- Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases.
- Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows.
- Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings).

4. Fine-Tuning & Prompt Engineering
- Fine-tune LLMs and foundational models using RLHF, SFT, and PEFT (e.g., LoRA) as needed.
- Optimize prompts for summarization, categorization, tone analysis, objection handling, etc.
- Perform few-shot and zero-shot evaluations for quality benchmarking.

5. Pipeline Optimization & MLOps
- Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions.
- Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation.
- Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features.

6. Team Leadership & Cross-Functional Collaboration
- Lead, mentor, and grow a high-performing AI engineering team.
- Collaborate with backend, frontend, and product teams to build scalable production systems.
- Participate in architectural and design decisions across AI, backend, and data workflows.

Key Technologies & Tools
- Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, Hugging Face Transformers
- Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
- Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
- LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
- DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
- Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
- Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

Requirements & Qualifications

Experience
- 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies.
- Proven track record of production deployment of ASR, STT, NLP, or GenAI models.
- Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.

Educational Background
- Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top-100 universities).

Technical Skills
- Strong coding experience in Python and familiarity with FastAPI/Django.
- Understanding of distributed architectures, memory management, and latency optimization.
- Familiarity with transformer-based model architectures, training techniques, and data pipeline design.

Bonus Experience
- Worked on multilingual speech recognition and translation.
- Experience deploying AI models on edge devices or browsers.
- Built or contributed to open-source ML/NLP projects.
- Published papers or patents in voice, NLP, or deep learning domains.

What Success Looks Like in 6 Months
- Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client.
- Deliver a high-accuracy nudge generation pipeline using RAG and summarization models.
- Build an in-house knowledge indexing + vector DB framework integrated into the product.
- Mentor 2-3 AI engineers and own execution across multiple modules.
- Achieve <1 sec latency on the real-time voice-to-nudge pipeline from capture to recommendation.

What We Offer
- Compensation: Competitive fixed salary + equity + performance-based bonuses
- Impact: Ownership of key AI modules powering thousands of live enterprise conversations
- Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
- Culture: High-trust, outcome-first environment that celebrates execution and learning
- Mentorship: Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers
- Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months

This Role is NOT for Everyone
- If you're looking for a slow, abstract research role - this is NOT for you.
- If you're used to months of ideation before shipping - you won't enjoy our speed.
- If you're not comfortable being hands-on and diving into scrappy builds - you may struggle.
- But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.
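As one concrete example of the speaker-diarization work named in the responsibilities, here is a minimal sketch using pyannote.audio; the model version and file name are illustrative, and pyannote's pretrained pipelines require a Hugging Face access token:

```python
# Speaker diarization sketch (`pip install pyannote.audio`).
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="hf_...",  # placeholder Hugging Face token
)

diarization = pipeline("call_recording.wav")  # hypothetical file

# Each turn says who spoke when - the basis for separating agent from
# customer speech before transcription and downstream analysis.
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{speaker}: {turn.start:.1f}s - {turn.end:.1f}s")
```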
How to Apply
Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to careers@cur8.in.
Subject line: Application - AI Engineer - [Your Name]

Include links to:
- Any relevant open-source contributions
- LLM/STT models you've fine-tuned or deployed
- RAG pipelines you've worked on

Final Thought
This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform - from India, for the world.
Posted 5 days ago
4.0 - 7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role Expectations:
- Design, develop, and execute automated tests to ensure product quality in digital transformation initiatives.
- Collaborate with developers and business stakeholders to understand project requirements and define test strategies.
- Implement API testing using Mockito, WireMock, and stubs for effective validation of integrations.
- Utilize Kafka and MQ to test and monitor real-time data streaming scenarios.
- Perform automation testing using RestAssured, Selenium, and TestNG to ensure smooth delivery of applications.
- Leverage Splunk and AppDynamics for real-time monitoring, identifying bottlenecks, and diagnosing application issues.
- Create and maintain continuous integration/continuous deployment (CI/CD) pipelines using Gradle and Docker.
- Conduct performance testing using tools like Gatling and JMeter to evaluate application performance and scalability.
- Participate in test management and defect management processes to track progress and issues effectively.
- Work closely with onshore teams and provide insights to enhance test coverage and overall quality.

Qualifications:
- 4-7 years of relevant experience in QA automation and Java.
- Programming: Strong experience with Java 8 and above, including a deep understanding of the Streams API.
- Frameworks: Proficiency in Spring Boot and JUnit for developing and testing robust applications.
- API Testing: Advanced knowledge of RestAssured and Selenium for API and UI automation. Candidates must demonstrate hands-on expertise.
- CI/CD Tools: Solid understanding of Jenkins for continuous integration and deployment.
- Cloud Platforms: Working knowledge of AWS for cloud testing and deployment.
- Monitoring Tools: Familiarity with Splunk and AppDynamics for performance monitoring and troubleshooting.
- Defect Management: Practical experience with test management tools and defect tracking.
- Build & Deployment: Experience with Gradle for build automation and Docker for application containerization.
- SQL: Strong proficiency in SQL, including query writing and database operations for validating test results.
- Domain Knowledge: Prior experience in the Payments domain with a good understanding of domain-specific workflows.

Nice to Have:
- Data Streaming Tools: Experience with Kafka (including basic queries and architecture) or MQ for data streaming testing.
- Financial services or payments domain experience will be preferred.
- Frameworks: Experience with Apache Camel for message-based application integration.
- Performance Testing: Experience with Gatling and JMeter for conducting load and performance testing.
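The automation stack named above is Java-based (RestAssured/TestNG); purely to illustrate the shape of such an automated API check, here is a minimal sketch in Python with pytest and requests, where the endpoint and expected payload are hypothetical:

```python
# API contract check sketch (`pip install pytest requests`); run with `pytest`.
import requests

BASE = "https://api.example.invalid"  # placeholder service under test

def test_create_payment_returns_201():
    resp = requests.post(
        f"{BASE}/payments",
        json={"amount": 100, "currency": "INR"},
        timeout=5,
    )
    assert resp.status_code == 201
    assert resp.json()["status"] == "PENDING"  # assumed response contract
```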
Posted 5 days ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position Overview:
ShyftLabs is seeking an experienced Databricks Architect to lead the design, development, and optimization of big data solutions using the Databricks Unified Analytics Platform. This role requires deep expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to architect scalable, high-performance data platforms and drive data-driven innovation.

ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to accelerate business growth across various industries by focusing on creating value through innovation.

Job Responsibilities
- Architect, design, and optimize big data and AI/ML solutions on the Databricks platform.
- Develop and implement highly scalable ETL pipelines for processing large datasets.
- Lead the adoption of Apache Spark for distributed data processing and real-time analytics.
- Define and enforce data governance, security policies, and compliance standards.
- Optimize data lakehouse architectures for performance, scalability, and cost-efficiency.
- Collaborate with data scientists, analysts, and engineers to enable AI/ML-driven insights.
- Oversee and troubleshoot Databricks clusters, jobs, and performance bottlenecks.
- Automate data workflows using CI/CD pipelines and infrastructure-as-code practices.
- Ensure data integrity, quality, and reliability across all data processes.

Basic Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 10+ years of hands-on experience in data engineering, with at least 5+ years as a Databricks architect working with Apache Spark.
- Proficiency in SQL, Python, or Scala for data processing and analytics.
- Extensive experience with cloud platforms (AWS, Azure, or GCP) for data engineering.
- Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture.
- Hands-on experience with CI/CD tools and DevOps best practices.
- Familiarity with data security, compliance, and governance best practices.
- Strong problem-solving and analytical skills in a fast-paced environment.

Preferred Qualifications:
- Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer).
- Hands-on experience with MLflow, Feature Store, or Databricks SQL.
- Exposure to Kubernetes, Docker, and Terraform.
- Experience with streaming data architectures (Kafka, Kinesis, etc.).
- Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker).
- Prior experience working with retail, e-commerce, or ad-tech data platforms.

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
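As a small illustration of the Spark and Delta Lake work this role architects, here is a minimal PySpark batch job; the paths, schema, and aggregation are assumptions rather than ShyftLabs code:

```python
# Daily revenue rollup: read raw Parquet, aggregate, write a Delta table.
# Delta output assumes a Databricks (or delta-enabled) Spark runtime.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily").getOrCreate()

orders = spark.read.parquet("s3://lake/raw/orders/")  # placeholder path

daily = (
    orders.where(F.col("status") == "COMPLETE")
    .groupBy(F.to_date("created_at").alias("day"))
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

daily.write.mode("overwrite").format("delta").save("s3://lake/gold/daily_revenue/")
```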
Posted 5 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Teamwork makes the stream work.

Roku is changing how the world watches TV
Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers.

From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the team
The mission of Roku's Data Engineering team is to develop a world-class big data platform so that internal and external customers can leverage data to grow their businesses. Data Engineering works closely with business partners and Engineering teams to collect metrics on existing and new initiatives that are critical to business success. As a Senior Data Engineer working on device metrics, you will design data models and develop scalable data pipelines to capture different business metrics across different Roku devices.

About the role
Roku pioneered streaming to the TV. We connect users to the streaming content they love, enable content publishers to build and monetise large audiences, and provide advertisers with unique capabilities to engage consumers. Roku streaming players and Roku TV™ models are available around the world through direct retail sales and licensing arrangements with TV brands and pay-TV operators. With tens of millions of players sold across many countries, thousands of streaming channels, and billions of hours watched over the platform, building a scalable, highly available, fault-tolerant big data platform is critical for our success. This role is based in Bangalore, India and requires hybrid working, with 3 days in the office.

What you'll be doing
- Build highly scalable, available, fault-tolerant distributed data processing systems (batch and streaming systems) processing over 10s of terabytes of data ingested every day and a petabyte-sized data warehouse
- Build quality data solutions and refine existing diverse datasets to simplified data models encouraging self-service
- Build data pipelines that optimise on data quality and are resilient to poor-quality data sources
- Own the data mapping, business logic, transformations and data quality
- Low-level systems debugging, performance measurement & optimization on large production clusters
- Participate in architecture discussions, influence product roadmap, and take ownership and responsibility over new projects
- Maintain and support existing platforms and evolve to newer technology stacks and architectures

We're excited if you have
- Extensive SQL skills
- Proficiency in at least one scripting language; Python is required
- Experience in big data technologies like HDFS, YARN, Map-Reduce, Hive, Kafka, Spark, Airflow, Presto, etc.
- Proficiency in data modeling, including designing, implementing, and optimizing conceptual, logical, and physical data models to support scalable and efficient data architectures.
- Experience with AWS, GCP, Looker is a plus
- Collaborate with cross-functional teams such as developers, analysts, and operations to execute deliverables
- 5+ years professional experience as a data or software engineer
- BS in Computer Science; MS in Computer Science preferred

Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV.

We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.

By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
Posted 5 days ago
16.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Company
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

About the Role
Development of new software functionality in Java, along with support and maintenance of existing software. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Responsibilities
- Development of new software functionality in Java
- Support and maintenance of existing software
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives

Qualifications
- B.E./B.Tech/MCA/M.Tech/Bachelor's degree in Computer Science, Computer Engineering, or another related discipline (16+ years of education; correspondence courses are not relevant)
- 8+ years of experience working with programming and scripting languages such as Java, NodeJS, React or Angular
- 8+ years of experience with APIs / microservices development
- 4+ years of experience with container technologies (Kubernetes, Docker, etc.)
- Experience with event streaming platforms such as Kafka
- Experience with cloud platforms (AWS, Azure), Azure DevOps, GitHub Actions
- Experience with DevOps, CI/CD
- Experience with automated testing frameworks
- Experience with RDBMS, Snowflake, Databricks, SQL Server
- Development experience utilizing Agile

Required Skills
- Java, NodeJS, React
- APIs / microservices development
- Container technologies (Kubernetes, Docker, etc.)
- Event streaming platforms (Kafka)
- Cloud platforms (AWS, Azure)
- DevOps, CI/CD
- Automated testing frameworks
- RDBMS, Snowflake, Databricks, SQL Server

Preferred Skills
- Full-stack Java development experience
- Experience with monitoring tools like Splunk

Pay range and compensation package
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life.
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes β an enterprise priority reflected in our mission. Equal Opportunity Statement We are committed to diversity and inclusivity. ``` Show more Show less
Posted 5 days ago
5.0 years
0 Lacs
India
Remote
Company Description
CodeChavo is a global digital transformation solutions provider. The company partners with clients from design to operation, embedding innovation and agility into organizations. CodeChavo works closely with top technology companies to make a real impact through transformation, helping companies outsource their digital projects and build quality tech teams.
Role Description
This is a full-time remote role for a Software Architect. The Software Architect will be responsible for designing and developing high-level software solutions, defining software architecture strategies, and ensuring that software meets all requirements of quality, scalability, and performance. Daily tasks include collaborating with development teams, reviewing code and design patterns, providing technical leadership, and managing the full software development life cycle.
Experience: 5+ years
Location: Remote; may have to visit the client office in Bengaluru initially for training.
Qualifications
Software architecture and software design skills
Experience in code reviews and code optimization
Skilled in Python Django, Java, or related frameworks
Skilled in Kafka and Kubernetes (see the consumer sketch after this listing)
Understanding and application of design patterns
Strong problem-solving skills and ability to work collaboratively
Excellent written and verbal communication skills
Experience in the tech industry is a plus
Bachelor's degree in Computer Science, Engineering, or a related field
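For illustration of the Kafka skills this role calls for, here is a minimal Python consumer sketch using the confluent-kafka client. The broker address, topic name, and consumer group are hypothetical placeholders, not details from the posting.

```python
# A minimal Kafka consumer sketch (confluent-kafka client).
# Broker, topic, and group id below are hypothetical placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # hypothetical broker
    "group.id": "demo-group",               # hypothetical consumer group
    "auto.offset.reset": "earliest",        # start from the beginning if no committed offset
})
consumer.subscribe(["demo-topic"])          # hypothetical topic

try:
    while True:
        msg = consumer.poll(1.0)            # wait up to 1s for a message
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        print(msg.value().decode("utf-8"))  # process the payload
finally:
    consumer.close()                        # commit offsets and leave the group
```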
Posted 5 days ago
15.0 years
0 Lacs
India
On-site
Qualifications And Skills
Bring over 15 years of hands-on experience in .NET development using C# and .NET Core.
Demonstrate strong expertise in architectural design patterns and best practices.
Leverage your experience in Azure cloud-native development, including Azure Functions, App Services, Kubernetes (AKS), Logic Apps, and API Management.
Showcase your skills in microservices architecture and containerization (Docker, Kubernetes).
Exhibit strong database expertise in SQL Server and NoSQL databases (CosmosDB).
Prove proficiency in DevOps practices, including CI/CD pipelines (Azure DevOps, Terraform, GitHub Actions).
Bring experience in enterprise integration patterns, event-driven architecture, and messaging systems (Kafka, RabbitMQ).
Demonstrate excellent problem-solving, analytical, and debugging skills.
Communicate effectively with stakeholders across technical and business teams.
Thrive in a fast-paced, agile environment.
Possess a passion for technology innovation and continuous learning.
Preferred Qualifications
Experience in large-scale enterprise applications.
Knowledge of AI/ML integration in .NET solutions.
Certifications in Microsoft Azure.
NVP Certified
Posted 5 days ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Hiring for Java Backend Developer
7+ years of Java development experience
3+ years of Spring Boot experience
3+ years of experience in API design and development
2+ years of experience in CI/CD automation
2+ years of experience in Axon/Kafka/RabbitMQ
2+ years of experience with MySQL/PostgreSQL/Oracle and MongoDB/Cassandra databases
At least 1 year of experience in PCF/AWS/Azure
Strong focus on automation of processes, testing, and data validations
Actively troubleshoot obstacles for team projects related to code deployments, database platforms, etc.
Exceptional communication practices and ability to network across multiple geographic team locations
Execute and automate test cases, and perform bug tracking
Posted 5 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Develop and implement AI and machine learning strategies for several healthcare domains
Collaborate with cross-functional teams to identify and prioritize AI and machine learning initiatives
Develop and run pipelines for data ingress and model output egress
Develop and run scripts for ML model inference (a minimal sketch follows this listing)
Design, implement, and maintain CI/CD pipelines for MLOps and DevOps functions
Identify technical problems and develop software updates and fixes
Develop scripts or tools to automate repetitive tasks
Automate the provisioning and configuration of infrastructure resources
Provide guidance on the best use of specific tools or technologies to achieve desired results
Create documentation for infrastructure design and deployment procedures
Utilize AI/ML frameworks and tools such as MLflow, TensorFlow, PyTorch, Keras, Scikit-learn, etc.
Lead and manage AI/ML teams and projects from ideation to delivery and evaluation
Apply expertise in various AI/ML techniques, including deep learning, NLP, computer vision, recommender systems, reinforcement learning, and large language models
Communicate complex AI/ML concepts and results to technical and non-technical audiences effectively
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Qualifications
Undergraduate degree or equivalent experience
5+ years of experience working with Python and AI
4+ years of experience with SQL Server, MySQL, Oracle, or another comparable RDBMS
2+ years of experience with APIs/microservices
2+ years of experience with CI/CD tools like Jenkins and GitHub Actions
1+ years of experience with code scanning and security tools for code vulnerability, code quality, secret scanning, penetration testing, and threat modeling
Preferred Qualifications
Bachelor's degree in computer science or a related field
Experience with unit testing frameworks
Experience with version control systems like Git/GitHub
Proven ability to do a POC on an emerging tech stack
Experience with Linux or Unix platforms
Experience with RabbitMQ
Experience with Redis
Proven ability to independently troubleshoot problems and document RCAs
Experience with event streaming platforms such as Kafka
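As a concrete illustration of the "scripts for ML model inference" duty above, here is a minimal batch-inference sketch in Python using scikit-learn conventions; the model file, feature names, and values are hypothetical, not taken from the posting.

```python
# A minimal batch-inference sketch; model path and feature columns are hypothetical.
import joblib
import pandas as pd

def run_inference(model_path: str, batch: pd.DataFrame) -> pd.Series:
    """Load a previously trained scikit-learn model and score a feature batch."""
    model = joblib.load(model_path)  # deserialize the persisted model
    return pd.Series(model.predict(batch), index=batch.index, name="prediction")

if __name__ == "__main__":
    # Hypothetical feature batch; a real pipeline would read this from a database or queue.
    features = pd.DataFrame({"age": [42, 65], "visit_count": [3, 7]})
    print(run_inference("model.joblib", features))
```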
Posted 5 days ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
About Qualitrol
Qualitrol is a leader in providing condition monitoring solutions for the electricity industry, ensuring reliability and efficiency in high-voltage electrical assets. We leverage cutting-edge technology, data analytics, and AI to transform how utilities manage their assets and make data-driven decisions.
Role Summary
We are looking for a highly skilled Senior Data Engineer to join our team and drive the development of our data engineering capabilities. This role involves designing, developing, and maintaining scalable data pipelines, optimizing data infrastructure, and ensuring high-quality data for analytics and AI-driven solutions. The ideal candidate will have deep expertise in data modeling, cloud-based data platforms, and best practices in data engineering.
Key Responsibilities
Design, develop, and optimize scalable ETL/ELT pipelines for large-scale industrial data (see the sketch after this listing).
Architect and maintain data warehouses, lakes, and streaming solutions to support analytics and AI-driven insights.
Implement data governance, security, and quality best practices to ensure data integrity and compliance.
Work closely with Data Scientists, AI Engineers, and Software Developers to build robust data solutions.
Optimize data infrastructure performance for real-time and batch processing.
Leverage cloud-based technologies (AWS, Azure, GCP) to develop and deploy scalable data solutions.
Develop and maintain APIs and data access layers for seamless integration across platforms.
Collaborate with cross-functional teams to define and implement data strategy and architecture.
Stay up to date with emerging data engineering technologies and best practices.
Required Qualifications & Experience
5+ years of experience in data engineering, software development, or related fields.
Proficiency in programming languages such as Python, Scala, or Java.
Expertise in SQL and database technologies (PostgreSQL, MySQL, NoSQL, etc.).
Hands-on experience with big data technologies (e.g., Spark, Kafka, Hadoop).
Strong understanding of data warehousing (e.g., Snowflake, Redshift, BigQuery) and data lake architectures.
Experience with cloud platforms (AWS, Azure, or GCP) and cloud-native data solutions.
Knowledge of CI/CD pipelines, DevOps, and infrastructure as code (Terraform, Kubernetes, Docker).
Familiarity with MLOps and AI-driven data workflows is a plus.
Strong problem-solving skills, ability to work independently, and excellent communication skills.
Preferred Qualifications
Experience in the electricity, utilities, or industrial sectors.
Knowledge of IoT data ingestion and edge computing.
Familiarity with GraphQL and RESTful API development.
Experience in data visualization and business intelligence tools (Power BI, Tableau, etc.).
Contributions to open-source data engineering projects.
What We Offer
Competitive salary and performance-based incentives.
Comprehensive benefits package, including health, dental, and retirement plans.
Opportunities for career growth and professional development.
A dynamic work environment focused on innovation and cutting-edge technology.
Hybrid/remote work flexibility (depending on location and project needs).
How To Apply
Interested candidates should submit their resume and a cover letter detailing their experience and qualifications.
Fortive Corporation Overview
Fortive's essential technology makes the world stronger, safer, and smarter. We accelerate transformation across a broad range of applications, including environmental, health and safety compliance, industrial condition monitoring, next-generation product design, and healthcare safety solutions. We are a global industrial technology innovator with a startup spirit. Our forward-looking companies lead the way in software-powered workflow solutions, data-driven intelligence, AI-powered automation, and other disruptive technologies. We're a force for progress, working alongside our customers and partners to solve challenges on a global scale, from workplace safety in the most demanding conditions to groundbreaking sustainability solutions. We are a diverse team 17,000 strong, united by a dynamic, inclusive culture and energized by limitless learning and growth. We use the proven Fortive Business System (FBS) to accelerate our positive impact.
At Fortive, we believe in you. We believe in your potential, your ability to learn, grow, and make a difference. At Fortive, we believe in us. We believe in the power of people working together to solve problems no one could solve alone. At Fortive, we believe in growth. We're honest about what's working and what isn't, and we never stop improving and innovating. Fortive: For you, for us, for growth.
About Qualitrol
QUALITROL manufactures monitoring and protection devices for high-value electrical assets and OEM manufacturing companies. Established in 1945, QUALITROL produces thousands of different types of products on demand, customized to meet our individual customers' needs. We are the largest and most trusted global leader for partial discharge monitoring, asset protection equipment, and information products across power generation, transmission, and distribution. At Qualitrol, we are redefining condition-based monitoring.
We Are an Equal Opportunity Employer
Fortive Corporation and all Fortive Companies are proud to be equal opportunity employers. We value and encourage diversity and solicit applications from all qualified applicants without regard to race, color, national origin, religion, sex, age, marital status, disability, veteran status, sexual orientation, gender identity or expression, or other characteristics protected by law. Fortive and all Fortive Companies are also committed to providing reasonable accommodations for applicants with disabilities. Individuals who need a reasonable accommodation because of a disability for any part of the employment application process should contact us at applyassistance@fortive.com.
Bonus or Equity
This position is also eligible for a bonus as part of the total compensation package.
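To make the ETL/ELT responsibility above concrete, here is a minimal extract-transform-load sketch in Python using pandas and SQLite; the CSV source, column names, and target table are hypothetical stand-ins for real industrial data sources, not anything specified in the posting.

```python
# A minimal ETL sketch; file name, columns, and table are hypothetical.
import sqlite3
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path)  # pull raw readings from a (hypothetical) CSV drop

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["sensor_id"])  # discard rows with no asset key
    df["reading_ts"] = pd.to_datetime(df["reading_ts"], utc=True)  # normalize timestamps
    return df

def load(df: pd.DataFrame, conn: sqlite3.Connection) -> None:
    df.to_sql("sensor_readings", conn, if_exists="append", index=False)

if __name__ == "__main__":
    with sqlite3.connect("warehouse.db") as conn:  # stand-in for a real warehouse
        load(transform(extract("readings.csv")), conn)
```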
Posted 5 days ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
We're Hiring: MLOps Engineer (Azure)
Location: Ahmedabad, Gujarat
Experience: 3-5 Years
Employment Type: Full-Time
An immediate joiner will be preferred.
Job Summary
We are seeking a skilled and proactive MLOps/DataOps Engineer with strong experience in the Azure ecosystem to join our team. You will be responsible for streamlining and automating machine learning and data pipelines, supporting scalable deployment of AI/ML models, and ensuring robust monitoring, governance, and CI/CD practices across the data and ML lifecycle.
Key Responsibilities
MLOps
Design and implement CI/CD pipelines for machine learning workflows using Azure DevOps, GitHub Actions, or Jenkins.
Automate model training, validation, deployment, and monitoring using tools such as Azure ML, MLflow, or KubeFlow (a minimal MLflow tracking sketch follows this listing).
Manage model versioning, performance tracking, and rollback strategies.
Integrate machine learning models with APIs or web services using Azure Functions, Azure Kubernetes Service (AKS), or Azure App Services.
DataOps
Design, build, and maintain scalable data ingestion, transformation, and orchestration pipelines using Azure Data Factory, Synapse Pipelines, or Apache Airflow.
Ensure data quality, lineage, and governance using Azure Purview or other metadata management tools.
Monitor and optimize data workflows for performance and cost efficiency.
Support batch and real-time data processing using Azure Stream Analytics, Event Hubs, Databricks, or Kafka.
DevOps & Infrastructure
Provision and manage infrastructure using Infrastructure-as-Code tools such as Terraform, ARM Templates, or Bicep.
Set up and manage compute environments (VMs, AKS, AML Compute), storage (Blob, Data Lake Gen2), and networking in Azure.
Implement observability using Azure Monitor, Log Analytics, and Application Insights.
Required Skills
Strong hands-on experience with Azure Machine Learning, Azure Data Factory, Azure DevOps, and Azure Storage solutions.
Proficiency in Python, Bash, and scripting for automation.
Experience with Docker, Kubernetes, and containerized deployments in Azure.
Good understanding of CI/CD principles, testing strategies, and ML lifecycle management.
Familiarity with monitoring, logging, and alerting in cloud environments.
Knowledge of data modeling, data warehousing, and SQL.
Preferred Qualifications
Azure Certifications (e.g., Azure Data Engineer Associate, Azure AI Engineer Associate, or Azure DevOps Engineer Expert).
Experience with Databricks, Delta Lake, or Apache Spark on Azure.
Exposure to security best practices in ML and data environments (e.g., identity management, network security).
Soft Skills
Strong problem-solving and communication skills.
Ability to work independently and collaboratively with data scientists, ML engineers, and platform teams.
Passion for automation, optimization, and driving operational excellence.
To apply, contact: harshita.panchariya@tecblic.com
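For a sense of the MLflow-based automation mentioned above, here is a minimal experiment-tracking sketch in Python; the experiment name, parameter, and metric values are hypothetical, and a local tracking store is assumed rather than an Azure ML workspace.

```python
# A minimal MLflow tracking sketch; experiment name, param, and metric are hypothetical.
import mlflow

mlflow.set_experiment("demo-experiment")  # creates the experiment if it does not exist

with mlflow.start_run():
    mlflow.log_param("n_estimators", 200)  # hypothetical hyperparameter
    mlflow.log_metric("val_auc", 0.91)     # hypothetical validation score
    # In a full pipeline the trained model would also be logged here,
    # e.g. with mlflow.sklearn.log_model, and later registered for deployment.
```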
Posted 5 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Urgent Hiring: React JS, Java, Angular, DevOps & QA Automation Roles | Bangalore, Gurgaon, Chennai
We're Hiring - Multiple Roles Across Locations!
Face-to-Face Interviews - This Saturday (14th June 2025)
We're hiring experienced tech professionals for mission-critical roles with leading clients.
Locations: Bangalore | Gurgaon | Chennai
Full-time Positions | Immediate Joiners Preferred
React JS Developer (7-10 Yrs) - Bangalore / Gurgaon: React.js, AG Grid/Charts, Web SDK, GitLab, JIRA
Java Developer (Java 11/17) (6+ Yrs) - Gurgaon: Java 11+, Microservices, Kafka, Spring Boot, CI/CD
Angular 13+ Web Developer (6+ Yrs) - Bangalore: Angular, TypeScript, UX/UI, Git, Agile
DevOps Engineer (6-9 Yrs) - Chennai / Bangalore / Gurgaon: CI/CD, Python, AWS (ROSA), Terraform, GitLab
Cypress / Playwright Automation Engineer - Chennai / Bangalore: UI/API/Performance Automation, JavaScript/Java, CI/CD, AWS
Walk-in / Face-to-Face Interviews
Date: Saturday, 14th June 2025
Interested? Drop your resume at careers@talentwavesystems.com or DM us to schedule your slot.
Follow Talent Wave Systems for more insights and the latest hiring opportunities!
Posted 5 days ago