3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Ameyaa NXT
Ameyaa NXT is a fast-moving startup on a mission to leverage AI and data-driven innovation to build transformative products that shape the future. We are a small, ambitious team where every member’s contribution directly impacts the company’s trajectory. If you thrive in dynamic environments and want to build something truly impactful from the ground up, this is the place for you.

Role Overview
We are seeking a Senior AI/ML Engineer who is not only technically exceptional but also deeply committed to the startup journey: someone ready to take ownership, experiment boldly, and push boundaries. You will lead AI/ML initiatives from research and model development through deployment and scaling, ensuring our solutions are robust, scalable, and aligned with our vision. We value grit, execution, and creative problem-solving as much as technical expertise. For the right candidate, we are open to offering equity to align your growth with ours.

Key Responsibilities
• Lead the design, development, and deployment of cutting-edge AI/ML models and systems.
• Research and experiment with new algorithms, architectures, and frameworks to solve high-impact business problems.
• Collaborate closely with product, engineering, and design teams to integrate AI capabilities into our products.
• Optimize model performance, scalability, and inference speed for real-world usage.
• Build robust data pipelines for training, evaluation, and monitoring.
• Stay ahead of industry trends, tools, and research to keep Ameyaa NXT at the forefront of AI innovation.
• Mentor junior engineers and guide AI/ML best practices across the team.
• Work in a fast-paced startup environment, taking ownership of projects from idea to production.

Requirements
Technical Skills:
• Strong background in machine learning, deep learning, and AI frameworks (TensorFlow, PyTorch, Hugging Face, etc.).
• Expertise in at least one of: NLP, Computer Vision, Generative AI, Reinforcement Learning, or Predictive Analytics.
• Solid understanding of data preprocessing, feature engineering, and model optimization.
• Proficiency in Python and relevant libraries (NumPy, pandas, scikit-learn).
• Experience deploying models in production using FastAPI, Flask, or similar.
• Familiarity with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes).
Mindset & Experience:
• 3+ years of experience in AI/ML engineering, with at least one major project deployed in production.
• Experience in a startup or high-growth environment is highly preferred.
• Strong problem-solving skills and the ability to thrive under uncertainty.
• Passion for building, iterating, and improving products rapidly.
• Willingness to take on challenges outside the strict AI/ML scope when needed.

What We Offer
• Competitive salary & equity options for the right candidate.
• Flexible working environment with autonomy to own your work.
• Opportunity to shape the AI vision of a growing startup from the ground up.
• Access to cutting-edge tools, datasets, and experimentation resources.
• A culture of innovation, ownership, and collaboration.

If you’re looking for just another job, this isn’t it. If you want to build something impactful, wear multiple hats, and grow alongside a disruptive startup, let’s talk.
Posted 2 days ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
You’re ready to gain the skills and experience needed to grow within your role and advance your career, and we have the perfect software engineering opportunity for you. As a Software Engineer II at JPMorgan Chase within the Infrastructure Platforms team, you will be part of an agile team dedicated to enhancing, designing, and delivering software components for the firm's cutting-edge technology products in a secure, stable, and scalable manner. As an emerging member of the software engineering team, you will execute software solutions through the design, development, and technical troubleshooting of various components within a technical product, application, or system, while acquiring the skills and experience necessary for professional growth.

Job Responsibilities
• Administer and maintain Linux-based systems and Infrastructure as Code tools across development, testing, and production environments.
• Design, deploy, and manage containerized applications using Docker and Kubernetes (K8s).
• Develop and maintain automation scripts using Ansible for system configuration and application deployment.
• Create and manage batch job workflows using Control-M, ensuring reliability and performance.
• Write and maintain Python scripts for automation, integration, and monitoring purposes.
• Collaborate with cross-functional teams, including development, QA, and operations, to support CI/CD processes.
• Troubleshoot and resolve issues related to infrastructure, deployment, and automation.
• Apply strong analytical and problem-solving skills, working independently or as part of a team.

Required Qualifications, Capabilities, And Skills
• Formal training or certification in software engineering concepts and 2+ years of applied experience
• Hands-on practical experience in system design, application development, testing, and operational stability
• Hands-on experience with the Linux operating system, including system administration and troubleshooting
• Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
• Strong understanding of containerization technologies such as Docker and Kubernetes
• Proven experience in Ansible scripting for infrastructure automation
• Hands-on experience in writing and maintaining Control-M job scripts
• Proficiency in Python and Go scripting for various automation tasks
• Good understanding of CI/CD practices, version control (e.g., Git), and the software development lifecycle

Preferred Qualifications, Capabilities, And Skills
• Experience working in cloud environments (e.g., AWS, Azure, GCP)
• Exposure to monitoring and logging tools such as Prometheus, Grafana, or the ELK stack
• Familiarity with ITIL or other service management frameworks
Posted 2 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
• Design, implement, and maintain CI/CD pipelines using GitHub Actions and Jenkins, ensuring seamless integration and deployment workflows
• Enhance and manage existing Jenkins pipelines, optimizing for performance, reliability, and scalability
• Develop and maintain Terraform scripts for infrastructure provisioning and automation across environments
• Build, configure, and manage Azure Public Cloud resources, ensuring secure and efficient infrastructure operations
• Collaborate with development and operations teams to automate manual processes and improve system reliability
• Create and maintain scripts and utilities using Python, Bash, or PowerShell to support automation and monitoring
• Integrate AI-driven observability using Dynatrace Davis AI to proactively detect and resolve system anomalies
• Leverage GitHub Copilot to accelerate development of automation scripts and infrastructure code
• Monitor system performance, troubleshoot issues, and implement proactive solutions to ensure high availability
• Maintain comprehensive documentation for infrastructure, processes, and automation tools
• Participate in on-call rotations and incident response activities as needed
• Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, changes in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
• Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent practical experience)
• 6+ years of hands-on experience in DevOps, Site Reliability Engineering, or related roles
• Experience working with Azure Public Cloud, including resource provisioning, networking, and security
• Experience using GitHub Copilot to enhance development productivity and code quality
• Solid expertise in GitHub Actions and Jenkins, with proven experience in pipeline development and maintenance
• Solid understanding of Terraform and Infrastructure as Code (IaC) principles
• Proficiency in scripting languages such as Python, Bash, or PowerShell
• Familiarity with AI-based monitoring tools, especially Dynatrace Davis AI, for intelligent observability and incident prediction
• Basic programming knowledge to develop automation tools and utilities
• Proven excellent problem-solving skills and the ability to work independently or as part of a team
• Proven solid communication and documentation skills

Preferred Qualification
• Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
Posted 2 days ago
4.0 years
0 Lacs
India
On-site
This role requires Core Work Hours from 8:30 pm to 12:30 am IST (subject to change with US daylight saving time). The remaining hours are completed at individual discretion.

A Bit About You
A recognized expert in their professional discipline, with significant impact and influence on organizational policy and program development. Establishes critical strategic and operational goals, and develops and implements new products, processes, standards, or operational plans to achieve organizational objectives. Regularly leads projects of critical importance to the organization, projects that carry substantial consequences for success or failure. Requires strong influence and communication with executive leadership. Problems encountered are often complex and multidimensional, requiring broad-based consideration of variables that affect multiple areas of the organization.

You will:
• Design, architect, and create documentation for the entire system, down to the details, to meet team needs
• Proactively create and review team contributions to documentation within your domain of expertise
• Code entire software solutions to solve current problems, and identify and fix issues within your areas of expertise
• Participate as a CODEOWNER within your expertise and as a stakeholder throughout code reviews
• Automate unit, integration, and end-to-end testing solutions and incorporate them into the testing team's flow
• Run your code in pre-production and ensure quality
• Deploy your solutions across environments and platforms, including production releases
• Tear down and destroy old solutions, products, and resources when no longer needed
• Provide operational support for your deployed code and all code within your domain of expertise
• Identify issues across the entire team and prevent problems from occurring
• Coordinate across all business teams to identify, resolve, mitigate, and prevent technical issues and risks, and provide solutions
• Perform other job-related duties as assigned

You Have:
• A 4-year degree in Computer Science or a related field with 7+ years of experience, OR 9+ years of experience in software development
• 7+ years of experience in software development
• 5+ years of experience in AWS (EC2, Lambda, S3, CloudWatch)
• 3+ years of experience with CI/CD tools (GitHub, Jenkins, Spinnaker)
• 5+ years of experience in DevOps deployments with Azure/AWS services, CI/CD pipelines, and containerization (Docker/Kubernetes)
• Proficiency in building server-side rendered (SSR) and static sites using Next.js
• Experience in designing and integrating RESTful APIs for web applications
• Deep proficiency in React, Next.js, and TypeScript (demonstrated ability to build complex and performant web applications using these technologies)
• Solid understanding of HTML5 and CSS3, including semantic HTML and responsive design principles
• Proven experience with website performance optimization techniques, including code splitting, lazy loading, image optimization, and caching
• In-depth knowledge of SEO principles and best practices for public-facing websites
• Significant experience working with mapping libraries, specifically Mapbox GL JS
• Demonstrated ability to handle large datasets and implement performant map visualizations
• Ability to write efficient, scalable, and maintainable code
• Excellent problem-solving, debugging, and analytical skills
• Ability to work independently and as part of a team
• Strong communication and collaboration skills, with the ability to effectively communicate technical concepts to both technical and non-technical audiences

Would be great if you have:
• Experience in ingesting and exposing large datasets in the weather industry
• Experience, in general, in the advertising industry
• Experience in C# and .NET Core/.NET Framework for backend development
• Experience in CFN, APIGateway, DDB, IAM, SNS
• Understanding of technical SEO, Core Web Vitals, and best practices for search engine ranking
• Experience in integrating Mapbox or other mapping services into web applications
• Experience in designing scalable and modular web architectures
• Familiarity with unit testing, integration testing, and debugging techniques
• Experience with CI/CD tools, specifically GitHub, Jenkins, Spinnaker, and Artifactory
• Experience with AWS products, specifically CloudFormation, DynamoDB, SNS, APIGateway, Route53, IAM, and tagging
• Experience working with formats like TXT, CSV, Markdown, JSON, YAML, GeoJSON, WKT, GZIP, and Parquet
• Experience with other languages, frameworks, and platforms, specifically Infrastructure as Code, Java, HTML, JavaScript, TypeScript, Node.js, Python, shell, React, and C

You are:
• A team player who is organized, flexible, and willing to adapt
• Not afraid of new technologies and driven to learn
• A detail-oriented person who catches problems early and adjusts
• A strong communicator able to collaborate with multiple business and engineering stakeholders and work through conflicting needs
• A problem solver who likes to dive deep into a problem, diagnose root causes, and work with multiple teams to come up with a solution
• Organized, with a demonstrated ability to prioritize and deliver timely work
• A team player not afraid to roll up your sleeves and help when needed
• Self-sufficient and not afraid to take the lead and manage tasks independently
• Coachable and open to feedback
• Respectful: we treat each other with respect and assume the best of one another
• Not afraid to have fun!

Benefits
What we offer: At WeatherBug, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
• Parental leave (maternity and paternity)
• Flexible time off (earned leave, sick leave, birthday leave, bereavement leave & company holidays)
• In-office daily catered breakfast, lunch, snacks, and beverages
• Health cover for any hospitalization, covering both the nuclear family and parents
• Tele-med for free doctor consultations, plus discounts on health checkups and medicines
• Wellness/gym reimbursement
• Pet expense reimbursement
• Childcare expenses and reimbursements
• Employee referral program
• Education reimbursement program
• Skill development program
• Cell phone reimbursement (mobile subsidy program)
• Internet reimbursement/postpaid cell phone bill/or both
• Birthday treat reimbursement
• Employee Provident Fund Scheme offering different tax-saving options, such as the Voluntary Provident Fund and employee and employer contributions up to 12% of basic
• Creche reimbursement
• Co-working space reimbursement
• National Pension System employer match
• Meal card for tax benefit
• Special benefits on salary account
Posted 2 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Overview: We are seeking a skilled Java Backend Developer for an immediate, on-site opportunity in Bangalore. The ideal candidate will be well-versed in modern Java versions and backend frameworks, with a proven track record in designing robust, scalable microservices architectures.

Key Responsibilities:
*Develop, test, and maintain backend services using Java (8 and 17), Spring Boot, and Spring WebFlux.
*Design scalable microservices leveraging SOLID principles and best practices in data structures and algorithms.
*Work with both SQL and NoSQL databases to structure, optimize, and secure data storage and access.
*Implement, deploy, and manage applications on AWS and/or Azure cloud platforms.
*Utilize Docker and Kubernetes, including AKS (Azure Kubernetes Service), for containerization and orchestration of backend services.
*Collaborate with cross-functional teams to design, develop, and deploy new features.
*Troubleshoot complex production issues and optimize system performance and reliability.
Posted 2 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
We are seeking experienced Senior Java Backend Developers to join our growing teams in Mumbai and Hyderabad. As a backend specialist, you will design, develop, and maintain scalable server-side applications and microservices using the latest Java technologies.

Key Responsibilities
*Design, develop, and maintain backend systems and microservices using Java and Spring Boot.
*Build robust and scalable RESTful APIs.
*Work with SQL databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, Cassandra).
*Manage version control using Git.
*Implement CI/CD pipelines and containerization with Docker and Kubernetes.
*Collaborate with cross-functional teams to define and deliver new features.
*Ensure performance, quality, and responsiveness of applications.
*Troubleshoot and resolve technical issues across development, testing, and production environments.
Posted 2 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role Summary
The Enterprise Cloud Analyst – L2 will manage and optimize AWS and Azure cloud environments, ensuring high availability, security, and cost efficiency. The role involves provisioning, monitoring, and supporting both public and private cloud infrastructure, working closely with cross-functional teams to deliver reliable, scalable solutions. The position is part of the Infrastructure Services Team, supporting business objectives through world-class infrastructure operations.

Responsibilities
• Manage AWS/Azure VM environments, applying best practices for deployment and maintenance.
• Provision, monitor, and automate using Terraform, CloudFormation, Docker, Puppet, and Python scripts.
• Support public/private cloud migrations and optimize system performance.
• Implement security policies, capacity planning, and cost optimization strategies.
• Use monitoring tools (Nagios, New Relic, AWS CloudWatch, Grafana) for proactive issue resolution.
• Collaborate with internal teams to ensure timely project delivery.
• Maintain system documentation and participate in on-call rotations.

Requirements
• 6+ years in cloud infrastructure management (AWS & Azure).
• Strong knowledge of Microsoft services (AD, DNS, DHCP, Azure AD) and Linux administration.
• Database experience (MSSQL, MySQL) with monitoring and maintenance skills.
• Proficiency in Python, Shell, and PowerShell scripting.
• Familiarity with DevOps tools, automation, and middleware technologies.
• Experience with on-premise-to-cloud migrations and data center infrastructure.
• Strong communication, teamwork, and problem-solving abilities.

Certifications
Required: RedHat Certification, AWS Certified Solutions Architect
Desirable: AZ-104 Microsoft Azure Administrator, MCSE Cloud Platform & Infrastructure
Posted 2 days ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Who you’ll be working with:
Better’s Engineering team is focused on re-imagining and rebuilding one of the most stressful, broken processes out there: buying a home. From our home shopping experience to our loan platform, our engineers solve complex technical problems at scale. Our customers are at the center of all we do, and our teams are dedicated to delivering the best experience for a life-changing process. We believe iteration speed is a competitive advantage in a legacy industry, so we move fast and deploy often. Despite our tremendous growth in 2021, every new engineer has a significant opportunity to help us shape the technical direction and culture of the team.

What you’ll be doing:
• Building software that streamlines and demystifies the single most important purchase of our users' lives
• Automating manual or paper processes that historically take days or weeks
• Working with business stakeholders to understand requirements and make technical decisions that have a long-lasting impact
• Laying the groundwork for new platforms and products we're building as we become the de facto homeownership destination
• Collaborating closely with teammates across Product, Design, and other Engineering teams to build better products for our customers
• You’ll be assigned to a pod where we bring specialists with unique skill sets and diverse competencies together to build the best, scalable solutions
• Opportunities to work as a full-stack, backend, or frontend engineer, depending on candidate interest and business need

Who you are:
• 6+ years of experience coding in at least one modern language such as JavaScript, TypeScript, Python, Java, Scala, or Go
• 2+ years of experience contributing to the architecture and design of new and existing systems (architecture, design patterns, reliability, and scaling)
• You’ve built sophisticated production software systems
• Understanding of microservices architecture and event-driven distributed systems
• Strong grasp of relational database concepts
• Startup mindset, ownership, and a proper balance of quality and sense of urgency

Better technology:
• We continuously ship code to production 50-100 times every day
• Services written in Node.js, Python, and Go that expose APIs using GraphQL or OpenAPI
• React, CSS-in-JS, Apollo/GraphQL, webpack, SCSS, and Ember.js on the frontend
• TypeScript / ES7 across the stack
• Postgres for our relational databases, ActiveMQ for our message broker, Redis for caching, Snowflake for analytics
• Kubernetes, Docker, and Terraform for deployment and devops
• AWS infrastructure, leveraging EC2, S3, CloudFront, Route53, and much more
Posted 2 days ago
1.0 - 2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About TripleLift
We're TripleLift, an advertising platform on a mission to elevate digital advertising through beautiful creative, quality publishers, actionable data, and smart targeting. Through over 1 trillion monthly ad transactions, we help publishers and platforms monetize their businesses. Our technology is where the world's leading brands find audiences across online video, connected television, display, and native ads. Brand and enterprise customers choose us because of our innovative solutions, premium formats, and supportive experts dedicated to maximizing their performance. As part of the Vista Equity Partners portfolio, we are NMSDC certified, qualify for diverse spending goals, and are committed to economic inclusion. Find out how TripleLift raises up the programmatic ecosystem at triplelift.com.

Role
TripleLift, a fast-growing AdTech platform tackling some of the most challenging problems in the world of digital advertising, is seeking a skilled and enthusiastic Data Analyst to join our Data Science and Analytics team. The Data Analyst will play a key role in analyzing datasets, deriving insights, and contributing to data-driven decision-making across the organization. The ideal candidate will have a foundational background in statistical analysis, data manipulation, and data visualization techniques. They should possess good problem-solving skills, attention to detail, and the ability to communicate findings effectively to stakeholders.

Responsibilities
• Perform in-depth analysis of datasets to identify trends and patterns
• Support the design and execution of experiments to test hypotheses
• Interpret and communicate findings through reports, presentations, and data visualizations
• Provide data-driven insights to support decision-making processes
• Conduct ad-hoc analyses and provide analytical support to various departments as needed
• Assist in the development and implementation of analytical models to extract meaningful insights from data
• Collaborate with cross-functional teams to understand project objectives and data requirements
• Contribute to the enhancement of data infrastructure, including data collection and storage
• Stay up to date on emerging trends, tools, and technologies in data analysis and data science

Qualifications
• Bachelor's degree, or higher, in Statistics, Mathematics, Computer Science, Economics, or a related field
• Minimum of 1-2 years of experience in data analysis, with a proven track record of contributing to insights and business impact
• Proficiency in SQL for data extraction and manipulation
• Experience with Python for statistical analysis
• Strong analytical and quantitative skills, with a basic understanding of statistical methods and hypothesis testing
• Experience working with datasets and using data visualization tools such as Looker, Power BI, or Matplotlib
• Basic understanding of data warehousing concepts and experience working with relational databases (e.g., MySQL, PostgreSQL)
• Familiarity with monitoring and measuring statistical-model performance, with the ability to build basic dashboards using tools like Grafana or Looker
• Good communication skills, with the ability to convey technical concepts to non-technical stakeholders
• Ability to work independently and collaboratively in a fast-paced environment, managing multiple priorities and deadlines effectively
• Strong attention to detail and a commitment to delivering quality work

Additional Preferred Skills
• Exposure to machine learning techniques and algorithms
• Prior AdTech experience
• Familiarity with big data technologies such as Spark or Kafka
• Experience with AWS or other cloud platforms

Technologies
From our early days, we've always believed in using the right tools for the right job, and we continue to explore new technology options as we grow. The Data Science and Analytics team uses the following technologies at TripleLift:
• Languages: Python
• Frameworks: Spark, Databricks, ONNX, Docker, Airflow
• Databases: MySQL, Snowflake, S3/Parquet
• Amazon Web Services to keep everything running

Life at TripleLift
At TripleLift, we're a team of great people who like who they work with and want to make everyone around them better. This means being positive, collaborative, and compassionate. We hustle harder than the competition and are continuously innovating. Learn more about TripleLift and our culture by visiting our LinkedIn Life page.

Establishing People, Culture and Community Initiatives
At TripleLift, we are committed to building a culture where people feel connected, supported, and empowered to do their best work. We invest in our people and foster a workplace that encourages curiosity, celebrates shared values, and promotes meaningful connections across teams and communities. We want to ensure the best talent of every background, viewpoint, and experience has an opportunity to be hired, belong, and develop at TripleLift. Through our People, Culture, and Community initiatives, we aim to create an environment where everyone can thrive and feel a true sense of belonging.

Privacy Policy
Please see our Privacy Policies on our TripleLift and 1plusX websites. TripleLift does not accept unsolicited resumes from any type of recruitment search firm. Any resume submitted in the absence of a signed agreement will become the property of TripleLift, and no fee shall be due.
Posted 2 days ago
2.0 years
4 - 8 Lacs
Mumbai Metropolitan Region
On-site
Profile: MERN Stack Developer
📍 Location: Andheri East, Mumbai
🏤 Work Mode: 5 Days WFO
⏰ Experience: 2+ Years (only immediate joiners and candidates who have completed their notice period)

What We're Looking For
✅ 2+ years of MERN stack development experience
✅ MongoDB: database design and complex queries
✅ Express.js: server-side application development
✅ React.js: component-based UI development
✅ Node.js: backend JavaScript runtime
✅ Kafka: event streaming and messaging
✅ Docker: containerization and deployment
✅ Redis: caching and session management
✅ RESTful API design and integration

Skills: MERN Stack, Redis, MongoDB, Docker, React.js, Kubernetes, Express, and Node.js
Posted 2 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About TripleLift We're TripleLift, an advertising platform on a mission to elevate digital advertising through beautiful creative, quality publishers, actionable data and smart targeting. Through over 1 trillion monthly ad transactions, we help publishers and platforms monetize their businesses. Our technology is where the world's leading brands find audiences across online video, connected television, display and native ads. Brand and enterprise customers choose us because of our innovative solutions, premium formats, and supportive experts dedicated to maximizing their performance. As part of the Vista Equity Partners portfolio, we are NMSDC certified, qualify for diverse spending goals and are committed to economic inclusion. Find out how TripleLift raises up the programmatic ecosystem at triplelift.com. Role TripleLift, a fast-growing startup tackling some of the most challenging problems in the world of digital advertising, is seeking a Data Scientist with a focus on optimization strategies for our real-time ad marketplace. As a Data Scientist at TripleLift, you will contribute to experiments to understand bidding behaviors in our real-time marketplace, improving the performance of bidding algorithms, and optimizing outcomes for Publishers, DSPs and TripleLift alike. You will help build proof of concept models, put ML models in production, create reusable features and data structures, and collaborate with cross-functional teams to drive Data Science initiatives forward. 
Responsibilities Contribute to research and experiments to improve the performance of our bidding algorithm and maximize performance for our customers Help build new bidding and pricing optimization strategies based on cutting edge research, assist in building proof of concept ML models, and drive them to successful outcomes in full scale production Support analytics projects in partnership with Product, Engineering, and cross-functional teams to support and influence product strategies Monitor and measure statistical modeling performance, and build dashboards and alerts to ensure models are functioning effectively Build reusable modules and data structures, and provide guidance and feedback to team members on their work, taking into account their skills, backgrounds and working styles Qualifications Bachelor’s degree or higher in a related quantitative field (E.g. Mathematics, Computer Science, Engineering, Economics, or Operations Research) At least two years of work experience in data science and machine learning Familiarity with a majority of tools used in our tech stack, including Python, Spark, DataBricks, ONNX, MySQL, Snowflake, Airflow, Docker and Amazon Web Services Familiarity with ML libraries like scikit-learn to quickly analyze data and prototype models that can be used in high volume distributed systems Familiarity with monitoring and measuring statistical modeling performance via dashboards and alerts, using tools like Prometheus, Grafana, Looker, etc., to make sure models are functioning effectively Excellent technical communication skills Strong analytical and problem-solving skills Committed to a process of continuous learning and spreading subject matter expertise Technologies From Our Early Days, We’ve Always Believed In Using The Right Tools For The Right Job And Continue To Explore New Technology Options As We Grow. 
The data science team uses the following technologies at TripleLift:
• Languages: Python, Java
• Frameworks: Spark, Databricks, ONNX, Docker, Airflow
• Databases: MySQL, Snowflake, S3/Parquet
• Amazon Web Services to keep everything running
Life at TripleLift
At TripleLift, we're a team of great people who like who they work with and want to make everyone around them better. This means being positive, collaborative, and compassionate. We hustle harder than the competition and are continuously innovating. Learn more about TripleLift and our culture by visiting our LinkedIn Life page.
Establishing People, Culture and Community Initiatives
At TripleLift, we are committed to building a culture where people feel connected, supported, and empowered to do their best work. We invest in our people and foster a workplace that encourages curiosity, celebrates shared values, and promotes meaningful connections across teams and communities. We want to ensure the best talent of every background, viewpoint, and experience has an opportunity to be hired, belong, and develop at TripleLift. Through our People, Culture, and Community initiatives, we aim to create an environment where everyone can thrive and feel a true sense of belonging.
Privacy Policy
Please see our Privacy Policies on our TripleLift and 1plusX websites. TripleLift does not accept unsolicited resumes from any type of recruitment search firm. Any resume submitted in the absence of a signed agreement will become the property of TripleLift and no fee shall be due.
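As a toy illustration of the marketplace mechanics this role optimizes, here is a minimal Python sketch of a sealed-bid second-price auction with a price floor, a common clearing rule in real-time ad exchanges. It is illustrative only (the function and bidder names are hypothetical), not TripleLift's actual bidding algorithm:

```python
def run_second_price_auction(bids, floor=0.0):
    """Pick the winner and clearing price in a sealed-bid second-price auction.

    bids: dict mapping bidder id -> bid (e.g., CPM). The winner pays the
    greater of the second-highest eligible bid and the floor. Returns
    (winner, clearing_price), or (None, None) if no bid meets the floor.
    """
    eligible = {b: v for b, v in bids.items() if v >= floor}
    if not eligible:
        return None, None
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner, _top = ranked[0]
    second = ranked[1][1] if len(ranked) > 1 else floor
    return winner, max(second, floor)
```

Bid optimization work then asks questions on top of this mechanic, e.g., how shading a bid changes win rate versus paid price.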
Posted 2 days ago
1.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Information
Date Opened: 08/12/2025
Industry: IT Services
Job Type: Full time
Work Experience: 1-3 years
City: Pune City
State/Province: Maharashtra
Country: India
Zip/Postal Code: 411057
About Us
CCTech's mission is to transform human life by the democratization of technology. We are a well-established digital transformation company building applications in the areas of CAD, CFD, Artificial Intelligence, Machine Learning, 3D Webapps, Augmented Reality, Digital Twin, and other enterprise applications. We have two business divisions: product and consulting. simulationHub is our flagship product and the manifestation of our vision. Currently, thousands of users use our CFD app in their upfront design process. Our consulting division, with partners such as Autodesk Forge, AWS and Azure, is helping the world's leading engineering organizations, many of them Fortune 500 companies, achieve digital supremacy.
Job Description
We are seeking motivated full-stack developers with 1+ years of experience. The ideal candidate will work on both front-end and back-end development tasks.
Responsibilities
• Front-End Development: Design and implement responsive, user-friendly interfaces using modern frameworks (e.g., React, Angular). Collaborate with designers to translate Figma designs/wireframes into functional code.
• Back-End Development: Develop robust, secure, and scalable back-end services using technologies like Node.js and Python. Create and maintain RESTful APIs for seamless data exchange between client and server.
• Database Management: Design and implement SQL and NoSQL (e.g., MongoDB, DynamoDB) databases. Optimize queries for performance and scalability.
• Cloud and Deployment: Manage cloud platforms (e.g., AWS, Azure). Use containerization tools like Docker and orchestration tools like Kubernetes for deployment. Integrate CI/CD pipelines to automate builds, testing, and deployments.
• Testing and Debugging: Conduct unit, integration, and end-to-end testing.
• Collaboration and Documentation: Work closely with product managers, designers, and other developers to define project requirements and deliverables. Document code, APIs, and technical processes for maintainability and knowledge sharing.
Requirements
• Creating modular and reusable components
• Translating UI/UX reference designs into code as closely as possible
• Creating more than basic CRUD applications
• Unit testing and E2E testing automation
• Maintaining documentation
Nice To Have
• Knowledge of deploying to any hosting platform
• Experience working with Linux/Unix environments
• Experience working with Three.js
Must Have
• HTML/CSS
• JavaScript
• Any data storage such as SQL, MongoDB, etc.
• Fluency in English communication and comprehension
Benefits
• Opportunity to work with a dynamic and fast-paced IT organization.
• Make a real impact on the company's success by shaping a positive and engaging work culture.
• Work with a talented and collaborative team.
• Be part of a company that is passionate about making a difference through technology.
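The "more than basic CRUD" requirement above presumes the CRUD baseline itself: REST's POST/GET/PUT/DELETE verbs mapping onto create/read/update/delete operations. A framework-free, in-memory Python sketch of that baseline (class and method names are illustrative):

```python
import itertools

class ResourceStore:
    """In-memory CRUD store mirroring REST semantics for one resource type."""

    def __init__(self):
        self._items = {}
        self._ids = itertools.count(1)  # auto-incrementing ids

    def create(self, data):            # POST /items
        item_id = next(self._ids)
        self._items[item_id] = dict(data)
        return item_id

    def read(self, item_id):           # GET /items/<id>; None if missing
        return self._items.get(item_id)

    def update(self, item_id, data):   # PUT /items/<id>; False if missing
        if item_id not in self._items:
            return False
        self._items[item_id].update(data)
        return True

    def delete(self, item_id):         # DELETE /items/<id>; False if missing
        return self._items.pop(item_id, None) is not None
```

In a real application the same four operations would sit behind HTTP handlers and a database rather than a dict.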
Posted 2 days ago
6.0 years
0 Lacs
India
On-site
Company Background
Founded in 1992, Mi-Case is the industry leader in fully integrated offender management software solutions and provides industry expertise and consulting within criminal justice and public safety systems. Mi-Case leverages a unique combination of technical, functional and industry specialization, as well as partnerships with key software vendors, to deliver maximum-value projects.
Job Description:
We are seeking a highly skilled and motivated Software Development Engineer in Test (SDET) to lead the design and implementation of a scalable, maintainable, and reusable Playwright-based automation framework. This role will be instrumental in shaping our test automation strategy across multiple product teams, ensuring high-quality releases and accelerating delivery through robust test coverage and CI/CD integration.
Responsibilities:
• Architect and build a modular Playwright automation framework using C# with a focus on reusability and scalability across teams.
• Collaborate with development and QA teams to integrate automation into the SDLC, including pipeline integration and test data management.
• Define and promote best practices for test automation, including naming conventions, code readability, and maintainability.
• Mentor QA engineers on automation techniques and Playwright usage, enabling broader team adoption while promoting a shift-left testing approach.
• Partner with developers to ensure applications are testable, including advocating for unique element identifiers and test hooks.
• Optimize test execution speed and reliability through parallel execution, smart waits, and flaky test management.
• Contribute to roadmap planning by identifying automation opportunities and technical debt.
• Lead code reviews, troubleshoot test failures, and continuously improve test reliability and performance.
Mandatory Skills:
• 6+ years of experience in test automation, with at least 2 years using Playwright (or similar frameworks like Selenium).
• Strong programming skills in C#, with experience designing libraries, frameworks, or SDKs for test automation.
• Experience with CI/CD tools (e.g., Azure DevOps, GitHub Actions, Jenkins) and integrating test automation into pipelines.
• Deep understanding of the Page Object Model and test design patterns.
• Ability to own the automation initiative end-to-end, proactively raise challenges, and unblock the team through your expertise.
• Proven ability to lead framework development and influence cross-functional teams.
• Excellent communication and documentation skills.
Desired Skills:
• Experience with MudBlazor or similar component libraries.
• Familiarity with enterprise-scale QA processes and test planning.
• Familiarity with containerization (Docker) and cloud platforms (AWS/Azure).
• Experience with load/performance testing tools such as Artillery, k6, or JMeter.
• Exposure to public safety or criminal justice software domains is a plus.
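The posting's framework is C#, but the "flaky test management" idea it lists translates to any language. Below is a hedged Python sketch of a retry-with-backoff decorator, a common quarantine tactic while the underlying race is being fixed (the names and defaults are illustrative, not part of Playwright or Mi-Case's framework):

```python
import functools
import time

def retry_flaky(times=3, delay=0.0, backoff=2.0):
    """Re-run a flaky test/step up to `times` attempts with exponential backoff.

    Retries only on AssertionError so genuine crashes still fail fast.
    Prefer smart waits and fixing the root cause; treat this as a stopgap.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            wait = delay
            last_exc = None
            for attempt in range(1, times + 1):
                try:
                    return fn(*args, **kwargs)
                except AssertionError as exc:
                    last_exc = exc
                    if attempt < times and wait:
                        time.sleep(wait)
                        wait *= backoff
            raise last_exc  # all attempts exhausted
        return wrapper
    return decorator
```

Most test runners (pytest plugins, NUnit retry attributes) offer the same behavior off the shelf; the sketch just shows the mechanism.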
Posted 2 days ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
DevOps Engineer
We are seeking a DevOps Engineer to play a pivotal role in optimizing our development and deployment processes, ensuring the reliability, scalability, and security of our systems. You will work closely with cross-functional teams to automate and streamline our operations and processes. The ideal candidate will have a strong background in DevOps practices and a passion for leveraging cutting-edge technologies to drive continuous improvement.
Roles and Responsibilities:
• Design, implement, and maintain continuous integration and continuous deployment (CI/CD) pipelines using tools such as Jenkins, GitLab CI/CD, or CircleCI.
• Containerize applications using Docker and manage containerized environments with Kubernetes or other orchestration platforms to enable scalable and resilient microservices architectures.
• Develop and maintain monitoring, logging, and alerting solutions to ensure proactive identification and resolution of issues.
• Identify opportunities for automation and process optimization to improve the efficiency, reliability, and scalability of systems and workflows.
• Continuously evaluate and adopt new tools and technologies to stay current with industry trends and best practices in DevOps.
Requirements:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 4+ years of experience in a DevOps or Site Reliability Engineering (SRE) role.
• Strong proficiency in scripting and programming languages such as Python, Shell, or Go.
• Hands-on experience with CI/CD tools like Jenkins and GitLab CI/CD.
• Experience with containerization and orchestration technologies (Docker, Kubernetes).
• Knowledge of microservices architecture patterns and best practices.
• Solid understanding of networking, security, and system administration concepts.
• Excellent communication and collaboration skills, with the ability to work effectively in a fast-paced environment.
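To make the monitoring-and-alerting responsibility concrete, here is a minimal sliding-window error-rate monitor in Python. It is a toy version of the logic an alerting rule in a tool like Prometheus or Grafana would encode; the class name, window size, and threshold are illustrative:

```python
from collections import deque

class ErrorRateAlert:
    """Fire an alert when the error fraction over the last `window`
    requests exceeds `threshold` (e.g., 5% over the last 100 calls)."""

    def __init__(self, window=100, threshold=0.05):
        self._window = deque(maxlen=window)  # 1 = error, 0 = success
        self.threshold = threshold

    def record(self, ok):
        self._window.append(0 if ok else 1)

    @property
    def error_rate(self):
        if not self._window:
            return 0.0
        return sum(self._window) / len(self._window)

    def should_alert(self):
        return self.error_rate > self.threshold
```

Production systems add hysteresis and "for N minutes" conditions so a single bad burst does not page anyone.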
Posted 2 days ago
4.0 - 5.0 years
0 Lacs
Raipur, Chhattisgarh, India
On-site
Location: Raipur
As an HR Business Partner, you will support day-to-day HR operations, assist in employee engagement, and help implement HR initiatives aligned with business goals. This is an excellent opportunity for an early-career HR professional (4 to 5 years' experience) to grow within a vibrant organization.
Key Responsibilities:
• Assist in onboarding of new employees.
• Support employee engagement activities and communication.
• Maintain HR records and employee database updates.
• Support performance management processes.
• Address employee queries and maintain compliance with HR policies.
• Collaborate with the HR team on payroll, attendance, and other HR operations.
• Contribute to building a positive, inclusive work culture.
Requirements:
• Master's degree in Human Resources, Business Administration, or a related field.
• Strong communication and interpersonal skills.
• Enthusiastic, proactive, and eager to learn.
• Basic knowledge of HR practices and labor laws.
• Ability to work collaboratively in a team.
• Prior internship or exposure to HR functions is a plus but not mandatory.
About VRIZE INC
VRIZE is a global digital & data engineering company, committed to delivering end-to-end digital solutions and services to its customers worldwide. We offer business-friendly solutions across industry verticals that include Banking, Financial Services, Healthcare & Insurance, Manufacturing, and Retail. The company has strategic business alliances with industry leaders such as Adobe, IBM Sterling Commerce, IBM, Microsoft, Docker, Sisense, Competera, Snowflake, and Tableau. VRIZE is headquartered in Tampa, Florida, with a team of 410 employees globally; currently, 100% of the clients undertaken are in the United States. Delivery centers are distributed across the US, Canada, Serbia, and India. With stellar growth and projections of 100% YoY growth over the last 3 years, the company has been successfully addressing its clients' digital disruption needs.
Our continued success depends to a large extent on our ability to remain at the forefront of disruptive developments in information technology, and leaders and team members joining us are expected to uphold the same. VRIZE is an equal-opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, marital status, age, national origin, ancestry, disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law. Individuals with disabilities are provided reasonable accommodation.
Posted 2 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Join Walmart’s Enterprise Messaging Framework (EMF) team to build and scale near real-time telemetry pipelines across thousands of physical locations. We’re seeking a Backend Developer with strong Java and IoT expertise to design high-performance, API-driven systems and integrate edge devices with cloud platforms.
📍 Location: Bangalore (Hybrid) | Experience: 5+ years
Key Skills:
• Java 17+, Spring Framework
• API development (XML/JSON)
• IoT protocols: Modbus, BACnet
• Wireshark / packet-level debugging
• Docker, Kubernetes
• Azure (preferred) / GCP
• Observability & monitoring tools
You will:
• Develop scalable backend services for enterprise IoT
• Work on edge-to-cloud integrations and real-time data processing
• Troubleshoot low-level communication issues
• Collaborate in an Agile, distributed team
• Mentor junior developers
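For a flavor of the packet-level debugging the Modbus skill implies, below is a simplified Python decoder for a Modbus RTU "Read Holding Registers" (function 0x03) response frame. CRC verification and exception responses (function code | 0x80) are omitted for brevity; production code would handle both:

```python
import struct

def decode_holding_registers(frame: bytes):
    """Decode a Modbus RTU 'Read Holding Registers' (0x03) response.

    Layout: unit id (1 byte) | function 0x03 (1 byte) | byte count (1 byte) |
    N big-endian 16-bit registers | CRC16 (2 bytes, not verified here).
    """
    unit, func, count = frame[0], frame[1], frame[2]
    if func != 0x03:
        raise ValueError("not a Read Holding Registers response")
    # Each register is an unsigned big-endian 16-bit value.
    regs = struct.unpack(f">{count // 2}H", frame[3:3 + count])
    return {"unit": unit, "registers": list(regs)}
```

This is the same byte layout you would see when inspecting the conversation in Wireshark.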
Posted 2 days ago
2.0 years
0 Lacs
India
Remote
🌐 Industrial Simulation Engineer (Python/C++ for Industrial AI)
Verticals: Automotive / Construction / Railway / Naval / Aviation / Aerospace
📍 Remote | Minimum of 2 years' experience | Full-Time
📅 Applications close: August 30, 2025
About the Role
Neodustria is hiring Simulation Engineers to build the computational backbone of our AI-native engineering platform. You will develop, implement, and deploy sophisticated physics-based and machine-learning simulations that solve critical challenges in heavy industries. This is a hands-on coding role for builders. We are seeking engineers who write Python or C++ to create and automate simulations, not operators who primarily use GUI-based software like ANSYS Workbench, Abaqus, or CATIA. You will transform complex industry requirements into robust, scalable code that serves as the intelligence engine for your vertical (e.g., Automotive, Construction, Railway). You’ll be embedded in a Vertical Cell, working closely with our AI, Platform, and Sales teams to build the future of industrial simulation from the ground up.
What You’ll Do (Responsibilities)
• Develop & Implement Simulations: Write clean, high-performance Python and/or C++ code to create physics-based (FEA, CFD, thermodynamics) and data-driven (ML/DL/RL) simulation models.
• Integrate Solvers: Automate and integrate open-source solvers (e.g., OpenFOAM, Code_Aster, EnergyPlus) into our cloud platform using APIs and scripting.
• Build ML Models: Design, train, and deploy machine learning models (e.g., graph neural networks, transformers, surrogate models) to predict engineering outcomes, replacing or augmenting traditional solvers.
• Translate Theory to Code: Convert engineering standards (ISO, Eurocodes, etc.) and first-principle physics into validated computational models.
• Own Simulation Scenarios: Take full ownership of implementing and validating simulation scenarios, like the ones detailed in our project files, from initial concept to production deployment.
• Collaborate on Data Pipelines: Work with the AI Cell to structure and process simulation data (inputs, outputs, mesh data) for training next-generation AI models.
Core Qualifications (Must-Haves)
✅ 2+ years of professional experience with a strong focus on Python or C++ for scientific or engineering applications.
✅ A strong portfolio or track record of building simulation tools, custom solvers, or complex ML models from scratch.
✅ Solid foundation in numerical methods, linear algebra, and data structures.
✅ Proven experience in at least one of the following domains:
• Physics-Based Simulation: Programmatic experience with FEA, CFD, or thermodynamics (e.g., building pre/post-processors, scripting solvers, developing custom physics modules).
• Machine Learning: Experience applying ML to scientific/engineering problems (e.g., surrogate modeling, physics-informed neural networks, reinforcement learning for design optimization).
Preferred Qualifications (Nice-to-Haves)
👍 Experience scripting or contributing to open-source solvers like OpenFOAM, Code_Aster, CalculiX, EnergyPlus, OpenSeesPy, or Project Chrono.
👍 Proficiency with scientific computing libraries (NumPy, SciPy, Pandas) and ML frameworks (PyTorch, TensorFlow, JAX, scikit-learn).
👍 Experience with programmatic manipulation of 3D geometry (e.g., using Trimesh, IfcOpenShell, OpenCASCADE, PyVista).
👍 Familiarity with MLOps, containerization (Docker), and running high-performance computing (HPC) workloads in the cloud.
👍 Deep domain knowledge in Automotive, Construction, Railway, Naval, Aviation, or Aerospace.
Our Tech Stack & Tools
• Core Languages: Python, C++
• Libraries & Frameworks: NumPy, SciPy, Pandas, PyTorch, JAX, FastAPI
• Solvers & Simulators: OpenFOAM, Code_Aster, EnergyPlus, and custom-built models
• 3D Data: STEP, IFC, STL, glTF
• Collaboration: Linear, Gemini, Google Workspace, GitHub
What You’ll Influence
• Shape the core simulation roadmap for your entire industry at Neodustria.
• Define how our AI models understand, process, and simulate real-world physics.
• Contribute to the go-to-market strategy by building features that deliver unique value.
• Be the core "neuron" of your industry in a pioneering neural company.
Compensation & Benefits
💰 Experience-based salary
🎯 Performance-based bonus
🌍 Work remotely in a fully international, AI-first company
🚀 Join a pioneering neural organization model with real strategic ownership
📚 Access to training and conferences in your industry or tech sector
🧠 Long-term career growth in a company at the frontier of industry and AI
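As a small example of the from-scratch physics coding this role describes (illustrative only, not Neodustria's codebase), a 1-D heat-diffusion solver using the explicit FTCS finite-difference scheme fits in a dozen lines of NumPy:

```python
import numpy as np

def heat_1d(u0, alpha, dx, dt, steps):
    """Explicit (FTCS) finite-difference solver for u_t = alpha * u_xx
    with fixed (Dirichlet) boundary values taken from u0's endpoints.

    Stable only when r = alpha*dt/dx**2 <= 0.5; a minimal sketch,
    not a production solver.
    """
    r = alpha * dt / dx**2
    assert r <= 0.5, "unstable time step for the explicit scheme"
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(steps):
        # Central difference in space, forward Euler in time.
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u
```

The explicit scheme trades a strict time-step limit for simplicity; implicit schemes lift that limit at the cost of a linear solve per step.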
Posted 2 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Velotio: Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are a certified Great Place to Work® and recognized as one of the best companies to work for in India. We have provided full-stack product development for 110+ startups across the globe building products in the cloud-native, data engineering, B2B SaaS, IoT & Machine Learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products. We are looking for a highly skilled Senior Software Engineer (Gen AI) with strong expertise in designing, developing, and deploying Generative AI agents on AWS Cloud, focusing on high-scale, production-ready solutions. The ideal candidate should have hands-on experience with AI frameworks, cloud architecture, and application development, and must hold a valid AWS certification. This role involves working with cross-functional teams to build robust AI systems, mentoring development teams, and effectively communicating complex technical concepts to stakeholders. 
Requirements
Agent Development & Deployment
• Design and execute end-to-end deployment strategies for Generative AI agents on AWS Cloud infrastructure
• Architect high-scale, production-ready AI solutions capable of handling enterprise-level workloads
• Implement and optimize various AI frameworks (LangChain, CrewAI, AutoGen, LlamaIndex) for specific use cases
• Develop sophisticated prompt engineering techniques and fine-tuning strategies for optimal AI performance
High-Scale Architecture & Development
• Design and implement scalable, fault-tolerant architectures for AI applications handling millions of requests
• Build full-stack applications using modern development practices and cloud-native technologies
• Create robust microservices architectures with proper load balancing, auto-scaling, and disaster recovery
• Implement event-driven architectures and serverless patterns for optimal performance and cost efficiency
Technical Leadership & Implementation
• Develop comprehensive REST APIs for AI agent interactions with proper authentication, rate limiting, and monitoring
• Design and optimize database architectures for AI workloads, including vector databases and traditional RDBMS
• Implement Infrastructure as Code using Terraform for consistent and repeatable deployments
• Create automated CI/CD pipelines for seamless AI model deployment and updates
Communication & Mentoring
• Present complex technical concepts and AI solutions to executive stakeholders and technical teams
• Lead technical workshops and training sessions on GenAI best practices and implementation strategies
• Mentor development teams on AI integration, cloud architecture, and modern development practices
• Collaborate with cross-functional teams to align technical solutions with business objectives
Qualifications
• 5+ years of experience in cloud architecture and application development with a focus on scalable systems
• 1+ years of hands-on experience with Generative AI, LLMs, and AI agent frameworks
• Expert-level proficiency in both Python and Node.js with demonstrated production experience
• Strong understanding of REST API design, implementation, and best practices
• Comprehensive knowledge of database concepts including SQL, NoSQL, and vector databases
• Beginner to intermediate proficiency in Terraform for infrastructure automation
• Excellent communication skills with proven ability to present to technical and non-technical audiences
• Experience mentoring development teams and providing technical guidance
Technical Expertise
• AI Frameworks: Hands-on experience with LangChain, CrewAI, AutoGen, LlamaIndex, and similar frameworks
• Prompt Engineering: Advanced understanding of prompt techniques, few-shot learning, and chain-of-thought reasoning
• AWS Services: Deep knowledge of AWS services including SageMaker, Bedrock, Lambda, ECS, RDS, and CloudFormation
• High-Scale Architecture: Experience designing systems that handle high throughput and concurrent users
• Database Systems: Proficiency with PostgreSQL, MongoDB, Redis, and vector databases
• Development Tools: Experience with Docker, Kubernetes, Git, and modern development frameworks
Desired Skills & Experience:
• Domain Knowledge: Experience in Telco or Banking industries is highly valued
• AWS Certifications: Machine Learning Specialty certification
• Advanced Terraform: Experience with complex infrastructure automation and multi-environment deployments
• Security: Understanding of enterprise security patterns and compliance requirements
• Performance Optimization: Experience with system performance tuning and cost optimization
Benefits
Our Culture:
• We have an autonomous and empowered work culture encouraging individuals to take ownership and grow quickly
• Flat hierarchy with fast decision-making and a startup-oriented "get things done" culture
• A strong, fun & positive environment with regular celebrations of our success. We pride ourselves on creating an inclusive, diverse & authentic environment
At Velotio, we embrace diversity.
Inclusion is a priority for us, and we are eager to foster an environment where everyone feels valued. We welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation.
Posted 2 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary
We are seeking an experienced Machine Learning Engineer to design, build, and deploy production-grade models for demand forecasting, customer churn prediction, and inventory optimization. You'll work with large-scale transactional data (e.g., orders, customer behavior) to create robust systems that predict rental demand, identify at-risk customers, and manage inventory efficiently, including handling returns and refurbishments. This role is ideal for someone passionate about e-commerce/retail analytics and proficient in Python-based ML workflows.
Key Responsibilities
• Demand Prediction: Develop and implement time-series forecasting models (e.g., using Prophet, ARIMA, or LSTMs) to predict rental demand by product (SKU), category, and city. Incorporate features like seasonality, holidays, promotions, and external factors (e.g., weather, economic indicators) to achieve high accuracy.
• Churn Prediction: Build classification models (e.g., XGBoost, random forests) to predict customer churn based on subscription history, order patterns, and behavioral features. Use outputs to inform retention strategies and integrate with inventory models (e.g., estimating returns from churned users).
• Inventory Management: Design optimization models (e.g., using PuLP or linear programming) to manage stock levels, reorder points, and refurbishment cycles, leveraging demand and churn forecasts to minimize stockouts and overstock costs.
• End-to-End ML Pipeline: Create data pipelines (ETL) for ingesting and preprocessing order data (e.g., from CSV sources with timestamps, SKUs, cities). Feature engineering: generate 50-100+ features like lagged orders, customer tenure, day-of-week effects, and holiday flags.
• Model Deployment & Monitoring: Deploy models as APIs (e.g., using FastAPI, Docker, Kubernetes) for real-time predictions. Implement monitoring for model drift and retraining workflows.
• Conduct A/B testing and evaluate models using metrics like RMSE (for demand), AUC-ROC (for churn), and cost savings (for inventory).
• Scalability & Experimentation: Optimize models for large datasets (e.g., millions of orders) using cloud platforms (AWS/GCP). Experiment with advanced techniques like reinforcement learning for dynamic pricing tie-ins.
Required Qualifications
• Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
• Experience: 3-5+ years as an ML Engineer or similar role, with hands-on experience in retail/e-commerce analytics (e.g., demand forecasting, churn, inventory).
• Technical Skills:
• Proficiency in Python (pandas, NumPy, scikit-learn) and ML libraries (Prophet, XGBoost, TensorFlow/PyTorch).
• Time-series forecasting (ARIMA, Prophet) and optimization tools (PuLP, SciPy).
• Data pipelines (Airflow, Spark) and deployment (Docker, Kubernetes, AWS SageMaker).
• SQL for data querying and cloud computing (AWS/GCP/Azure).
• Soft Skills: Strong problem-solving, ability to work in a small team, and experience with Agile/Scrum methodologies.
• Domain Knowledge: Familiarity with subscription/rental models (e.g., handling returns, refurbishments) in e-commerce.
Preferred Qualifications
• Experience with reinforcement learning or advanced optimization for dynamic pricing.
• Knowledge of big data tools (e.g., Hadoop, Spark) for scaling models.
• Publications or projects in retail predictive analytics.
• Familiarity with RentoMojo-like platforms or the Indian e-commerce market.
Skills: data tools, Python, demand forecasting, inventory optimization, model deployment, SQL, data pipelines
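The lagged-orders, day-of-week, and holiday-flag features listed above can be sketched with pandas as follows, assuming a per-SKU daily orders table (the column names are hypothetical):

```python
import pandas as pd

def build_demand_features(orders: pd.DataFrame, holidays: set) -> pd.DataFrame:
    """Derive a few demand-forecasting features: lagged order counts per SKU,
    day-of-week, and holiday flags.

    `orders` is assumed to have columns ['date', 'sku', 'orders'],
    one row per SKU per day.
    """
    df = orders.sort_values(["sku", "date"]).copy()
    df["date"] = pd.to_datetime(df["date"])
    df["dow"] = df["date"].dt.dayofweek  # 0 = Monday
    df["is_holiday"] = df["date"].isin(pd.to_datetime(list(holidays))).astype(int)
    for lag in (1, 7):  # yesterday and the same weekday last week
        df[f"orders_lag_{lag}"] = df.groupby("sku")["orders"].shift(lag)
    return df
```

A production pipeline would extend this to promotion flags, customer tenure, and rolling means, and would guard against leaking future values into training rows.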
Posted 2 days ago
5.0 years
0 Lacs
India
Remote
Job Role: Senior AI Developer
Job Type: Full-Time
Work Mode: Remote
We’re seeking a Senior AI Developer with a strong background in building and deploying AI/ML-powered applications in production environments. This role is ideal for someone passionate about working with Large Language Models, Conversational AI, Agentic AI, and cutting-edge AI infrastructure.
Core Requirements
• 5+ years of software development experience, with 2–3 years in AI/ML applications.
• Expert-level Python skills and deep knowledge of the AI/data science ecosystem.
• LLM expertise: prompt engineering, RAG pipelines, and model fine-tuning.
• Agentic AI: designing multi-step autonomous agents capable of reasoning, tool use, and adaptive decision-making.
• Conversational AI: NLU, dialogue management, and integration with STT/TTS APIs (e.g., OpenAI, Azure Speech, ElevenLabs).
• Solid grasp of system design, API architecture, and building scalable, low-latency systems.
Preferred Skills
• Hands-on experience with LangChain or LangGraph for complex agent workflows.
• Experience with orchestration frameworks for autonomous agents and multi-agent collaboration.
• Familiarity with cloud infrastructure (AWS), Docker/Kubernetes, and CI/CD.
• Experience with vector databases (Pinecone, Qdrant, Milvus, etc.).
• Bonus: basic frontend development (React/Vue) and real-time voice streaming or STT model deployment (e.g., Whisper).
Key Responsibilities
• Design, build, and deploy autonomous AI agents capable of reasoning, planning, and executing multi-step tasks.
• Integrate Agentic AI workflows into production systems to automate complex processes and enhance decision-making.
• Implement tool-using agents that interact with APIs, databases, and external services in real time.
• Develop multi-agent collaboration systems for distributed problem-solving and task delegation.
• Build adaptive learning systems where agents improve performance based on feedback and historical data.
• Collaborate with product teams to identify business problems that can be solved using Agentic AI and LLM-based solutions.
• Optimize AI pipelines for performance, cost efficiency, and scalability.
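The retrieval step of the RAG pipelines mentioned in the requirements reduces to ranking document embeddings by similarity to a query embedding. A NumPy-only sketch with stubbed vectors (a real system would compute embeddings with a model and search a vector database such as Pinecone or Qdrant):

```python
import numpy as np

def retrieve(query_vec, doc_vecs, docs, k=2):
    """Top-k retrieval by cosine similarity: the 'R' in a RAG pipeline.

    query_vec: embedding of the user query.
    doc_vecs: one embedding row per document in `docs`.
    Returns the k best (document, score) pairs, highest score first.
    """
    q = np.asarray(query_vec, dtype=float)
    m = np.asarray(doc_vecs, dtype=float)
    # Cosine similarity between the query and every document row.
    sims = (m @ q) / (np.linalg.norm(m, axis=1) * np.linalg.norm(q) + 1e-12)
    top = np.argsort(-sims)[:k]
    return [(docs[i], float(sims[i])) for i in top]
```

The retrieved passages would then be placed in the LLM prompt (the "augmented generation" half of RAG).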
Posted 2 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Experience: 2.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Gurugram)
Placement Type: Full-time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Trademo)
What do you need for this opportunity?
Must have skills required: Python, Django, MongoDB, PostgreSQL
About Trademo
Trademo is a Global Supply Chain Intelligence SaaS company, headquartered in Palo Alto, US. Trademo collects public and private data on global trade transactions, sanctioned parties, trade tariffs, ESG and other events using its proprietary algorithms. Trademo analyzes and performs advanced data processing on billions of data points (50 TB+) using technologies like graph databases, vector databases, Elasticsearch, MongoDB, NLP and Machine Learning (LLMs) to build end-to-end visibility on global supply chains. Trademo's vision is to build a single source of truth on global supply chains for their different stakeholders: discovering new commerce opportunities, ensuring compliance with trade regulations, and automation for border security. Trademo stands out as one of the rarest Indian SaaS startups to secure 12.5 mn in seed funding. It was founded by Shalabh Singhal, a third-time tech entrepreneur and an alumnus of IIT BHU, CFA Institute USA, and Stanford GSB SEED. Trademo is backed by a remarkable team of leaders and entrepreneurs like Amit Singhal (Former Head of Search at Google), Sridhar Ramaswamy (CEO, Snowflake), and Neeraj Arora (MD, General Catalyst & Former CBO, WhatsApp Group).
Role: SDE 2 - Backend
Website: www.trademo.com
Location: Onsite - Gurgaon
What will you be doing here?
• Design, implement, and maintain scalable features across the stack using Django/Python (backend).
• Build, deploy, and monitor services on Azure, GCP, and IBM Cloud, with an emphasis on scalability, performance, and security.
• Design data models and write efficient queries using PostgreSQL, MongoDB, and SQL; ensure data integrity and performance.
• Build and maintain RESTful APIs and internal services to support platform integrations and data workflows.
• Contribute to CI/CD pipelines, infrastructure-as-code, and monitoring solutions for seamless deployment and observability.
• Debug complex issues and optimize backend logic for better user experience and system reliability.
• Collaborate with teams, share knowledge via tech talks, and promote tech and engineering best practices within the team.
Requirements
• B.Tech/M.Tech in Computer Science from Tier 1/2 colleges.
• Basic understanding of working within cloud infrastructure and cloud-native apps (AWS, Azure, etc.).
• 3+ years of hands-on experience in software engineering, preferably in product-based or SaaS companies.
• Deep experience with Python and Django for backend development.
• Solid understanding of SQL, PostgreSQL, and MongoDB, including query optimization and schema design.
• Sound understanding of RESTful APIs, microservices, containerization (Docker), and version control (Git).
• Familiarity with CI/CD practices, testing frameworks, and agile methodologies.
Desired Profile:
• A hard-working, humble disposition.
• Desire to make a strong impact on the lives of millions through your work.
• Capacity to communicate well with stakeholders as well as team members and be an effective interface between the Engineering and Product/Business teams.
• A quick thinker who can adapt to a fast-paced startup environment and work with minimum supervision.
What we offer:
At Trademo, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
• Parental leave - Maternity and Paternity
• Health Insurance
• Flexible Time Off
• Stock Options
How to apply for this opportunity?
Step 1: Click on Apply!
And register or log in on our portal.
Step 2: Complete the screening form & upload your updated resume.
Step 3: Increase your chances of getting shortlisted & meet the client for the interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement.
(Note: There are many more opportunities apart from this one on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Python Developer – Backend Engineering
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience Required: 1–4 Years
🧠 About Darwix AI
Darwix AI is one of India's fastest-growing enterprise AI startups. We're building a GenAI-powered conversational intelligence and real-time agent assist suite for omnichannel sales teams. Our platform powers live customer conversations for leading enterprises across India, MENA, and Southeast Asia. Backed by top-tier VCs and driven by a world-class team from IITs, IIMs, and BITS, Darwix AI is reshaping how revenue teams scale with data, automation, and real-time intelligence.
🎯 Role Overview
We are hiring a Python Developer to join our backend engineering team and contribute to building scalable APIs, robust data flows, and integrations with AI services. You'll work across backend services, databases, and cloud environments to support the rapid development and deployment of our GenAI solutions. This role is ideal for developers who are confident with Python and backend fundamentals, and are excited to grow within a high-performance team solving real-world problems with AI.
🔧 Key Responsibilities
Backend & API Development
Build and maintain RESTful APIs using FastAPI, Flask, or Django.
Design and implement backend modules to support features like agent assist, real-time transcription, and AI recommendations.
Integrate Python services with frontend and mobile clients via secure APIs.
Database & Data Operations
Work with MySQL and PostgreSQL for structured data storage, querying, and optimization.
Build and maintain ETL scripts and backend logic to support AI model inputs and outputs.
Set up background workers and cron jobs for processing sales calls, analytics events, and reports.
Integration with AI Systems
Connect backend systems with transcription engines (Whisper, Deepgram), vector DBs, and LLM APIs (OpenAI, Gemini).
Implement API wrappers, prompt routing logic, and GenAI orchestration services.
DevOps & Deployment
Use Git, GitHub, and CI/CD pipelines for version control and automated deployments.
Assist in maintaining cloud-hosted services (AWS EC2 preferred).
Debug and monitor deployed services for performance, reliability, and scaling.
✅ Required Skills & Qualifications
1–4 years of experience working as a Python developer in backend roles.
Proficiency in Python with experience using FastAPI, Flask, or Django.
Strong understanding of RESTful API design and integration practices.
Experience with MySQL or PostgreSQL; ability to write clean and efficient queries.
Hands-on experience with Git, GitHub, and basic server-side deployment workflows.
Familiarity with cloud platforms like AWS, DigitalOcean, or GCP.
Strong debugging, documentation, and code-structuring habits.
💡 Good to Have (Not Mandatory)
Exposure to AI/ML concepts, LLMs, or vector DBs like FAISS or Pinecone.
Experience working with audio data, transcription, or speech-to-text pipelines.
Familiarity with Docker, containerized deployments, and the Linux CLI.
Interest in real-time systems, API scalability, or prompt engineering.
🌟 You'll Excel In This Role If You
Enjoy working in fast-paced, high-ownership environments.
Can take product ideas from concept to deployment independently.
Want to work on cutting-edge AI products with real-world impact.
Are hungry to learn and grow fast in a high-caliber tech team.
💼 What We Offer
Competitive salary with performance-linked incentives.
ESOPs for long-term contributors.
An opportunity to work on core systems that power AI at scale.
High visibility within the company and direct access to founders.
A merit-driven, collaborative, and fast-moving work environment.
📩 How to Apply
Send your resume + GitHub/portfolio to people@darwix.ai
Subject Line: Python Developer – [Your Name]
Darwix AI
Redefining Sales with Generative AI | From India, for the World
Posted 2 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description
🚀 Job Title: AI Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2–6 Years
Level: Senior Level
🌐 About Darwix AI
Darwix AI is one of India's fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction, across voice, video, and chat, in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.
🧠 Role Overview
As the AI Engineer, you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously.
🔧 Key Responsibilities
1. AI Architecture & Model Development
Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval.
Build, fine-tune, and deploy STT models (Whisper, Wav2Vec 2.0) and diarization systems for speaker separation.
Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models.
2.
Real-Time Voice AI System Development
Design low-latency pipelines for capturing and processing audio in real time across multilingual environments.
Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching.
Develop asynchronous, event-driven architectures for voice processing and decision-making.
3. RAG & Knowledge Graph Pipelines
Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases.
Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect them to LangChain/LlamaIndex workflows.
Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings).
4. Fine-Tuning & Prompt Engineering
Fine-tune LLMs and foundational models using RLHF, SFT, and PEFT (e.g., LoRA) as needed.
Optimize prompts for summarization, categorization, tone analysis, objection handling, etc.
Perform few-shot and zero-shot evaluations for quality benchmarking.
5. Pipeline Optimization & MLOps
Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions.
Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation.
Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features.
6. Team Leadership & Cross-Functional Collaboration
Lead, mentor, and grow a high-performing AI engineering team.
Collaborate with backend, frontend, and product teams to build scalable production systems.
Participate in architectural and design decisions across AI, backend, and data workflows.
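The RAG responsibilities above reduce to three steps: chunk a knowledge base, embed the chunks, and return the chunks most similar to a query. As an illustrative sketch only (not Darwix AI's implementation), the stdlib-Python toy below substitutes a bag-of-words vector for a real embedding model; the chunk texts and function names are invented for the example:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real RAG pipeline would call an
    # embedding model (e.g. OpenAI or Hugging Face) here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank pre-chunked knowledge-base text by similarity to the query.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Refund policy: customers may return goods within 30 days.",
    "Pricing: the enterprise plan includes priority support.",
    "Shipping: orders dispatch within two business days.",
]
print(retrieve("what does the enterprise plan pricing include", chunks, k=1))
```

In production the retrieved chunks would be stuffed into an LLM prompt (the "generation" half of RAG), and the brute-force scan would be replaced by an approximate-nearest-neighbor index such as FAISS.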
🛠️ Key Technologies & Tools
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, Hugging Face Transformers
Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)
🎯 Requirements & Qualifications
👨‍💻 Experience
2–6 years of experience building and deploying AI/ML systems, with at least 2 years in NLP or voice technologies.
Proven track record of production deployment of ASR, STT, NLP, or GenAI models.
Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.
📚 Educational Background
Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top-100 universities).
⚙️ Technical Skills
Strong coding experience in Python and familiarity with FastAPI/Django.
Understanding of distributed architectures, memory management, and latency optimization.
Familiarity with transformer-based model architectures, training techniques, and data pipeline design.
💡 Bonus Experience
Worked on multilingual speech recognition and translation.
Experience deploying AI models on edge devices or in browsers.
Built or contributed to open-source ML/NLP projects.
Published papers or patents in voice, NLP, or deep learning domains.
🚀 What Success Looks Like in 6 Months
Lead the deployment of a real-time STT + diarization system for at least one enterprise client.
Deliver a high-accuracy nudge generation pipeline using RAG and summarization models.
Build an in-house knowledge indexing + vector DB framework integrated into the product.
Mentor 2–3 AI engineers and own execution across multiple modules.
Achieve <1 sec latency on the real-time voice-to-nudge pipeline, from capture to recommendation.
💼 What We Offer
Compensation: Competitive fixed salary + equity + performance-based bonuses
Impact: Ownership of key AI modules powering thousands of live enterprise conversations
Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
Culture: High-trust, outcome-first environment that celebrates execution and learning
Mentorship: Work directly with founders, ex-Microsoft and IIT-IIM-BITS alums, and top AI engineers
Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months
⚠️ This Role is NOT for Everyone
🚫 If you're looking for a slow, abstract research role, this is NOT for you.
🚫 If you're used to months of ideation before shipping, you won't enjoy our speed.
🚫 If you're not comfortable being hands-on and diving into scrappy builds, you may struggle.
✅ But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.
📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to:
📧 careers@cur8.in
Subject Line: Application – AI Engineer – [Your Name]
Include links to:
Any relevant open-source contributions
LLM/STT models you've fine-tuned or deployed
RAG pipelines you've worked on
🔍 Final Thought
This is not just a job. This is your opportunity to help build the world's most scalable AI sales intelligence platform, from India, for the world.
Posted 2 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary
System Vulnerability Analysts analyze vulnerabilities (from the CVE database), understand how each vulnerability works and how it can be exploited, and understand the mitigations that strengthen defenses against it. They recreate attack scenarios based on proofs of concept, and both analyze and defend against attacks on network infrastructure devices or systems (L4-L7 layers).
Requirements:
Experience with traditional wired networks and wireless transport over various protocols.
Expertise in networking protocols and architectures, with advanced traditional network security skills.
In-depth understanding of cyber and IT security risks, threats, and prevention measures.
In-depth understanding of networking and network security (network monitoring, protocols, PCAP).
Network security devices (firewalls, proxies, NIDS/NIPS, etc.).
Adversary tactics, techniques, and procedures, and related countermeasures.
Dynamic and static malware analysis techniques.
OS (Linux is required) and application installation and configuration.
Experience with one or more of: containers (Docker) and/or hypervisors (VMware or other).
Networking basics: interconnecting containers/VMs with dev and test VMs.
Experience with tools like Metasploit, TShark, and Wireshark.
Experience with Ruby scripting, Git repositories, and off-the-shelf products and tools.
Priority will be given to candidates with a security-specific certification (OSCP, CRTO, HTB CPTS, etc.).
Python automation skills are optional.
The CVEs (or other identifiers) will be available from Keysight's StrikeQ tool. CISA KEVs and OT/SCADA CVEs are the current priority, but this is subject to change and expansion.
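For context on the PCAP skills listed above: every classic libpcap capture file begins with a fixed 24-byte global header that tools like Wireshark and TShark read before any packet records. The stdlib-Python sketch below is illustrative only (it is not part of any Keysight tooling) and handles just the little-endian, microsecond-resolution variant; it builds a synthetic header rather than reading a real capture:

```python
import struct

# libpcap global header: magic, version major/minor, thiszone,
# sigfigs, snaplen, linktype (24 bytes, little-endian variant).
PCAP_HDR = struct.Struct("<IHHiIII")

def parse_pcap_header(data: bytes) -> dict:
    if len(data) < PCAP_HDR.size:
        raise ValueError("truncated pcap header")
    magic, major, minor, _tz, _sigfigs, snaplen, linktype = PCAP_HDR.unpack_from(data)
    if magic != 0xA1B2C3D4:
        raise ValueError("not a little-endian microsecond pcap file")
    return {"version": f"{major}.{minor}", "snaplen": snaplen, "linktype": linktype}

# Synthetic header: pcap format 2.4, snaplen 65535, linktype 1 (Ethernet).
hdr = PCAP_HDR.pack(0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
print(parse_pcap_header(hdr))
```

Real analysis work would of course use Wireshark, TShark, or a library like scapy rather than hand-rolled parsing, but knowing the on-disk layout helps when captures arrive corrupted or byte-swapped.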
Posted 2 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary
We are seeking a highly skilled Full Stack Engineer to join our AML (Anti-Money Laundering) technology team. The ideal candidate will have hands-on experience in developing scalable, secure web applications and microservices, integrating with third-party APIs, and deploying in both cloud and on-premise environments. The role involves collaboration with cross-functional teams, focusing on compliance solutions and system integration.
Mandatory Skills
Core Java (Java SE, OOP, exception handling)
Spring Boot & Spring Security
REST & SOAP web services
API authentication (JWT, OAuth2, API keys)
JPA/Hibernate
CI/CD (Jenkins, GitHub Actions)
Docker, Kubernetes
Cloud platforms (AWS, Azure, or GCP)
Key Responsibilities
Design and implement secure, scalable RESTful and SOAP web services.
Develop, integrate, and maintain applications with third-party and legacy systems.
Implement API key and token-based authentication.
Collaborate with DevOps teams for CI/CD and system monitoring.
Conduct code reviews and participate in architectural discussions.
Ensure application performance, security, and reliability through testing and optimization.
Adhere to SDLC, change management, and compliance standards.
Qualifications
Bachelor's degree in Information Technology or Computer Science.
Relevant certifications (e.g., Java, AWS, or DevOps) are a plus.
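On the token-based authentication requirement above: a JWT is just two base64url-encoded JSON segments plus an HMAC signature over them. The sketch below illustrates HS256 signing and verification in stdlib Python as a language-neutral outline; the posting's Java stack would typically use Spring Security or a library like jjwt instead, the secret and claims here are made up, and production code should always use a vetted library:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> dict:
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    # Constant-time comparison to avoid timing side channels.
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_jwt({"sub": "user-42", "role": "analyst"}, b"demo-secret")
print(verify_jwt(token, b"demo-secret"))
```

Note the sketch omits the `exp`/`iat` claim checks that any real verifier must also perform before trusting the payload.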
Technical Skills
Languages: Java, SQL
Frameworks: Spring Boot, Spring Security, JAX-RS, JAX-WS
Databases: MSSQL, MySQL, PostgreSQL
Tools: IntelliJ IDEA, Eclipse, VS Code, Postman, curl, JUnit, TestNG
DevOps: Git, GitHub/GitLab, Maven/Gradle, Swagger/OpenAPI, SoapUI
Middleware: JBoss EAP v8
Soft Skills
Strong analytical and troubleshooting skills
Excellent communication and teamwork
Attention to detail and a proactive mindset
Ability to work under pressure and manage priorities
Good to Have
Knowledge of the AML domain
Exposure to security standards and compliance in finance/banking
Familiarity with logging frameworks (SLF4J, Logback)
Work Experience
Minimum 5 years of development experience in enterprise environments.
Experience with microservices and secure enterprise API integrations.
Compensation & Benefits
Competitive salary and annual performance-based bonuses
Comprehensive health insurance and optional parental insurance
Optional retirement savings plans and tax savings plans
Key Result Areas (KRA)
Design and delivery of scalable, secure full stack applications.
Effective integration with external and legacy systems.
Adherence to coding, testing, and documentation standards.
Efficient CI/CD and deployment practices.
High-quality stakeholder and team collaboration.
Key Performance Indicators (KPI)
Code quality and review score (% adherence to standards)
Deployment success rate and frequency
Mean Time to Resolve (MTTR) for critical issues
Test coverage percentage
Timely delivery of assigned project milestones
Uptime and performance metrics of deployed services
Contact: hr@bigtappanalytics.com
Posted 2 days ago