4.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Medicine moves too slow. At Velsera, we are changing that. Velsera was formed in 2023 through the shared vision of Seven Bridges and Pierian, with a mission to accelerate the discovery, development, and delivery of life-changing insights.

Velsera provides software and professional services for:
- AI-powered multimodal data harmonization and analytics for drug discovery and development
- IVD development, validation, and regulatory approval
- Clinical NGS interpretation, reporting, and adoption

With our headquarters in Boston, MA, we are growing and expanding our teams located in different countries!

What will you do?
- Work closely with DevOps team leads and architects to contribute to automating product deployments, operational processes, and procedures.
- Work with industry-standard automation and configuration management tools.
- Regularly solicit feedback from team members and across departments.
- Own smaller features, from technical design to delivery.
- Evaluate tools, processes, and practices.
- Handle critical incidents by shadowing senior team members.
- Provide support and contribute to troubleshooting and remediation of production incidents and events related to all onboarded applications, AWS infrastructure, and CI/CD tools and processes.
- Contribute to cost optimization initiatives for infrastructure deployed on cloud platforms.
- Enable visibility into platform, application, and infrastructure health by implementing the right monitoring strategy.

What do you bring to the table?
- Very good knowledge of Linux/Unix system administration and internals
- Good knowledge of Bash
- Familiarity with Python/Go
- Working knowledge of network design and implementation
- Experience maintaining infrastructure on major cloud providers (AWS, GCP, Azure)
- Working experience with IaC tools, preferably CloudFormation templates, Terraform, Ansible
- Familiarity with monitoring tools, preferably Prometheus, CloudWatch, Grafana
- Experience with container management solutions such as AWS Elastic Kubernetes Service, Amazon Elastic Container Service, etc.
- Experience working with NoSQL/SQL databases
- Experience with the ELK stack
- Familiarity with orchestration
- Familiarity with microservice architecture
- Experience configuring and setting up automated CI/CD pipelines with tools like Jenkins and AWS CI/CD services
- 4-5 years of experience as a DevOps/Systems/Software Engineer
- Bachelor's or master's degree in computer science or equivalent
- Fluent English language skills, written and verbal
Posted 1 day ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We're Hiring | Full Stack Developer (Java/Kotlin/Angular/Kafka) - Full Time
Location: Onsite
Working Hours: Until US EST noon
Experience: 4-6 Years
Notice Period: 0 to 15 Days
Budget: 19 LPA

Are you a passionate Full Stack Developer who thrives on building scalable web applications and real-time event-driven systems? Join our dynamic engineering team to work on impactful projects using cutting-edge technologies like Java, Kotlin, Spring Boot, Angular, and Apache Kafka.

Key Responsibilities:
- Develop robust web applications with Java, Kotlin, Spring Boot, and Angular (v2+).
- Build and maintain RESTful APIs and microservices.
- Design real-time, event-driven solutions using Apache Kafka (a minimal sketch follows this listing).
- Collaborate across UI/UX, QA, DevOps, and Product teams.
- Participate in code reviews, Agile ceremonies, and CI/CD practices.
- Troubleshoot, optimize performance, and ensure security compliance.

Required Skills:
- 4-6 years of hands-on full stack development experience.
- Strong in Java, Kotlin, Spring Boot, Angular, and Kafka.
- Experience with cloud deployments (Azure/GCP) and microservices.
- Proficient with SQL/NoSQL, ORMs (Hibernate/JPA), and Git.
- Knowledge of Docker/Kubernetes, CI/CD pipelines, and Agile methodologies.
- Excellent communication and problem-solving skills.

Apply here: rajesh@reveilletechnologies.com
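The posting above targets Java/Kotlin with Spring Boot, but the event-driven Kafka pattern it describes can be sketched compactly in Python with the kafka-python client. This is a minimal illustration only; the broker address, topic name, and event fields are assumptions, not details from the posting.

```python
# Minimal producer/consumer sketch with kafka-python (pip install kafka-python).
import json
from kafka import KafkaProducer, KafkaConsumer

# Publish a JSON-encoded domain event to an assumed "order-events" topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("order-events", {"orderId": 42, "status": "CREATED"})
producer.flush()

# A consumer-group member reacts to each event as it arrives.
consumer = KafkaConsumer(
    "order-events",
    bootstrap_servers="localhost:9092",
    group_id="order-service",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for event in consumer:
    print(event.value)  # e.g., trigger downstream processing
    break  # demo only: stop after one message
```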
Posted 1 day ago
0 years
0 Lacs
Tamil Nadu, India
On-site
Summary: We are seeking a skilled Cloud Project Engineer with experience in Azure cloud migrations, database migration, IaaS deployments, AVD setups, disaster recovery (ASR), and hands-on expertise with Azure App Services. The ideal candidate should be able to deploy App Gateway, CDN, and Azure Front Door solutions and understand landing zones to design and implement end-to-end architecture. The role may require cross-skilling in other cloud services (AWS/GCP), working on M365 requirements, and international travel for project deployments.

Key Responsibilities:
- Lead on-prem to Azure migrations and Azure database migration projects.
- Deploy and manage Azure IaaS services and AVD setups.
- Implement Azure Site Recovery (ASR) for disaster recovery.
- Deploy and manage Azure App Services, App Gateway, CDN, and Azure Front Door for web applications.
- Understand and design landing zones and deliver full end-to-end cloud architecture solutions.
- Support Azure DevOps and work on M365 requirements.
- Use automation tools and scripting (PowerShell, Python).
- Manage Azure PaaS deployments and FSLogix profile migration.

Required Skills:
- Hands-on experience in Azure migrations and IaaS deployments.
- Experience with App Gateway, CDN, and Azure Front Door.
- Knowledge of landing zones and end-to-end Azure architecture design.
- Basic scripting (Python, PowerShell) and automation (Terraform, Ansible).
- Knowledge of Azure DevOps and cross-skilling in AWS/GCP.

Preferred Qualifications:
- Azure certifications such as Azure Administrator Associate, Azure Solutions Architect, or Azure DevOps Engineer.
- Familiarity with Terraform, PowerShell scripting, and Ansible for automation.
- Experience in cross-platform cloud integration (AWS, GCP).
- Knowledge of security best practices and tools in Azure.

Soft Skills:
- Excellent communication and collaboration skills.
- Strong problem-solving abilities and a proactive approach.
- Ability to work in a fast-paced, dynamic environment.
- Willingness to learn and adapt to new technologies.
Posted 1 day ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Get to know Okta

Okta is The World's Identity Company. We free everyone to safely use any technology, anywhere, on any device or app. Our flexible and neutral products, Okta Platform and Auth0 Platform, provide secure access, authentication, and automation, placing identity at the core of business security and growth.

At Okta, we celebrate a variety of perspectives and experiences. We are not looking for someone who checks every single box - we're looking for lifelong learners and people who can make us better with their unique experiences.

Join our team! We're building a world where Identity belongs to you.

We are looking for a Staff Software Engineer who is passionate about writing the tools to integrate, support, and build large-scale, high-demand software in a fast-paced agile environment. You will share our passion for test-driven development, continuous integration, and automation to produce frequent high-quality releases. Our engineering team is fast, innovative, and flexible, with a weekly release cycle and individual ownership. We expect great things from our engineers and reward them with stimulating new projects, emerging technologies, and the chance to be part of a company that is changing the cloud computing landscape forever. You will get an opportunity to scale our infrastructure to the next generation. Our scale is already huge: we run tens of thousands of tests for every commit automatically. This brings speed challenges, such as reducing compute time from days to a few minutes.

Responsibilities:
- Be part of the team that builds, maintains, and improves our highly automated build, release, and testing infrastructure. Scripting, tools-building, and automation are paramount at Okta Engineering; everybody automates. You will create and code tools for internal use to support continuous delivery.
- Team up with Development, QA, and Ops to continuously innovate and enhance our build and automation infrastructure.
- Collaborate with peers and stakeholders to create new tools, processes, and technology. We use the latest technology from AWS, and you can experiment with, recommend, and implement new technologies in our build and CI system.
- Work with internal customers to roll out projects and processes, monitor adoption, collect feedback, and fine-tune the project to respond to internal customers' needs.

Required Knowledge, Skills, and Abilities:
- Experience developing continuous delivery pipelines for a diverse set of projects using Java, Jenkins, AWS, Docker, Python, Ruby, Bash, and more
- Solid understanding of CI/CD release pipelines
- Exposure to cloud infrastructures such as AWS, GCP, or Azure
- Experience working with Gradle, Bazel, Artifactory, Docker registries, npm registries
- Experience with AWS, its services, and its supporting tools (cost control, reporting, environment management)
- Ability to coordinate cross-functional work toward task completion
- Experience with Kubernetes is a plus

Education and Training: B.S. in CS or equivalent

Okta is an Equal Opportunity Employer.

What you can look forward to as a Full-Time Okta employee!
- Amazing Benefits
- Making Social Impact
- Developing Talent and Fostering Connection + Community at Okta

Okta cultivates a dynamic work environment, providing the best tools, technology, and benefits to empower our employees to work productively in a setting that best and uniquely suits their needs. Each organization is unique in the degree of flexibility and mobility in which they work, so that all employees are enabled to be their most creative and successful versions of themselves, regardless of where they live. Find your place at Okta today! https://www.okta.com/company/careers/. Some roles may require travel to one of our office locations for in-person onboarding.

Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and conviction records, consistent with applicable laws. If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding, please use this Form to request an accommodation.

Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Privacy Policy at https://www.okta.com/privacy-policy/.
Posted 1 day ago
9.0 years
0 Lacs
India
Remote
Job Title: Senior Node.js Developer
Experience: 9+ Years
Location: Hybrid (PAN India)
Employment Type: Full-time

Job Summary: We are seeking a highly skilled Senior Node.js Developer with 9+ years of experience to design, build, and maintain scalable backend services. The ideal candidate will have deep expertise in Node.js, API development, microservices, and cloud platforms to drive high-performance applications.

Key Responsibilities:
- Develop and maintain high-performance, scalable Node.js applications
- Design and optimize RESTful APIs and microservices
- Work with SQL/NoSQL databases and ensure data integrity
- Implement cloud-based solutions (AWS/Azure/GCP)
- Optimize applications for speed and scalability
- Collaborate with DevOps on CI/CD pipelines and deployment automation

Required Skills:
- 9+ years in software development, with 5+ years in Node.js
- Expertise in JavaScript/TypeScript
- Strong knowledge of backend architecture and API design
- Experience with cloud platforms (AWS/Azure/GCP)
- Familiarity with Docker, Kubernetes, and CI/CD tools
- Understanding of security and performance best practices

Nice to Have:
- Experience with GraphQL and serverless architecture
- Knowledge of message brokers (Kafka, RabbitMQ)

Why Join Us?
- Work on cutting-edge backend systems
- Flexible remote work opportunities
- Collaborative, growth-oriented environment
Posted 1 day ago
5.0 years
0 Lacs
India
Remote
Job Title: Senior Data Engineer
Experience: 5+ Years
Location: Remote
Contract Duration: Short Term
Work Time: IST Shift

Job Description: We are seeking a skilled and experienced Senior Data Engineer to develop scalable and optimized data pipelines using the Databricks Lakehouse platform. The role requires proficiency in Apache Spark, PySpark, cloud data services (AWS, Azure, GCP), and solid programming knowledge in Python and Java. The engineer will collaborate with cross-functional teams to design and deliver high-performing data solutions.

Responsibilities:

Data Pipeline Development
- Build efficient ETL/ELT workflows using Databricks and Spark for batch and streaming data
- Utilize Delta Lake and Unity Catalog for structured data management
- Optimize Spark jobs using tuning techniques such as caching, partitioning, and serialization (a short PySpark sketch follows this listing)

Cloud-Based Implementation
- Develop and deploy data workflows on AWS (S3, EMR, Glue), Azure (ADLS, ADF, Synapse), and/or GCP (GCS, Dataflow, BigQuery)
- Manage and optimize data storage, access control, and orchestration using native cloud tools
- Implement data ingestion and querying with Databricks Auto Loader and SQL warehousing

Programming and Automation
- Write clean, reusable, production-grade code in Python and Java
- Automate workflows using orchestration tools like Airflow, ADF, or Cloud Composer
- Implement testing, logging, and monitoring mechanisms

Collaboration and Support
- Work closely with data analysts, scientists, and business teams to meet data requirements
- Support and troubleshoot production workflows
- Document solutions, maintain version control, and follow Agile/Scrum methodologies

Required Skills:

Technical Skills
- Databricks: experience with notebooks, cluster management, Delta Lake, Unity Catalog, and job orchestration
- Spark: proficient in transformations, joins, window functions, and tuning
- Programming: strong in PySpark and Java, with data validation and error handling expertise
- Cloud: experience with AWS, Azure, or GCP data services and security frameworks
- Tools: familiarity with Git, CI/CD, Docker (preferred), and data monitoring tools

Experience
- 5-8 years in data engineering or backend development
- Minimum 1-2 years of hands-on experience with Databricks and Spark
- Experience with large-scale data migration, processing, or analytics projects

Certifications (Optional but Preferred)
- Databricks Certified Data Engineer Associate

Working Conditions
- Full-time remote work with availability during IST hours
- Occasional on-site presence may be required during client visits
- No regular travel required
- On-call support expected during deployment phases
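As referenced in the responsibilities above, here is a brief, hypothetical PySpark sketch of the named tuning techniques: repartitioning on a join key, caching a reused DataFrame, and a broadcast join. Paths, column names, and partition counts are illustrative assumptions.

```python
# Hypothetical Spark tuning sketch; Delta output assumes a Databricks/Delta runtime.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-tuning-demo").getOrCreate()

events = spark.read.parquet("s3://bucket/raw/events/")    # large fact table (assumed path)
users = spark.read.parquet("s3://bucket/raw/dim_users/")  # small dimension table

# Repartition on the join key so shuffle partitions stay balanced.
events = events.repartition(200, "user_id")

# Cache a DataFrame that several downstream aggregations reuse.
events.cache()

# Broadcast the small dimension table to avoid a full shuffle join.
enriched = events.join(F.broadcast(users), "user_id")

daily = (enriched
         .groupBy("user_id", F.to_date("event_ts").alias("day"))
         .agg(F.count("*").alias("event_count")))

daily.write.format("delta").mode("overwrite").save("s3://bucket/gold/daily/")
```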
Posted 1 day ago
2.0 years
0 Lacs
India
Remote
About Company: Xsell Resources specializes in the placement of contract, contract-to-direct, and permanent IT professionals, including project/program managers, business analysts, QA/test specialists, developers/programmers, and infrastructure professionals. Our dedicated recruiters are either career IT recruiters or former IT professionals. Our recruiter organizational structure is based on specific disciplines: every recruiter or group is organized into a niche discipline spanning project/program managers, business analysts, quality assurance, developers, DBAs, infrastructure and telecommunication professionals, and more. Our recruiters participate in discussion forums, user groups, and other candidate networking channels to further understand their discipline and engage only qualified candidates.

Job Title: FinOps Consultant
Location: Remote
Expected Work Hours: 2 PM to 11:30 PM IST (candidates should be comfortable working UK shifts)
Interview Mode: Virtual, two rounds (60-minute technical + 30-minute technical and cultural discussion)
Client: Xsell Resources
Experience: 7+ years
Job Type: Contract to hire
Notice Period: Immediate joiners only
Note: Candidates must have at least 2 years of experience in the healthcare industry.

Roles and Responsibilities:
- Focus on cloud cost optimization and financial management.
- Monitor cloud spending, identify cost-saving opportunities, collaborate with engineering and finance teams, and implement cost-saving strategies.
- Bring expertise in cloud computing and data analysis capabilities.
- Experience with GCP, Tableau, Apptio, Grafana, and PowerApps.
- Excellent soft skills.
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Engineer - Databricks, Delta Live Tables, Data Pipelines
Location: Bhopal / Hyderabad / Pune (On-site)
Experience Required: 5+ Years
Employment Type: Full-Time

Job Summary: We are seeking a skilled and experienced Data Engineer with a strong background in designing and building data pipelines using Databricks and Delta Live Tables. The ideal candidate should have hands-on experience in managing large-scale data engineering workloads and building scalable, reliable data solutions in cloud environments.

Key Responsibilities:
- Design, develop, and manage scalable and efficient data pipelines using Databricks and Delta Live Tables (a minimal sketch follows this listing).
- Work with structured and unstructured data to enable analytics and reporting use cases.
- Implement data ingestion, transformation, and cleansing processes.
- Collaborate with data architects, analysts, and data scientists to ensure data quality and integrity.
- Monitor data pipelines and troubleshoot issues to ensure high availability and performance.
- Optimize queries and data flows to reduce costs and increase efficiency.
- Ensure best practices in data security, governance, and compliance.
- Document architecture, processes, and standards.

Required Skills:
- Minimum 5 years of hands-on experience in data engineering.
- Proficient in Apache Spark, Databricks, Delta Lake, and Delta Live Tables.
- Strong programming skills in Python or Scala.
- Experience with cloud platforms such as Azure, AWS, or GCP.
- Proficient in SQL for data manipulation and analysis.
- Experience with ETL/ELT pipelines, data wrangling, and workflow orchestration tools (e.g., Airflow, ADF).
- Understanding of data warehousing, big data ecosystems, and data modeling concepts.
- Familiarity with CI/CD processes in a data engineering context.

Nice to Have:
- Experience with real-time data processing using tools like Kafka or Kinesis.
- Familiarity with machine learning model deployment in data pipelines.
- Experience working in an Agile environment.
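As referenced above, a minimal Delta Live Tables sketch of the kind of pipeline this role describes. It assumes a Databricks DLT pipeline context (where `spark` and the `dlt` module are provided by the runtime); the landing path and quality rule are illustrative assumptions.

```python
# Minimal Delta Live Tables sketch (runs inside a Databricks DLT pipeline).
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested incrementally with Auto Loader")
def orders_bronze():
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("s3://bucket/landing/orders/"))  # assumed landing path

@dlt.table(comment="Cleansed orders")
@dlt.expect_or_drop("valid_amount", "amount > 0")  # drop rows failing the rule
def orders_silver():
    return (dlt.read_stream("orders_bronze")
            .withColumn("order_date", F.to_date("order_ts")))
```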
Posted 1 day ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Why GreedyGame?
GreedyGame has been a pioneer in app growth and monetization since 2013. As a certified Google Publishing Partner, we serve millions of users daily. Now scaling aggressively, we're building next-gen ad-tech products that challenge the status quo. If you're passionate about backend systems that support massive scale, join us.

Role Overview
As a Backend Developer on our Core Infrastructure team, you'll build and maintain high-performance backend systems: APIs, microservices, and real-time data pipelines. You'll work closely with product, data, and mobile teams, driving end-to-end delivery and ensuring our platform remains reliable, sustainable, and scalable.

What You'll Do
- Build scalable microservices in Go that power our SDK, ad delivery platform, and analytics.
- Design and maintain real-time systems, message streams, and batching pipelines.
- Collaborate cross-functionally with mobile, analytics, and DevOps teams to launch features.
- Optimize systems for performance, maintainability, and cost-efficiency.
- Embed quality engineering: unit tests, code reviews, observability, documentation.
- Contribute to architecture decisions and technical roadmaps for long-term impact.
- Mentor and coach junior engineers, sharing best practices.

What You'll Bring
- 3+ years of production experience building backend services in Golang.
- Proficiency in microservices, RESTful API design, and distributed system patterns.
- Hands-on experience with message queues (Kafka, RabbitMQ).
- Comfort with SQL/NoSQL databases: PostgreSQL, Redis, or big-data stores.
- Familiarity with cloud platforms (AWS or GCP), Docker, and CI/CD pipelines.
- Strong system design, data structures, and algorithm skills.
- Desire to learn ad-tech or analytics systems is a plus.

What Makes You a Great Fit
- You're a strategic thinker who cares about long-term maintainability.
- You thrive in ambiguous, fast-moving environments, adapting quickly.
- You're a communicator, able to explain complex technical ideas clearly.
- You see beyond code: you care about user experience, product impact, and business value.

Benefits & Perks
- Direct impact on products used daily by millions (check out Pubscale).
- Ownership from day 1, shaping architecture and strategy.
- Learning stipend for books, courses, or conferences.
- Flexible hybrid work: work from our Bangalore office and remotely.
- Health and wellness support: insurance, paid time off, parental leave.
- A growth-minded culture built on collaboration, inclusivity, and curiosity.

Hiring Process
1. Intro Call - meet with Talent Acquisition to explore your background (~30 min)
2. Technical Screen - backend-focused problem-solving and system design (~60 min)
3. CTO Round - alignment on product vision, team culture, and technical fit (~45 min)
4. Offer & Onboarding - aim to complete within 7 working days

Skills: golang, restful apis, kafka, analytics, redis, aws, ci/cd, postgresql, rabbitmq, architecture, microservices, go (golang), gcp, distributed systems, docker
Posted 1 day ago
12.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About IDfy
IDfy is an integrated identity platform offering products and solutions for KYC, KYB, background verifications, risk assessment, and digital onboarding. We establish trust while delivering a frictionless experience for you, your employees, customers, and partners. Only IDfy combines enterprise-grade technology with business understanding and has the widest breadth of offerings in the industry. With more than 12 years of experience and 2 million verifications per day, we are pioneers in this industry. Our clients include HDFC Bank, IndusInd Bank, Zomato, Amazon, PhonePe, Paytm, HUL, and many others. We have successfully raised $27M from Elev8 Venture Partners, KB Investments, and Tenacity Ventures!

We work fully onsite on all days of the week from our office in Andheri, Mumbai.

We are the perfect match if you:
- Have 15+ years of experience, along with experience with cloud-based security management/IDS/IPS/SIEM tools, security vulnerability assessments, encryption, etc.
- Have significant knowledge of security best practices for client-server product architectures, focusing predominantly on cloud-based server development.
- Are familiar with information security frameworks/standards (i.e., CIS, NIST, SOC 2, PCI, GDPR, CCPA, etc.).
- Hold CISM, CISSP, or other security certifications, and cloud security certifications on AWS, GCP, or Azure.
- Are a lifelong learner, always looking to stay up to date with the latest attack vectors, vulnerabilities, remediation and protection paradigms, etc.
- Are a self-motivated, proactive, driven individual.
- Have strong interpersonal, oral, and written communication skills.
- Can work and collaborate in a fast-paced environment across multiple development centres in India.

Here's how your day would look:
- Primarily leading the IDfy security, compliance, and privacy practice and function: ensuring the protection of data, infrastructure, and applications by continuously enhancing and monitoring the robust security framework that has been established, driving compliance with global regulations, and fostering a culture of security-first product development.
- Defining and owning clear guardrails, alerts, and Security as Code (SaC) deployments to provide 24/7 protection from malicious traffic, vulnerabilities, and other attack vectors.
- Reviewing and analyzing vulnerability data to identify security risks to the organization's network, infrastructure, and applications, and determining whether any reported vulnerabilities are false positives.
- Building and maintaining monitoring, auditing, and reporting frameworks that produce artifacts supporting security and compliance needs.
- Developing processes that produce artifacts supporting security and compliance requirements.
- Working with other infrastructure, DevOps, and application engineers to understand product and business needs.
- Participating in enterprise compliance audits as a security SME.
- Mentoring team members and co-workers on security best practices.

What's it like working at IDfy?
We build products that detect and prevent fraud. At IDfy, you will apply your skills to stay one step ahead of fraudsters. You will be mind-mapping fraudsters' modus operandi, predicting the evolution of fraud techniques, and designing solutions to prevent new and emerging fraud. At IDfy, you will work on the entire end-to-end solution rather than a small cog of a giant wheel. Thanks to our problem-centric approach, in which we find the right technology to solve a problem rather than the other way around, you will always be working on the latest technologies. We work hard and party hard. There are weekly sessions on emerging technologies. Work weeks are usually capped off with board games, poker, karaoke, and other fun activities.
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our world is transforming, and PTC is leading the way. Our software brings the physical and digital worlds together, enabling companies to improve operations, create better products, and empower people in all aspects of their business.

Our people make all the difference in our success. Today, we are a global team of nearly 7,000, and our main objective is to create opportunities for our team members to explore, learn, and grow - all while seeing their ideas come to life and celebrating the differences that make us who we are and the work we do possible.

Role Overview
We are seeking a talented and detail-oriented QA Automation Engineer to join our team in building robust automated testing solutions for our SaaS microservices platform. You will play a key role in ensuring the quality, performance, and reliability of our services deployed on Kubernetes, working in a collaborative Agile environment.

Key Responsibilities
- Design and implement automated tests for RESTful APIs using Rest-Assured and other modern frameworks (a sketch of this style of test follows this listing).
- Build and maintain CI/CD pipelines using tools such as GitLab CI, Jenkins, or equivalent.
- Execute automated tests in Kubernetes environments and integrate them into the deployment lifecycle.
- Monitor application health and test metrics using observability tools like Datadog.
- Collaborate with cross-functional teams to adopt and implement new testing strategies and technologies.
- Contribute to the evolution of QA standards, best practices, and technical direction.
- Validate backend data and perform SQL-based operations to ensure data integrity and consistency.

Preferred Skills, Knowledge, and Experience
- Strong understanding of test automation for microservices and distributed systems.
- Proficient in API automation testing using Rest-Assured, Postman, or similar tools.
- Solid experience with Java or any OOP language.
- Familiarity with modern QA methodologies, including contract testing and the test pyramid.
- Hands-on experience with Rest-Assured, TestNG, and Cucumber.
- Strong SQL skills and experience working with relational databases for test validation.
- Agile/Scrum development experience with strong collaboration and communication skills.
- Passion for writing clean, maintainable, and scalable test code.

Nice to Have
- Experience with BDD frameworks such as Cucumber, and build tools like Maven or Gradle.
- Proven experience building and maintaining CI/CD pipelines using GitLab, Jenkins, or similar tools.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and container orchestration using Kubernetes.
- Experience with web applications and frontend testing frameworks.

Life at PTC is about more than working with today's most cutting-edge technologies to transform the physical world. It's about showing up as you are and working alongside some of today's most talented industry leaders to transform the world around you. If you share our passion for problem-solving through innovation, you'll likely become just as passionate about the PTC experience as we are. Are you ready to explore your next career move with us?

We respect the privacy rights of individuals and are committed to handling Personal Information responsibly and in accordance with all applicable privacy and data protection laws. Review our Privacy Policy here.
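The posting centers on Rest-Assured (Java); as a rough illustration of the same style of API check referenced above, here is a hedged pytest + requests sketch in Python. The endpoint, response fields, and expected statuses are assumptions, not part of the posting.

```python
# Hypothetical API tests in the spirit of the Rest-Assured checks named above.
import requests

BASE_URL = "https://api.example.com"  # assumed service under test

def test_get_order_returns_expected_shape():
    resp = requests.get(f"{BASE_URL}/orders/42", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    assert body["id"] == 42
    assert body["status"] in {"CREATED", "SHIPPED", "DELIVERED"}

def test_unknown_order_returns_404():
    resp = requests.get(f"{BASE_URL}/orders/999999", timeout=10)
    assert resp.status_code == 404
```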
Posted 1 day ago
0.0 - 1.0 years
0 Lacs
Visakhapatnam, Andhra Pradesh
On-site
Responsibilities:
- Design, develop, and maintain Python-based applications and services.
- Collaborate with cross-functional teams to define, design, and develop new features.
- Write clean, maintainable, and testable code following best practices.
- Troubleshoot, debug, and optimize existing software.
- Participate in code reviews and technical discussions.

Skills and Requirements:
- Proven experience as a Python Developer (2 years preferred).
- Strong understanding of Python frameworks such as Django, Flask, or FastAPI.
- Experience with RESTful APIs and databases (PostgreSQL, MySQL, etc.).
- Experience with cloud platforms (Azure, AWS, or GCP).
- Exposure to machine learning or data processing libraries (Pandas, NumPy, etc.).
- Bachelor's degree in Data Science, Computer Science, Information Systems, or a related field.
- Excellent verbal and written communication.

Good to Have (Optional):
- Familiarity with front-end technologies (HTML, CSS, JavaScript) is a plus.
- Understanding of and experience with Gen AI implementations.
- Experience with LangChain, vector DBs, embeddings, or related frameworks.

Job Type: Full-time
Pay: ₹25,000.00 - ₹35,000.00 per month
Benefits: Health insurance, paid time off, Provident Fund
Schedule: Fixed shift
Ability to commute/relocate: Visakhapatnam, Andhra Pradesh: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): Did you complete any certification on Python? If yes, list the certifications.
Education: Bachelor's (Preferred)
Experience: Python: 1 year (Required)
Expected Start Date: 21/07/2025
Posted 1 day ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Design and develop comprehensive automation frameworks for UI, API, microservices, integration, and performance testing; contribute to, debug, troubleshoot, and optimize existing automation scripts and frameworks
- Write high-quality, maintainable, and scalable automation scripts using Java, Cucumber, and Selenium, and design end-to-end automation frameworks for API testing with tools like Postman, Rest Assured, or equivalent
- Participate in agile ceremonies and deliver stories/features according to the schedule
- Collaborate with cross-functional teams (Business, Product, Development, DevOps, QA) to identify and automate testing processes across the software lifecycle and create comprehensive test plans
- At times lead testing teams or work independently, fully owning project delivery, including both manual and automated testing tasks; work effectively with onshore and offshore partners to ensure seamless integration and execution of testing activities
- Integrate automation frameworks with CI/CD pipelines to enable continuous testing and faster releases
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- 5+ years of hands-on experience in developing and maintaining automation frameworks and tools
- Experience in API testing and automation using tools like Rest Assured, Postman, or equivalent
- Experience with CI/CD tools such as Jenkins, GitLab CI/CD, or GitHub Actions
- Experience writing efficient SQL and NoSQL queries for database interactions
- Experience with cloud platforms (AWS or GCP), containerization (e.g., Docker/Kubernetes), and microservices architecture
- Experience working with cross-functional teams, including developers, business analysts, and product managers
- Expertise in implementing test automation and end-to-end automation frameworks using Selenium, JUnit, Mockito, Cucumber, Cypress, Playwright, Appium, or similar
- Solid understanding of testing methodologies (e.g., TDD, BDD, data-driven testing)
- Solid knowledge of Java, including advanced concepts like multithreading, collections, and exception handling
- Experience with event-driven architecture and microservices
- Solid analytical skills to identify, analyze, and solve complex technical problems

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location, and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 1 day ago
5.0 years
0 Lacs
New Delhi, Delhi, India
Remote
Job Title: Enterprise Sales Executive - Cloud Services (Azure, AWS, GCP)
Location: Remote
Experience Required: 5+ years

Job Summary: We are seeking a dynamic and results-driven Enterprise Sales Executive with proven experience in field sales for cloud services (Azure, AWS, and/or GCP). The ideal candidate will be responsible for identifying, developing, and closing opportunities with mid-size and large enterprises. You will work closely with technical and pre-sales teams to deliver tailored cloud solutions that meet clients' business objectives.

Key Responsibilities:
- Drive revenue growth by acquiring new enterprise clients and expanding cloud business within existing accounts.
- Identify and target high-value customers in the assigned territory through field visits, meetings, and networking events.
- Lead consultative sales engagements to understand client challenges and recommend Azure, AWS, or GCP solutions accordingly.
- Manage the complete sales lifecycle: prospecting, needs analysis, proposal development, negotiations, and deal closure.
- Collaborate with cloud architects, technical consultants, and solution engineers to deliver tailored demos and proposals.
- Maintain a deep understanding of cloud technologies, pricing models, and emerging trends.
- Maintain strong CRM hygiene and accurately forecast opportunities in the sales pipeline.
- Represent the company at industry events, seminars, and client briefings.

Required Skills and Qualifications:
- Bachelor's degree in Business, IT, or a related field; MBA preferred.
- 5+ years of enterprise field sales experience in IT or cloud infrastructure.
- Hands-on experience selling Azure, AWS, or GCP cloud solutions to mid-size and large enterprises.
- Excellent communication, negotiation, and presentation skills.

Contact: Nikhil.vats@multiversetech.com
Posted 1 day ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Internovo Ventures is a cutting-edge fintech startup driving innovation through its four flagship products: Indirow, Indivest, Indibook, and Indihomes. Our solutions empower businesses and individuals with seamless financial operations, smarter investments, and streamlined bookkeeping.

Job Summary: We are seeking a passionate and motivated fresher Full Stack Web Developer to join our growing team. The ideal candidate will possess a solid foundation in both front-end and back-end development, a keen eye for design, and a strong desire to learn and grow in a fast-paced environment.

Key Responsibilities:
- Design and develop responsive websites and web applications using HTML, CSS, JavaScript, and TypeScript.
- Build and maintain front-end interfaces using React.js / Next.js.
- Develop scalable backend services and APIs using Node.js / Express.js.
- Manage databases such as MySQL, PostgreSQL, and MongoDB, ensuring optimal performance and data security.
- Implement website SEO optimization and integrate analytics tools.
- Create and translate UI/UX designs from Figma into functional web pages.
- Ensure cross-browser compatibility and mobile responsiveness across all web platforms.
- Utilize Git/GitHub for version control and collaborative development.
- Deploy and manage applications on cloud platforms like AWS, Azure, or Google Cloud.
- Work with basic Linux commands for server-side operations and deployment.
- Participate in code reviews and follow clean-code principles and industry best practices.
- Collaborate effectively with marketing and product managers.

Required Technical Skills:
- HTML5, CSS3, JavaScript, TypeScript
- Responsive web design
- Figma / web design tools
- React.js, Next.js
- Node.js, Express.js
- Database management (MySQL, PostgreSQL, MongoDB)
- Git, GitHub
- Basic knowledge of the Linux CLI
- Familiarity with cloud platforms (AWS, Azure, GCP)
- Understanding of SEO optimization and web analytics
- MERN stack experience
- Basic knowledge of Python

Soft Skills:
- Strong verbal and written communication skills
- Problem-solving mindset with a keen eye for detail
- Creative and innovative thinker
- Ability to adapt quickly in a dynamic, fast-paced environment
- Eager to learn and open to taking on new responsibilities
- Team player with excellent collaboration skills
- Ability to understand and align with business needs and goals

Preferred / Bonus Skills:
- Experience with or knowledge of AI integration
- Exposure or contributions to live projects

Qualifications: Bachelor's degree in Engineering/Technology (B.E./B.Tech) or equivalent in Computer Science, IT, or a related field. Freshers are welcome. The ideal candidate should be from Mumbai.
Posted 1 day ago
12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
This role is for one of Weekday's clients.
Salary range: Rs 30,00,000 - Rs 45,00,000 (i.e., INR 30-45 LPA)
Min Experience: 12 years
Location: Pune
Job Type: Full-time

We are seeking a highly accomplished Principal Solution Architect to spearhead the design, development, and implementation of complex cloud-based data solutions. This role is critical in shaping end-to-end data strategies, leading modernization initiatives, and delivering cutting-edge solutions that integrate GenAI and LLM technologies across multiple cloud platforms such as AWS, Azure, and GCP. The ideal candidate brings deep technical expertise, strategic leadership, and a proven track record in enterprise data architecture.

Key Responsibilities:
- Solution Design & Architecture: architect and lead the development of scalable, secure, and high-performance data platforms, including data lakes, warehouses, data mesh, and streaming pipelines, across cloud environments (AWS, Azure, GCP).
- Client Engagement & Pre-Sales: collaborate with clients to understand their business needs, translate requirements into viable technical solutions, and support pre-sales efforts through proposal development, solution presentations, and technical demos.
- Data Strategy & Innovation: champion cloud data modernization and AI-driven strategies by incorporating cloud-native services, big data tools, GenAI, and LLMs to unlock transformative value.
- Cross-Industry Impact: apply best practices in data architecture across domains like BFSI, Retail, Manufacturing, and Supply Chain to ensure scalable and industry-relevant solutions.

Required Qualifications & Skills:
- Experience: minimum 15 years in IT with significant exposure to data architecture, data engineering, and enterprise-grade solution design. Experience in a principal or lead architect capacity is essential.
- Cloud Expertise:
  - Azure: proficiency in Microsoft Fabric, Data Lake, Power BI, Data Factory, Azure Purview; good understanding of Azure Service Foundry, Agentic AI, and Copilot.
  - GCP: knowledge of BigQuery, Vertex AI, Gemini, and related services.
  - AWS: familiarity with core services for building secure and scalable data platforms.
- Data & AI Leadership: demonstrated ability to design data solutions that integrate advanced AI/ML components, including generative AI and large language models (LLMs).
- Communication & Leadership: strong presentation, stakeholder management, and team leadership capabilities; able to lead multi-disciplinary teams and engage with executive-level clients.
- Problem-Solving & Strategic Thinking: ability to address complex business problems with innovative and scalable data solutions.
- Education: Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field.

Preferred Qualifications:
- Certifications in AWS, Azure, GCP, Snowflake, or Databricks.
- Exposure to Agentic AI, intelligent automation, and emerging AI trends.

Key Skills: Cloud Architecture | Data Engineering | Azure | GCP | AWS | Data Lakes | Data Warehousing | GenAI | LLMs | Solution Design | Pre-Sales | AI/ML Integration | Big Data | Client Engagement | Strategic Leadership
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Medicine moves too slow. At Velsera, we are changing that. Velsera was formed in 2023 through the shared vision of Seven Bridges and Pierian, with a mission to accelerate the discovery, development, and delivery of life-changing insights.

Velsera provides software and professional services for:
- AI-powered multimodal data harmonization and analytics for drug discovery and development
- IVD development, validation, and regulatory approval
- Clinical NGS interpretation, reporting, and adoption

With our headquarters in Boston, MA, we are growing and expanding our teams located in different countries!

What will you do?
- Train, fine-tune, and deploy Large Language Models (LLMs) to solve real-world problems effectively
- Design, implement, and optimize AI/ML pipelines to support model development, evaluation, and deployment
- Collaborate with architects, software engineers, and product teams to integrate AI solutions into applications
- Ensure model performance, scalability, and efficiency through continuous experimentation and improvement
- Work on LLM optimization techniques, including Retrieval-Augmented Generation (RAG), prompt tuning, etc.
- Manage and automate the infrastructure necessary for AI/ML workloads while keeping the focus on model development
- Work with DevOps teams to ensure smooth deployment and monitoring of AI models in production
- Stay updated on the latest advancements in AI, LLMs, and deep learning to drive innovation

What do you bring to the table?
- Strong experience in training, fine-tuning, and deploying LLMs using frameworks like PyTorch, TensorFlow, or Hugging Face Transformers
- Hands-on experience in developing and optimizing AI/ML pipelines, from data preprocessing to model inference
- Solid programming skills in Python and familiarity with libraries like NumPy, Pandas, and scikit-learn
- Strong understanding of tokenization, embeddings, and prompt engineering for LLM-based applications
- Hands-on experience in building and optimizing RAG pipelines using vector databases (FAISS, Pinecone, Weaviate, or ChromaDB); a minimal retrieval sketch follows this listing
- Experience with cloud-based AI infrastructure (AWS, GCP, or Azure) and containerization technologies (Docker, Kubernetes)
- Experience in model monitoring, A/B testing, and performance optimization in a production environment
- Familiarity with MLOps best practices and tools (Kubeflow, MLflow, or similar)
- Ability to balance hands-on AI development with necessary infrastructure management
- Strong problem-solving skills, teamwork, and a passion for building AI-driven solutions
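For the RAG-pipeline experience described above, here is a minimal retrieval sketch using sentence-transformers with FAISS. The model name and documents are illustrative assumptions; in a full pipeline the retrieved context would be injected into the LLM prompt.

```python
# Minimal retrieval sketch: sentence-transformers + FAISS
# (pip install sentence-transformers faiss-cpu).
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Velsera provides clinical NGS interpretation software.",
    "FAISS performs efficient similarity search over dense vectors.",
]
model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(docs, normalize_embeddings=True)  # float32, unit-norm rows

index = faiss.IndexFlatIP(emb.shape[1])  # inner product == cosine on unit vectors
index.add(emb)

query = model.encode(["what does FAISS do?"], normalize_embeddings=True)
scores, ids = index.search(query, 1)     # top-1 hit
context = docs[ids[0][0]]                # would be passed to the LLM prompt
print(context, float(scores[0][0]))
```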
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it is a digital engineering and IT services company helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: Python UI Developer
Key Skills: Python, TypeScript, JavaScript
Location: Hyderabad
Experience: 5-6 Years
Education Qualification: Any Graduation
Work Mode: 5 days work from office
Employment Type: Contract to Hire
Notice Period: Immediate - 10 Days

Job Description:
- Strong hands-on experience in Python development
- UI expertise in at least one of the following: TypeScript, JavaScript, or Angular (mandatory)

Good to Have:
- Experience with Google Cloud Platform (GCP)
Posted 1 day ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About company: Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services. Headquartered in Bengaluru, it has gross revenue of ₹222.1 billion, a global workforce of 234,054, and is listed on NASDAQ. It operates in over 60 countries and serves clients across various industries, including financial services, healthcare, manufacturing, retail, and telecommunications. The company consolidated its cloud, data, analytics, AI, and related businesses under the tech services business line. It has major delivery centers in India, including cities like Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata, and Noida.

Job Title: Java Fullstack Developer
Location: Hyderabad / Pan India (Hybrid)
Experience: 6-9 yrs
Job Type: Contract to hire
Notice Period: Immediate joiners

Mandatory Skills:
- Java full stack with React
- AWS and GCP cloud
- Kafka, SQL databases
Posted 1 day ago
15.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the role: We are looking for a dynamic and results-driven Pre-Sales Lead to drive our Google Cloud pre-sales strategy. This leadership role involves managing a team of pre-sales engineers and collaborating closely with sales to identify and win new business. The ideal candidate will possess deep technical expertise in Google Cloud Platform (GCP), with a strong focus on Application Modernization, Machine Learning (ML), and Artificial Intelligence (AI).

Role: Presales Leader
Location: Pune/Noida
Experience: 15+ years
Job Type: Full-time Employment

What you'll do:
- Pre-Sales Pipeline Growth: proactively identify and qualify new business opportunities to contribute to an increase in pre-sales pipeline value each quarter. Partner with sales to grow the pipeline and close strategic deals.
- Customer Satisfaction Score: develop innovative GCP-based solutions tailored to customer needs. Ensure high-quality engagements that raise the customer satisfaction score.
- POC Success Rate: conduct and manage Proofs of Concept (POCs) with a good success rate.
- Response Time to RFPs/RFIs: ensure timely and accurate responses to Requests for Proposals (RFPs) and Requests for Information (RFIs) within A hours/days of receipt. Ensure high-quality delivery of pre-sales engagements.
- Team Productivity Metrics: lead and mentor a team of pre-sales specialists. Contribute to and monitor team productivity through defined metrics such as the number of demos delivered and proposals authored.

Expertise You'll Bring:
- Bachelor's degree in Computer Science, Engineering, or a related technical field.
- 15+ years of experience in pre-sales, technical sales, or solution architecture, with a strong focus on cloud technologies.
- In-depth expertise in Google Cloud Platform (GCP) and its core services.
- Proven ability to design and present solutions in Application Modernization, Data Analytics, Machine Learning, and AI.
- Excellent communication skills, with the ability to convey complex technical concepts to diverse audiences.
- Strong leadership background, with experience mentoring and managing technical teams.
- Exceptional problem-solving and analytical abilities.
- Strong interpersonal skills and a customer-centric approach.

Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
Posted 1 day ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join DAZN - The Ultimate Sports Streaming Experience!

DAZN is revolutionizing the way fans experience sports with cutting-edge streaming technology. As we continue to innovate, we are looking for a Solutions Architect - Streaming & OTT to design and optimize high-performance video streaming architectures. If you have 10+ years of experience in streaming/OTT solutions, encoding, CDN distribution, and playback services, we'd love to hear from you!

Interested? Apply now by sharing your updated resume, current and expected CTC, and notice period. Let's shape the future of sports streaming together!

Job Title: Solutions Architect - Streaming & OTT
Location: Hyderabad

Role Overview: We are looking for an experienced Solutions Architect - Streaming & OTT to design, optimize, and support scalable, high-performance video streaming architectures. The ideal candidate will have a deep understanding of end-to-end streaming workflows, encoding/transcoding pipelines, packaging, CDN distribution, and playback services, while ensuring seamless content delivery across a variety of devices and platforms.

Key Responsibilities:
- Architect and implement end-to-end streaming solutions, ensuring high availability, low latency, and scalability.
- Define technical roadmaps for streaming infrastructure, aligning with business and operational goals.
- Optimize video encoding/transcoding pipelines for live and VOD content, ensuring optimal compression efficiency without quality loss.
- Design and implement adaptive bitrate (ABR) streaming strategies to optimize playback across different devices and network conditions.
- Architect and integrate multi-CDN strategies, ensuring resilience, redundancy, and global distribution efficiency.
- Design and oversee OTT packaging workflows (HLS, DASH, CMAF) and DRM integration for content security.
- Provide third-line technical support for streaming technologies, debugging complex playback, latency, and delivery issues.
- Work closely with backend, player, and DevOps teams to ensure seamless integration of playback services and analytics solutions.
- Stay ahead of emerging trends and advancements in streaming technology, contributing to strategic initiatives and innovation.

Technical Expertise Required:
- 10+ years of experience in the streaming/OTT industry, with a focus on solution architecture and design.
- Proven track record in designing and deploying scalable, high-performance streaming solutions.
- Hands-on expertise in video encoding/transcoding (FFmpeg, AWS Media Services, Elemental, Harmonic, etc.).
- Strong knowledge of OTT packaging standards (HLS, MPEG-DASH, CMAF) and DRM solutions (Widevine, FairPlay, PlayReady).
- Experience working with content delivery networks (Akamai, CloudFront, Fastly, etc.) and designing multi-CDN architectures.
- Deep understanding of video player technologies, ABR streaming, and low-latency playback optimizations.
- Experience designing and maintaining backend playback services with APIs for content discovery, recommendations, and analytics.
- Familiarity with cloud-based media workflows (AWS, GCP, Azure) and Infrastructure as Code (IaC) methodologies.
- Proficiency in networking, HTTP streaming protocols (RTMP, HLS, DASH), and caching strategies for optimal content delivery.
- Experience with monitoring and troubleshooting tools (QoE/QoS analytics, log aggregators, and network diagnostics).
- Bonus: prior experience in live sports streaming with expertise in ultra-low-latency streaming (WebRTC, LL-HLS, CMAF-CTE).
Posted 1 day ago
5.0 years
0 Lacs
India
Remote
Client Type: US Client
Location: Remote

About the Role
We're creating a new certification: Google AI Ecosystem Architect (Gemini & DeepMind) – Subject Matter Expert. This course is designed for technical learners who want to understand and apply the capabilities of Google's Gemini models and DeepMind technologies to build powerful, multimodal AI applications.

We're looking for a Subject Matter Expert (SME) who can help shape this course from the ground up. You'll work closely with a team of learning experience designers, writers, and other collaborators to ensure the course is technically accurate, industry-relevant, and instructionally sound.

Responsibilities
As the SME, you'll partner with learning experience designers and content developers to:
Translate real-world Gemini and DeepMind applications into accessible, hands-on learning for technical professionals.
Guide the creation of labs and projects that allow learners to build pipelines for image-text fusion, deploy Gemini APIs, and experiment with DeepMind's reinforcement learning libraries.
Contribute technical depth across activities, from high-level course structure down to example code, diagrams, voiceover scripts, and data pipelines.
Ensure all content reflects current, accurate usage of Google's multimodal tools and services.
Be available during U.S. business hours to support project milestones, reviews, and content feedback.

This role is an excellent fit for professionals with deep experience in AI/ML, Google Cloud, and a strong familiarity with multimodal systems and the DeepMind ecosystem.

Essential Tools & Platforms
A successful SME in this role will demonstrate fluency and hands-on experience with the following:

Google Cloud Platform (GCP)
Vertex AI (particularly Gemini integration, model tuning, and multimodal deployment)
Cloud Functions, Cloud Run (for inference endpoints)
BigQuery and Cloud Storage (for handling large image-text datasets)
AI Platform Notebooks or Colab Pro

Google DeepMind Technologies
JAX and Haiku (for neural network modeling and research-grade experimentation)
DeepMind Control Suite or DeepMind Lab (for reinforcement learning demonstrations)
RLax or TF-Agents (for building and modifying RL pipelines)

AI/ML & Multimodal Tooling
Gemini APIs and SDKs (image-text fusion, prompt engineering, output formatting)
TensorFlow 2.x and PyTorch (for model interoperability)
Label Studio, Cloud Vision API (for annotation and image-text preprocessing)

Data Science & MLOps
DVC or MLflow (for dataset and model versioning)
Apache Beam or Dataflow (for processing multimodal input streams)
TensorBoard or Weights & Biases (for visualization)

Content Authoring & Collaboration
GitHub or Cloud Source Repositories
Google Docs, Sheets, Slides
Screen recording tools like Loom or OBS Studio

Required skills and experience:
Demonstrated hands-on experience building, deploying, and maintaining sophisticated AI-powered applications using Gemini APIs/SDKs within the Google Cloud ecosystem, especially in Firebase Studio and VS Code.
Proficiency in designing and implementing agent-like application patterns, including multi-turn conversational flows, state management, and complex prompting strategies (e.g., Chain-of-Thought, few-shot, zero-shot).
Experience integrating Gemini with Google Cloud services (Firestore, Cloud Functions, App Hosting) and external APIs for robust, production-ready solutions.
Proven ability to engineer applications that process, integrate, and generate content across multiple modalities (text, images, audio, video, code) using Gemini's native multimodal capabilities (see the sketch after this posting).
Skilled in building and orchestrating pipelines for multimodal data handling, synchronization, and complex interaction patterns within application logic.
Experience designing and implementing production-grade RAG systems, including integration with vector databases (e.g., Pinecone, ChromaDB) and engineering data pipelines for indexing and retrieval.
Ability to manage agent state, memory, and persistence for multi-turn and long-running interactions.
Proficiency leveraging AI-assisted coding features in Firebase Studio (chat, inline code, command execution) and using App Prototyping agents or frameworks like Genkit for rapid prototyping and structuring agentic logic.
Strong command of modern development workflows, including Git/GitHub, code reviews, and collaborative development practices.
Experience designing scalable, fault-tolerant deployment architectures for multimodal and agentic AI applications using Firebase App Hosting, Cloud Run, or similar serverless/cloud platforms.
Advanced MLOps skills, including monitoring, logging, alerting, and versioning for generative AI systems and agents.
Deep understanding of security best practices: prompt injection mitigation (across modalities), secure API key management, authentication/authorization, and data privacy.
Demonstrated ability to engineer for responsible AI, including bias detection, fairness, transparency, and implementation of safety mechanisms in agentic and multimodal applications.
Experience addressing ethical challenges in the deployment and operation of advanced AI systems.
Proven success designing, reviewing, and delivering advanced, project-based curriculum and hands-on labs for experienced software developers and engineers.
Ability to translate complex engineering concepts (RAG, multimodal integration, agentic patterns, MLOps, security, responsible AI) into clear, actionable learning materials and real-world projects.
5+ years of professional experience in AI-powered application development, with a focus on generative and multimodal AI.
Strong programming skills in Python and JavaScript/TypeScript; experience with modern frameworks, cloud-native development, and deploying machine learning pipelines.
Bachelor's or Master's degree in Computer Science, Data Engineering, AI, or a related technical field.
Ability to explain advanced technical concepts (e.g., fusion transformers, multimodal embeddings, RAG workflows) to learners in an accessible way.
Ability to work independently, take ownership of deliverables, and collaborate closely with designers and project managers.

Preferred:
Experience with Google DeepMind tools (JAX, Haiku, RLax, DeepMind Control Suite/Lab) and reinforcement learning pipelines.
Familiarity with open data formats (Delta, Parquet, Iceberg) and scalable data engineering practices.
Prior contributions to open-source AI projects or technical community engagement.
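To ground the multimodal requirement above, here is a minimal sketch of an image-plus-text request through the google-generativeai Python SDK, the kind of image-text fusion a course lab might walk through. The API key, image file, and model name are placeholders; model identifiers change over time, so treat the specific name as an assumption.

```python
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")            # hypothetical API key
model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption

# One request mixing an image and a text instruction: the SDK accepts a list
# of parts, letting the model reason over both modalities together.
image = PIL.Image.open("chart.png")                # hypothetical local image
response = model.generate_content(
    [image, "Describe the trend in this chart in two sentences."]
)
print(response.text)
```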
Posted 1 day ago
15.0 years
0 Lacs
Nagpur, Maharashtra, India
On-site
Job Description
Job Title: Tech Lead (AI/ML) – Machine Learning & Generative AI
Location: Nagpur (Hybrid / On-site)
Experience: 8–15 years
Employment Type: Full-time

Job Summary:
We are seeking a highly experienced Python Developer with a strong background in traditional Machine Learning and growing proficiency in Generative AI to join our AI Engineering team. This role is ideal for professionals who have delivered scalable ML solutions and are now expanding into LLM-based architectures, prompt engineering, and GenAI productization. You'll be working at the forefront of applied AI, driving both model performance and business impact across diverse use cases.

Key Responsibilities:
Design and develop ML-powered solutions for use cases in classification, regression, recommendation, and NLP.
Build and operationalize GenAI solutions, including fine-tuning, prompt design, and RAG implementations using models such as GPT, LLaMA, Claude, or Gemini.
Develop and maintain FastAPI-based services that expose AI models through secure, scalable APIs (a minimal example follows this posting).
Lead data modeling, transformation, and end-to-end ML pipelines, from feature engineering to deployment.
Integrate with relational (MySQL) and vector databases (e.g., ChromaDB, FAISS, Weaviate) to support semantic search, embedding stores, and LLM contexts.
Mentor junior team members and review code, models, and system designs for robustness and maintainability.
Collaborate with product, data science, and infrastructure teams to translate business needs into AI capabilities.
Optimize model and API performance, ensuring high availability, security, and scalability in production environments.

Core Skills & Experience:
Strong Python programming skills with 5+ years of applied ML/AI experience.
Demonstrated experience building and deploying models using TensorFlow, PyTorch, scikit-learn, or similar libraries.
Practical knowledge of LLMs and GenAI frameworks, including Hugging Face, OpenAI, or custom transformer stacks.
Proficiency in REST API design using FastAPI and securing APIs in production environments.
Deep understanding of MySQL (query performance, schema design, transactions).
Hands-on experience with vector databases and embeddings for search, retrieval, and recommendation systems.
Strong foundation in software engineering practices: version control (Git), testing, CI/CD.

Preferred/Bonus Experience:
Deployment of AI solutions on cloud platforms (AWS, GCP, Azure).
Familiarity with MLOps tools (MLflow, Airflow, DVC, SageMaker, Vertex AI).
Experience with Docker, Kubernetes, and container orchestration.
Understanding of prompt engineering, tokenization, LangChain, or multi-agent orchestration frameworks.
Exposure to enterprise-grade AI applications in BFSI, healthcare, or regulated industries is a plus.

What We Offer:
Opportunity to work on a cutting-edge AI stack integrating both classical ML and advanced GenAI.
High autonomy and influence in architecting real-world AI solutions.
A dynamic and collaborative environment focused on continuous learning and innovation.
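As a concrete anchor for the FastAPI responsibility above, below is a minimal sketch of a prediction endpoint wrapping a pre-trained scikit-learn model. The model file name and feature schema are hypothetical; a production service would add authentication, input limits, and model versioning.

```python
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("model.pkl")  # hypothetical pre-trained scikit-learn model

class Features(BaseModel):
    values: list[float]  # illustrative flat feature vector

@app.post("/predict")
def predict(features: Features) -> dict:
    # scikit-learn expects a 2D array: one row per sample
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}
```

Run locally with, for example, `uvicorn main:app --reload`, then POST a JSON body like `{"values": [1.0, 2.0, 3.0]}` to /predict.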
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services. Headquartered in Bengaluru, it has gross revenue of ₹222.1 billion, a global workforce of 234,054, is listed on NASDAQ, operates in over 60 countries, and serves clients across various industries, including financial services, healthcare, manufacturing, retail, and telecommunications. The company consolidated its cloud, data, analytics, AI, and related businesses under the tech services business line. Its major delivery centers in India include Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata, and Noida.

Job Title: Mainframe Testing
Location: Chennai, Pune (Hybrid)
Experience: 6+ years
Job Type: Contract
Notice Period: Immediate joiners

Mandatory Skills: Mainframe Testing, z/OS Mainframe, JCL, DB2, IBM Utilities, TSO/ISPF commands
Good-to-have Technical Skills: Cloud Infrastructure Testing (AWS/Azure/GCP), Test Environment Management, Service

Job Description:
Should have 6+ years of experience in the testing life cycle process, including creation of test cases/data/execution as per requirements/design.
Should have good knowledge of editing or creating JCL to submit test batch jobs (see the submission sketch after this posting).
Should be aware of TSO/ISPF commands on the mainframe.
Good knowledge of analyzing logs in the spool for abended jobs and providing the root cause of the issue for further analysis to the development/support team.
Work with IT developers to analyze COBOL programs to diagnose issues and identify input and output files.
Able to edit mainframe files using layouts/copybooks with File-AID to modify data according to testing requirements.
Verify the database in DB2 or output files to confirm the outputs.
Prepare test data according to test requirements.
Experienced in the STLC (Software Testing Life Cycle) or Agile methodology; prepare test closure reports/sign-off for testing.

Key Responsibilities:
Create test strategy/test plan documents to define the scope and approach of testing (applicable for Band B3).
Analyze requirements and identify test scenarios/design test cases.
Prepare test data/test JCL according to test scenarios.
Execute test cases by submitting jobs and analyzing the results.
Report issues and coordinate with the development/support team to fix errors.
Participate in capability-building and upskilling programs; contribute to training programs in the practice.
Support practice associates in respective domains with relevant expertise.
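One way to automate the "edit JCL and submit test batch jobs" step is the z/OS FTP server's JES interface, sketched below in Python. The host, credentials, and file names are placeholders, and this assumes the site's FTP-to-JES gateway is enabled; many shops submit via TSO/ISPF or a scheduler instead.

```python
from ftplib import FTP

# Hypothetical connection details for a test LPAR.
ftp = FTP("mvs.example.com")
ftp.login(user="TSOUSER", passwd="secret")

# Switch the session into JES mode: a STOR now submits JCL as a batch job
# instead of writing a dataset (IBM-documented SITE FILETYPE=JES behavior).
ftp.sendcmd("SITE FILETYPE=JES")

with open("testjob.jcl", "rb") as jcl:
    reply = ftp.storlines("STOR TESTJOB.JCL", jcl)
print(reply)  # the server reply normally includes the assigned JES job id

ftp.quit()
```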
Posted 1 day ago
6.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role Summary: Looking for a data scientist/engineer with strong experience in AI/ML, data collection and preprocessing, estimation, and architecture creation.

Responsibilities:
Model Development: Design and implement ML models to tackle complex business challenges.
Data Preprocessing: Clean, preprocess, and analyze large datasets for meaningful insights and model features.
Model Training: Train and fine-tune ML models using various techniques, including deep learning and ensemble methods.
Evaluation and Optimization: Assess model performance; optimize for accuracy, efficiency, and scalability.
Deployment: Deploy ML models in production and monitor performance for reliability.
Collaboration: Work with data scientists, engineers, and stakeholders to integrate ML solutions.
Research: Stay updated on ML/AI advancements and contribute to internal knowledge.
Documentation: Maintain comprehensive documentation for all ML models and processes.

Qualification: Bachelor's or Master's in Computer Science, Machine Learning, Data Science, or a related field, with 6-10 years of experience.

Desirable Skills:

Must Have:
1. Experience in time-series forecasting, regression models, and classification models (a short worked sketch follows this posting).
2. Python, R, and data analysis.
3. Large-scale data handling with Pandas, NumPy, and Matplotlib.
4. Version control: Git or equivalent.
5. ML frameworks: hands-on experience with TensorFlow, PyTorch, scikit-learn, and Keras.
6. Good knowledge of cloud platforms (AWS/Azure/GCP), Docker, and Kubernetes.
7. Model selection, evaluation, deployment, data collection and preprocessing, feature engineering, and estimation.

Good to Have:
Experience with Big Data and analytics using technologies like Hadoop, Spark, etc.
Additional experience or knowledge in AI/ML technologies beyond the mentioned frameworks.
BFSI and banking domain experience.

Base Location: Noida, but flexible to travel; coming to the office twice a week is mandatory.
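For the time-series forecasting skill listed above, here is a small self-contained sketch of the common lag-feature approach: turn a series into a supervised regression problem and hold out the most recent points for evaluation. The synthetic data and model choice are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Synthetic series standing in for real business data.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 200)) + rng.normal(0, 0.1, 200)

# Lag features: predict y_t from y_{t-3}, y_{t-2}, y_{t-1}.
n_lags = 3
X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
y = series[n_lags:]

# Time-ordered split: never shuffle when evaluating a forecaster.
split = int(0.8 * len(y))
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:split], y[:split])
print("holdout MAE:", mean_absolute_error(y[split:], model.predict(X[split:])))
```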
Posted 1 day ago
The job market for Google Cloud Platform (GCP) professionals in India is growing rapidly as more companies move to cloud-based solutions. GCP offers a wide range of services and tools that help businesses manage their infrastructure, data, and applications in the cloud, creating strong demand for skilled professionals who can work with GCP effectively.
The average salary range for GCP professionals in India varies based on experience and job role. Entry-level positions can expect a salary range of INR 5-8 lakhs per annum, while experienced professionals can earn anywhere from INR 12-25 lakhs per annum.
Typically, a career in GCP progresses from a Junior Developer to a Senior Developer, then to a Tech Lead position. As professionals gain more experience and expertise in GCP, they can move into roles such as Cloud Architect, Cloud Consultant, or Cloud Engineer.
In addition to GCP, professionals in this field are often expected to have skills in:
Cloud computing concepts
Programming languages such as Python, Java, or Go
DevOps tools and practices
Networking and security concepts
Data analytics and machine learning
As the demand for GCP professionals continues to rise in India, now is the perfect time to upskill and pursue a career in this field. By mastering GCP and related skills, you can unlock numerous opportunities and build a successful career in cloud computing. Prepare well, showcase your expertise confidently, and land your dream job in the thriving GCP job market in India.