6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description

Oracle Customer Success Services

Building on the mindset that "Who knows Oracle better than Oracle?", Oracle Customer Success Services assists customers with some of the most cutting-edge applications and solutions, drawing on more than two decades of expertise in developing mission-critical solutions for enterprise customers and combining it with modern technology to give our customers the speed, flexibility, resiliency, and security they need to optimize their investment, minimize risk, and achieve more. The business was established with an entrepreneurial mindset and supports a vibrant, imaginative, and highly diverse workplace. We need your help to turn it into a premier engineering hub that prioritizes quality.

Why?

Oracle Customer Success Services Engineering is responsible for designing, building, and managing cutting-edge solutions, services, and core platforms to support the managed cloud business, including but not limited to Oracle Cloud Infrastructure (OCI), Oracle Cloud Applications (SaaS), and Oracle Enterprise Applications. This position is on the CSS Architecture Team, and we are searching for the finest and brightest technologists as we embark on the road of cloud-native digital transformation. We operate with a garage culture, rely on cutting-edge technology in our daily work, and provide a highly innovative, creative, and experimental work environment. We prefer to innovate and move quickly, with a strong emphasis on scalability and robustness. We need your help to build a top-tier engineering team with significant influence.

What?

As a Principal Data Science & AIML Engineer within the CSS CDO Architecture & Platform team, you'll lead the design and build of scalable, distributed, resilient services that provide artificial intelligence and machine learning capabilities on OCI and Oracle Cloud Applications for the business.
You will be responsible for the design and development of machine learning systems and applications, ensuring they meet the needs of our clients and align with the company's strategic objectives. The ideal candidate will have extensive experience in machine learning algorithms, model creation and evaluation, data engineering and data processing for large-scale distributed systems, and software development methodologies. We strongly believe in ownership and in challenging the status quo. We expect you to bring critical thinking and long-term design impact while building solutions and products, defining system integrations, and addressing cross-cutting concerns. Being part of the architecture function also gives you the unique ability to establish new, future-proof processes and design patterns while building new services or products. As a thought leader, you will own and lead the complete SDLC, from architecture design through development, test, operational readiness, and platform SRE.

Responsibilities

As a member of the architecture team, you will be in charge of designing software products, services, and platforms, as well as creating, testing, and managing the systems and applications we build in line with our architecture patterns and standards. As a core member of the Architecture Chapter, you will advocate for the adoption of software architecture and design patterns among cross-functional teams both within and outside of engineering. You will also act as a mentor and advisor to the team(s) within the software and AIML domain. As we push for digital transformation throughout the organization, you will be expected to think creatively and to optimize and harmonize business processes.

Core Responsibilities

Lead the development of machine learning models and their integration with the full-stack software ecosystem, drive data engineering, and contribute to the design strategy.
- Collaborate with product managers and development teams to identify software requirements and define project scopes.
- Develop and maintain technical documentation, including architecture diagrams, design specifications, and system diagrams.
- Analyze and recommend new software technologies and platforms to keep the company ahead of the curve.
- Work with development teams to ensure software projects are delivered on time, within budget, and to the required quality standards.
- Provide guidance and mentorship to junior developers.
- Stay up to date with industry trends and developments in software architecture and development practices.

Required Qualifications

- Bachelor's or Master's degree in Computer Science, Machine Learning/AI, or a closely related field.
- 6+ years of experience in software development, machine learning, data science, and data engineering design.
- Proven ability to build and manage enterprise distributed and/or cloud-native systems.
- Broad knowledge of cutting-edge machine learning models and strong domain expertise in both traditional and deep learning, particularly in areas such as recommendation engines, NLP and Transformers, computer vision, and generative AI.
- Advanced proficiency in Python and frameworks such as FastAPI, Dapr, and Flask, or equivalent.
- Deep experience with ML frameworks such as PyTorch, TensorFlow, and scikit-learn.
- Hands-on experience building ML models from scratch, transfer learning, and Retrieval-Augmented Generation (RAG) using various techniques (native, hybrid, C-RAG, Graph RAG, agentic RAG, and multi-agent RAG).
- Experience building agentic systems with SLMs and LLMs using frameworks such as LangGraph + LangChain, AutoGen, LlamaIndex, Bedrock, Vertex, Agent Development Kit, Model Context Protocol (MCP), and Haystack, or equivalent.
- Experience in data engineering on data lakehouse stacks, including ETL/ELT and data processing with Apache Hadoop, Spark, Flink, Beam, and dbt.
- Experience with data warehouses and lakes such as Apache Iceberg, Hudi, and Delta Lake, and with cloud-managed solutions like OCI Data Lakehouse.
- Experience in data visualization and analytics with Apache Superset, Apache Zeppelin, Oracle Analytics Cloud, or similar.
- Hands-on experience with various data types and storage formats, including NoSQL, SQL, and graph databases, and data serialization formats like Parquet and Arrow.
- Experience with real-time distributed systems processing streaming data with Kafka, NiFi, or Pulsar.
- Strong expertise in software design concepts, patterns (e.g., 12-Factor Apps), and tools for building CNCF-compliant software, with hands-on knowledge of containerization technologies like Docker and Kubernetes.
- Proven ability to build and deploy software applications on one or more public clouds (OCI, AWS, Azure, GCP, or similar).
- Demonstrated ability to write full-stack applications using polyglot programming with languages/frameworks such as FastAPI, Python, and Golang.
- Experience designing API-first systems with application stacks like FARM and MERN and technologies such as gRPC and REST.
- Solid understanding of Design Thinking, Test-Driven Development (TDD), BDD, and the end-to-end SDLC.
- Experience with DevOps practices, including Kubernetes, CI/CD, and blue-green and canary deployments.
- Experience with microservice architecture patterns, including API gateways, event-driven and reactive architecture, CQRS, and SAGA.
- Familiarity with OOP design principles (SOLID, DRY, KISS, common closure, and module encapsulation).
- Proven ability to design software systems using various design patterns (creational, structural, and behavioral).
- Strong interpersonal skills and the ability to communicate effectively with business stakeholders.
- Demonstrated ability to drive technology adoption in AIML solutions and the CNCF software stack.
- Excellent analytical, problem-solving, communication, and leadership skills.
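To give a concrete flavor of the RAG techniques named in the qualifications, here is a minimal, self-contained sketch of the retrieve-then-augment step in plain Python. Every name here is illustrative, and the bag-of-words cosine similarity stands in for the embedding model and vector store a production system would use:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Augment the generation prompt with the retrieved context.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The agentic, graph, and multi-agent RAG variants mentioned above layer routing, self-correction, and tool use on top of this same retrieve-then-augment core.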
Qualifications

Career Level - IC4

About Us

As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity.

We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 5 days ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Quality Assurance Engineer (1)

We are seeking a highly skilled QA engineer to join our team to execute a critical project around point-of-sale data management and analytics. The initial contract term is 6 months, with the option to extend for another 3 months. The ideal candidate will have a strong background in software testing, including smoke testing, unit testing, integration testing, and performance testing.

We are modernizing our point-of-sale (POS) data processing platform with dynamic workflow execution, automated data cleaning, and cloud-native infrastructure. The platform includes:

- React-based UI for business workflows
- Python FastAPI backend
- Workflow automation using Prefect
- Data ingestion from Excel/CSV (FTP, SharePoint)

Key Responsibilities

- Develop, document, and execute test cases to validate the functionality of the application.
- Validate Excel/CSV data transformation workflows and the integration between the frontend, backend, and Prefect workflows.
- Perform functional, integration, regression, and performance testing.
- Automate test cases using tools such as Selenium, PyTest, or Cypress.
- Collaborate with developers and stakeholders to define testing strategies and resolve issues.
- Maintain test scripts and frameworks for automated testing.
- Ensure high-quality releases by identifying, documenting, and tracking defects in bug-tracking tools (e.g., JIRA).
- Validate workflows, state management, and APIs to ensure system robustness.

Qualifications

- Proficiency with testing tools such as Selenium, PyTest, Postman, and JMeter.
- Strong understanding of RESTful APIs and performance testing tools.
- Hands-on experience testing React applications and backend systems developed in Python.
- Hands-on experience testing Excel/CSV file operations.
- Familiarity with CI/CD pipelines and automated test integration.
- Basic knowledge of containerized environments (Docker/Kubernetes) and cloud platforms (Azure preferred).
- Attention to detail and strong analytical skills.
- Excellent written and verbal communication.
- Collaborative team player with problem-solving capabilities.
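The PyTest-style automation of CSV data-cleaning described above might look like the following sketch. The `clean_rows` transformation and its rules (drop rows with a blank store ID, coerce quantity to an integer) are hypothetical stand-ins, not the actual platform's logic:

```python
import csv
import io

def clean_rows(raw_csv: str) -> list[dict]:
    # Hypothetical POS data-cleaning step: drop rows with a
    # missing store ID and coerce the quantity column to int.
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        if not row["store_id"].strip():
            continue
        row["qty"] = int(row["qty"])
        rows.append(row)
    return rows

def test_blank_store_ids_are_dropped():
    raw = "store_id,qty\nS1,3\n ,5\nS2,7\n"
    cleaned = clean_rows(raw)
    assert [r["store_id"] for r in cleaned] == ["S1", "S2"]

def test_quantity_is_coerced_to_int():
    cleaned = clean_rows("store_id,qty\nS1,3\n")
    assert cleaned[0]["qty"] == 3
```

Run with `pytest test_clean_rows.py`; the same tests slot into a CI/CD pipeline as a required check before release.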
Posted 5 days ago
6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Posted 5 days ago
6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description Oracle Customer Success Services Building on the mindset that "Who knows Oracle …. better than Oracle?" Oracle Customer Success Services assists customers with their requirements for some of the most cutting-edge applications and solutions by utilizing the strengths of more than two decades of expertise in developing mission-critical solutions for enterprise customers and combining it with cutting-edge technology to provide our customers' speed, flexibility, resiliency, and security to enable customers to optimize their investment, minimize risk, and achieve more. The business was established with an entrepreneurial mindset and supports a vibrant, imaginative, and highly varied workplace. We are free of obligations, so we'll need your help to turn it into a premier engineering hub that prioritizes quality. Why? Oracle Customer Success Services Engineering is responsible for designing, building, and managing cutting-edge solutions, services, and core platforms to support the managed cloud business including but not limited to Oracle Cloud Infrastructure (OCI), Oracle Cloud Applications (SaaS) & Oracle Enterprise Applications. This position is for CSS Architecture Team, and we are searching for the finest and brightest technologists as we begin on the road of cloud-native digital transformation. We operate under a garage culture, rely on cutting-edge technology in our daily work, and provide a highly innovative, creative, and experimental work environment. We prefer to innovate and move quickly, putting a strong emphasis on scalability and robustness. We need your assistance to build a top-tier engineering team that has a significant influence. What? As a Principal Data Science & AIML Engineer within the CSS CDO Architecture & Platform team, you’ll lead efforts in designing and building scalable, distributed, resilient services that provide artificial intelligence and machine learning capabilities on OCI & Oracle Cloud Applications for the business. 
You will be responsible for the design and development of machine learning systems and applications, ensuring they meet the needs of our clients and align with the company's strategic objectives. The ideal candidate will have extensive experience in machine learning algorithms, model creation and evaluation, data engineering and data processing for large-scale distributed systems, and software development methodologies. We strongly believe in ownership and challenging the status quo. We expect you to bring critical thinking and long-term design impact while building solutions and products, defining system integrations, and addressing cross-cutting concerns. Being part of the architecture function also gives you the unique ability to enforce new processes and design patterns that will be future-proof while building new services or products. As a thought leader, you will own and lead the complete SDLC, from architecture design, development, and test through operational readiness and platform SRE.

Responsibilities

As a member of the architecture team, you will be in charge of designing software products, services, and platforms, as well as creating, testing, and managing the systems and applications we create in line with the architecture patterns and standards. As a core member of the Architecture Chapter, you will be expected to advocate for the adoption of software architecture and design patterns among cross-functional teams both within and outside of engineering roles. You will also be expected to act as a mentor and advisor to the team(s) within the software and AIML domain. As we push for digital transformation throughout the organization, you will constantly be expected to think creatively and to optimize and harmonize business processes.

Core Responsibilities

Lead the development of machine learning models and their integration with the full-stack software ecosystem, drive data engineering, and contribute to the design strategy.
Collaborate with product managers and development teams to identify software requirements and define project scopes. Develop and maintain technical documentation, including architecture diagrams, design specifications, and system diagrams. Analyze and recommend new software technologies and platforms to ensure the company stays ahead of the curve. Work with development teams to ensure software projects are delivered on time, within budget, and to the required quality standards. Provide guidance and mentorship to junior developers. Stay up to date with industry trends and developments in software architecture and development practices.

Required Qualifications

Bachelor's or Master's degree in Computer Science, Machine Learning/AI, or a closely related field. 6+ years of experience in software development, machine learning, data science, and data engineering design. Proven ability to build and manage enterprise-distributed and/or cloud-native systems. Broad knowledge of cutting-edge machine learning models and strong domain expertise in both traditional and deep learning, particularly in areas such as Recommendation Engines, NLP & Transformers, Computer Vision, and Generative AI. Advanced proficiency in Python and frameworks such as FastAPI, Dapr, and Flask or equivalent. Deep experience with ML frameworks such as PyTorch, TensorFlow, and Scikit-learn. Hands-on experience building ML models from scratch, transfer learning, and Retrieval-Augmented Generation (RAG) using various techniques (Native, Hybrid, C-RAG, Graph RAG, Agentic RAG, and Multi-Agent RAG). Experience building agentic systems with SLMs and LLMs using frameworks like LangGraph + LangChain, AutoGen, LlamaIndex, Bedrock, Vertex, Agent Development Kit, Model Context Protocol (MCP), and Haystack or equivalent. Experience in data engineering on data lakehouse stacks, including ETL/ELT and data processing with Apache Hadoop, Spark, Flink, Beam, and dbt.
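By way of illustration (this is not Oracle's implementation; the corpus, scoring function, and prompt format are invented for the example), the retrieve-then-generate core shared by all the RAG variants above can be sketched in plain Python, with a crude lexical-overlap retriever standing in for an embedding model:

```python
from collections import Counter

def score(query: str, doc: str) -> float:
    """Crude lexical-overlap relevance score (real systems use embeddings)."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    overlap = sum((q & d).values())
    return overlap / (len(doc.split()) ** 0.5 or 1.0)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by relevance to the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical three-document corpus for the sketch.
corpus = [
    "OCI provides managed Kubernetes clusters.",
    "Transfer learning reuses pretrained model weights.",
    "RAG grounds model answers in retrieved documents.",
]
prompt = build_prompt("How does RAG ground answers?", corpus)
```

The graph, agentic, and multi-agent variants listed above differ in how retrieval is orchestrated, but each still reduces to assembling retrieved context into the generation prompt as sketched here.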
Experience with Data Warehousing and Lakes such as Apache Iceberg, Hudi, Delta Lake, and cloud-managed solutions like OCI Data Lakehouse. Experience in data visualization and analytics with Apache Superset, Apache Zeppelin, Oracle Analytics Cloud or similar. Hands-on experience working with various data types and storage formats, including NoSQL, SQL, Graph databases, and data serialization formats like Parquet and Arrow. Experience with real-time distributed systems using streaming data with Kafka, NiFi, or Pulsar. Strong expertise in software design concepts, patterns (e.g., 12-Factor Apps), and tools to create CNCF-compliant software with hands-on knowledge of containerization technologies like Docker and Kubernetes. Proven ability to build and deploy software applications on one or more public cloud providers (OCI, AWS, Azure, GCP, or similar). Demonstrated ability to write full-stack applications using polyglot programming with languages/frameworks like FastAPI, Python, and Golang. Experience designing API-first systems with application stacks like FARM and MERN, and technologies such as gRPC and REST. Solid understanding of Design Thinking, Test-Driven Development (TDD), BDD, and end-to-end SDLC. Experience in DevOps practices, including Kubernetes, CI/CD, Blue-Green, and Canary deployments. Experience with Microservice architecture patterns, including API Gateways, Event-Driven & Reactive Architecture, CQRS, and SAGA. Familiarity with OOP design principles (SOLID, DRY, KISS, Common Closure, and Module Encapsulation). Proven ability to design software systems using various design patterns (Creational, Structural, and Behavioral). Strong interpersonal skills and the ability to effectively communicate with business stakeholders. Demonstrated ability to drive technology adoption in AIML Solutions and CNCF software stack. Excellent analytical, problem-solving, communication, and leadership skills. 
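As an illustrative sketch of the event-driven architecture mentioned above (names are invented, and a production system would use a broker such as Kafka or Pulsar rather than an in-process bus), the publish/subscribe pattern at its core can be shown in a few lines:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-process publish/subscribe bus, for illustration only."""
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        """Register a handler to be called for every event on the topic."""
        self._handlers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        """Deliver an event to all handlers subscribed to its topic."""
        for handler in self._handlers[topic]:
            handler(event)

bus = EventBus()
audit_log: list = []
# Two independent consumers react to the same event without the
# publisher knowing about either of them.
bus.subscribe("order.created", lambda e: audit_log.append(e["id"]))
bus.subscribe("order.created", lambda e: audit_log.append(f"notified-{e['id']}"))
bus.publish("order.created", {"id": "ord-1"})
```

The decoupling shown here (publishers never reference consumers) is what patterns like CQRS and SAGA build on at service scale.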
Qualifications Career Level - IC4

About Us

As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 5 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
It's about being what's next. What's in it for you? A Machine Learning Engineer (Global) will be responsible for working in the Artificial Intelligence team, Linde's global corporate AI division engaged with real business challenges and opportunities in multiple countries. The focus of this role is to support the AI team in extending existing AI products and building new ones for a vast number of use cases across Linde's business and value chain. You'll collaborate across different business and corporate functions in an international team composed of Project Managers, Data Scientists, and Data and Software Engineers in the AI team, and others in Linde's Global AI team. At Linde, employees can enjoy a range of benefits that make the workplace comfortable and enjoyable. These include loyalty offers, annual leave, an on-site eatery, employee resource groups, and teams that provide support and foster a sense of community. These benefits demonstrate Linde's commitment to creating a positive work experience for its employees. Every day is an opportunity: an opportunity to learn, to grow, to share success, and to contribute to one of the world's leading industrial gas and engineering companies. Seize the opportunity: take your next step with us and join our team. Linde values diversity and recognizes the importance of fostering inclusion in our work environment. We believe that our success depends on the diverse perspectives of our employees, customers, and global markets. As an employer of choice, we strive to support our employees' growth, welcome new ideas, and respect our differences. Team Making an impact. What will you do?
As a Machine Learning Engineer, you will work closely with our global AI team to create and implement sustainable AI models, algorithms, and software solutions for our global industrial gases and engineering services operations. You will participate in the collection, analysis, interpretation, and output of large amounts of data using advanced AI techniques like deep learning, NLP, and computer vision. You will develop, train, test, and deploy machine learning models in various fields such as computer vision, LLMs, and tabular and time-series data. Experimenting with novel deep-learning-based technologies such as self-supervised learning and generative AI will also be part of your tasks. You will support setting up the MLOps infrastructure for existing and new AI products. Additionally, you will work directly with customer data and set up data pipelines to collect, curate, transform, and version data.

Winning in your role. Do you have what it takes? You have a Bachelor's or Master's degree in Data Science, Computational Statistics/Mathematics, Computer Science, or a related field. In addition, you know the basics of neural networks and have initial practical experience with deep learning frameworks like PyTorch and TensorFlow. Further, you have experience in backend and API development with Python (FastAPI, Flask) or similar. You demonstrate knowledge of, and basic practical experience with, object-oriented programming, design patterns, algorithms and data structures, and version control systems (e.g., Git). Basic experience with cloud platforms (e.g., Azure), Docker, and Kubernetes is a plus. Fluency in English is required.

Why you will love working for us! Linde is a leading global industrial gases and engineering company, operating in more than 100 countries worldwide.
We live our mission of making our world more productive every day by providing high-quality solutions, technologies and services which are making our customers more successful and helping to sustain and protect our planet. On the 1st of April 2020, Linde India Limited and Praxair India Private Limited successfully formed a joint venture, LSAS Services Private Limited. This company will provide Operations and Management (O&M) services to both existing organizations, which will continue to operate separately. LSAS carries forward the commitment towards sustainable development, championed by both legacy organizations. It also takes ahead the tradition of the development of processes and technologies that have revolutionized the industrial gases industry, serving a variety of end markets including chemicals & refining, food & beverage, electronics, healthcare, manufacturing, and primary metals. Whatever you seek to accomplish, and wherever you want those accomplishments to take you, a career at Linde provides limitless ways to achieve your potential, while making a positive impact in the world. Be Linde. Be Limitless. Have we inspired you? Let's talk about it! We are looking forward to receiving your complete application (motivation letter, CV, certificates) via our online job market. Any designations used of course apply to persons of all genders. The form of speech used here is for simplicity only. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability, protected veteran status, pregnancy, sexual orientation, gender identity or expression, or any other reason prohibited by applicable law. Praxair India Private Limited acts responsibly towards its shareholders, business partners, employees, society and the environment in every one of its business areas, regions and locations across the globe. 
The company is committed to technologies and products that unite the goals of customer value and sustainable development.
Posted 5 days ago
3.0 years
0 Lacs
India
Remote
Job Title: Senior AI Engineer (Data-Focused) Experience Level: 2–3 Years (Mandatory) About Us At Data-Hat AI, we build cutting-edge AI Agent and Generative AI solutions that solve real business problems across industries. With teams in Silicon Valley, India, and the UK, we specialize in intelligent automation, scalable AI systems, and data-driven innovation. We’re seeking a Senior AI Engineer who thrives on working with data, automating pipelines, and creating reliable backend systems that serve the needs of intelligent agents and enterprise AI applications. Role Overview As a Senior AI Engineer, you will play a key role in developing robust AI and data pipelines that power our core products and solutions. You’ll work closely with AI researchers, data scientists, and product teams to operationalize data and make AI insights reliable and actionable. Key Responsibilities Design and develop ETL pipelines for both structured and unstructured data sources Create and manage data workflows, including scheduled cron jobs and batch/streaming pipelines Implement and optimize data ingestion, transformation, and storage solutions Work with MongoDB, SQL, and other data storage tools for managing enterprise-scale datasets Integrate and manage pub/sub architectures using tools like Kafka and Redis Build and deploy FastAPI-based microservices that serve AI model outputs or support internal tools Collaborate on early-stage Agentic AI projects, contributing to infrastructure and data workflows Write clean, scalable, and well-documented Python code following best practices Must-Have Qualifications 2–3 years of hands-on experience in software development with strong focus on Python Proven experience building data-centric backend systems Deep understanding of ETL pipelines, APIs, and data processing best practices Proficiency in MongoDB, PostgreSQL, or similar databases Exposure to streaming/messaging systems like Kafka, Redis, or RabbitMQ Experience in scheduling and managing 
automated jobs (e.g., cron, Airflow, Prefect)

Nice to Have: Experience with FastAPI or similar Python frameworks. Exposure to Agentic AI concepts or frameworks (e.g., LangChain, crewAI, AutoGen). Familiarity with deploying services on AWS, GCP, or Azure.

Why Join Us? Work with a globally recognized team of AI innovators and engineers. Build production-grade AI systems that impact real business outcomes. Opportunity to work on next-gen AI Agents and GenAI use cases. Competitive compensation, remote flexibility, and a collaborative culture.

To Apply: Email your resume and any GitHub or project links to hiring@data-hatai.com with the subject line: Senior AI Engineer Application
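To make the ETL expectation above concrete, here is a minimal extract-transform-load flow using only the Python standard library (the field names and cleaning rules are invented for the sketch; real pipelines would load into MongoDB or PostgreSQL rather than an in-memory sink):

```python
import csv
import io
import json

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV text into one dict per row."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: drop incomplete rows and normalise field types."""
    return [
        {"user": r["user"].strip().lower(), "amount": float(r["amount"])}
        for r in rows
        if r.get("user") and r.get("amount")
    ]

def load(rows: list[dict], sink: list) -> None:
    """Load: append JSON-serialisable records to the sink (a database in practice)."""
    sink.extend(json.loads(json.dumps(r)) for r in rows)

sink: list = []
raw = "user,amount\n Alice ,10.5\n,3\nbob,2"
load(transform(extract(raw)), sink)
```

A scheduler (cron, Airflow, or Prefect, as listed above) would simply invoke this extract-transform-load chain on a timetable, which is why keeping each stage a pure function pays off.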
Posted 5 days ago
2.0 years
0 Lacs
Dholera, Gujarat, India
On-site
About The Business - Tata Electronics Private Limited (TEPL) is a greenfield venture of the Tata Group with expertise in manufacturing precision components. Tata Electronics (a wholly owned subsidiary of Tata Sons Pvt. Ltd.) is building India’s first AI-enabled state-of-the-art Semiconductor Foundry. This facility will produce chips for applications such as power management ICs, display drivers, microcontrollers (MCUs) and high-performance computing logic, addressing the growing demand in markets such as automotive, computing and data storage, wireless communications and artificial intelligence. Tata Electronics is a subsidiary of the Tata group. The Tata Group operates in more than 100 countries across six continents, with the mission ‘To improve the quality of life of the communities we serve globally, through long-term stakeholder value creation based on leadership with Trust.’

Job Responsibilities - Architect and implement scalable offline data pipelines for manufacturing systems including AMHS, MES, SCADA, PLCs, vision systems, and sensor data. Design and optimize ETL/ELT workflows using Python, Spark, SQL, and orchestration tools (e.g., Airflow) to transform raw data into actionable insights. Lead database design and performance tuning across SQL and NoSQL systems, optimizing schema design, queries, and indexing strategies for manufacturing data. Enforce robust data governance by implementing data quality checks, lineage tracking, access controls, security measures, and retention policies. Optimize storage and processing efficiency through strategic use of formats (Parquet, ORC), compression, partitioning, and indexing for high-performance analytics. Implement streaming data solutions (using Kafka/RabbitMQ) to handle real-time data flows and ensure synchronization across control systems. Build dashboards using analytics tools like Grafana. Maintain a good understanding of the Hadoop ecosystem.
Develop standardized data models and APIs to ensure consistency across manufacturing systems and enable data consumption by downstream applications. Collaborate cross-functionally with Platform Engineers, Data Scientists, Automation teams, IT Operations, Manufacturing, and Quality departments. Mentor junior engineers while establishing best practices and documentation standards, and foster a data-driven culture throughout the organization.

Essential Attributes - Expertise in Python programming for building robust ETL/ELT pipelines and automating data workflows. Proficiency with the Hadoop ecosystem. Hands-on experience with Apache Spark (PySpark) for distributed data processing and large-scale transformations. Strong proficiency in SQL for data extraction, transformation, and performance tuning across structured datasets. Proficient in using Apache Airflow to orchestrate and monitor complex data workflows reliably. Skilled in real-time data streaming using Kafka or RabbitMQ to handle data from manufacturing control systems. Experience with both SQL and NoSQL databases, including PostgreSQL, TimescaleDB, and MongoDB, for managing diverse data types. In-depth knowledge of data lake architectures and efficient file formats like Parquet and ORC for high-performance analytics. Proficient in containerization and CI/CD practices using Docker and Jenkins or GitHub Actions for production-grade deployments. Strong understanding of data governance principles, including data quality, lineage tracking, and access control. Ability to design and expose RESTful APIs using FastAPI or Flask to enable standardized and scalable data consumption.

Qualifications - BE/ME degree in Computer Science, Electronics, or Electrical Engineering.

Desired Experience Level - Master's + 2 years of relevant experience, or Bachelor's + 4 years of relevant experience. Experience in the semiconductor industry is a plus.
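As a small illustration of the partitioning strategy mentioned above (the table and column names are hypothetical), Hive-style directory layouts let engines such as Spark prune whole partitions by path instead of scanning the full dataset:

```python
from datetime import date

def partition_path(table: str, d: date, line: str) -> str:
    """Build a Hive-style partition path for a daily batch of sensor data.
    Query engines can then skip directories whose key=value segments
    fall outside a query's filter (partition pruning)."""
    return f"{table}/year={d.year}/month={d.month:02d}/day={d.day:02d}/line={line}"

path = partition_path("sensor_readings", date(2025, 6, 3), "fab1")
# e.g. a query filtered to line='fab1' in June 2025 reads only
# sensor_readings/year=2025/month=06/... directories.
```

Within each partition directory, a columnar format like Parquet or ORC with compression then handles the per-file efficiency, as the posting notes.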
Posted 5 days ago
40.0 years
4 - 8 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-216713 ADDITIONAL LOCATIONS: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Jun. 12, 2025 CATEGORY: Information Systems

ABOUT AMGEN Amgen harnesses the best of biology and technology to fight the world’s toughest diseases and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human data to push beyond what’s known today.

ABOUT THE ROLE Let’s do this. Let’s change the world. At Amgen, we believe that innovation can and should be happening across the entire company. Part of the Artificial Intelligence & Data function of the Amgen Technology and Medical Organizations (ATMOS), the AI & Data Innovation Lab (the Lab) is a center for exploration and innovation, focused on integrating and accelerating new technologies and methods that deliver measurable value and competitive advantage. We’ve built algorithms that predict bone fractures in patients who haven’t even been diagnosed with osteoporosis yet. We’ve built software to help us select clinical trial sites so we can get medicines to patients faster. We’ve built AI capabilities to standardize and accelerate the authoring of regulatory documents so we can shorten the drug approval cycle. And that’s just the beginning. Join us! We are seeking a Senior DevOps Software Engineer to join the Lab’s software engineering practice. This role is integral to developing top-tier talent, setting engineering best practices, and evangelizing full-stack development capabilities across the organization. The Senior DevOps Software Engineer will design and implement deployment strategies for AI systems using the AWS stack, ensuring high availability, performance, and scalability of applications.
Roles & Responsibilities: Design and implement deployment strategies using the AWS stack, including EKS, ECS, Lambda, SageMaker, and DynamoDB. Configure and manage CI/CD pipelines in GitLab to streamline the deployment process. Develop, deploy, and manage scalable applications on AWS, ensuring they meet high standards for availability and performance. Implement infrastructure-as-code (IaC) to provision and manage cloud resources consistently and reproducibly. Collaborate with AI product design and development teams to ensure seamless integration of AI models into the infrastructure. Monitor and optimize the performance of deployed AI systems, addressing any issues related to scaling, availability, and performance. Lead and develop standards, processes, and best practices for the team across the AI system deployment lifecycle. Stay updated on emerging technologies and best practices in AI infrastructure and AWS services to continuously improve deployment strategies. Familiarity with AI concepts such as traditional AI, generative AI, and agentic AI, with the ability to learn and adopt new skills quickly.

Functional Skills: Deep expertise in designing and maintaining CI/CD pipelines and enabling software engineering best practices across the overall software product development lifecycle. Ability to implement automated testing, build, deployment, and rollback strategies. Advanced proficiency managing and deploying infrastructure with the AWS cloud platform, including cost planning, tracking and optimization. Proficiency with backend languages and frameworks (Python, FastAPI, Flask preferred). Experience with databases (Postgres/DynamoDB). Experience with microservices architecture and containerization (Docker, Kubernetes).

Good-to-Have Skills: Familiarity with enterprise software systems in life sciences or healthcare domains. Familiarity with big data platforms and experience in data pipeline development (Databricks, Spark).
Knowledge of data security, privacy regulations, and scalable software solutions. Soft Skills: Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders. Ability to foster a collaborative and innovative work environment. Strong problem-solving abilities and attention to detail. High degree of initiative and self-motivation. Basic Qualifications: Bachelor’s degree in Computer Science, AI, Software Engineering, or related field. 5+ years of experience in full-stack software engineering. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
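Deployment strategies like those described above often include a canary release, which hinges on a traffic-splitting rule. A hedged, self-contained sketch of sticky hash-based routing follows (this is not Amgen's actual setup; in practice the rule would live in an AWS load balancer or service mesh, and the request ids are invented):

```python
import hashlib

def route(request_id: str, canary_percent: int) -> str:
    """Deterministically route a request to 'canary' or 'stable'.
    Hashing the id makes routing sticky: the same request id always
    lands on the same version, keeping user sessions consistent."""
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

# With a 10% canary, roughly one request in ten hits the new version;
# raising canary_percent gradually rolls the release out, and setting
# it to 0 is an instant rollback.
versions = [route(f"req-{i}", 10) for i in range(1000)]
canary_share = versions.count("canary") / len(versions)
```

The same bucketing idea extends to blue-green cutovers, where the percentage jumps from 0 to 100 in one step once the green environment passes its checks.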
Posted 5 days ago
6.0 years
3 - 7 Lacs
Hyderābād
On-site
At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com. This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals please.

Job Title: Python AWS Developer Position: Senior Software Engineer Experience: 6-9 Years Category: Software Development/Engineering Location: Hyderabad, Chennai, Bangalore Employment Type: Full Time

Your future duties and responsibilities Over 5 years of experience in API development (FastAPI is a plus) with Python and AWS, including: Knowledge and experience with React.js for frontend development is a plus. Experience with containerization and orchestration utilizing Docker and Kubernetes (EKS/ECS).
Hands-on experience with AWS services such as API Gateway, S3, DynamoDB, RDS (PostgreSQL), ElastiCache, SQS, SNS, and Lambda. Strong expertise in Infrastructure as Code (IaC) using Terraform. Good experience with SQL and NoSQL (MongoDB/MongoDB Atlas). Understanding of event-driven and microservices architecture. Familiarity with DevOps practices and CI/CD pipelines. Robust problem-solving and debugging skills. Excellent communication and collaboration skills, effectively engaging with team members and stakeholders. Experience with Agile methodologies.

Required qualifications to be successful in this role Years of experience: 5+ Relevant experience: 3+ Locations: Hyderabad, Bangalore, Chennai. Education: BTech, MTech, BSc Notice: Immediate to 30 days, or serving notice period.

Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
Posted 5 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Overview: We are seeking a motivated and skilled AI Developer with 1–3 years of experience to join our growing tech team. The ideal candidate will have a strong understanding of machine learning algorithms, natural language processing (NLP), and hands-on experience in building and deploying AI models. You will work on innovative projects that leverage AI to solve real-world problems. Key Responsibilities: Develop, train, and optimize machine learning and deep learning models. Work with large datasets, preprocess and clean data for analysis and modeling. Implement NLP pipelines, recommendation systems, or computer vision models as required. Collaborate with data engineers, product managers, and developers to integrate AI solutions into production. Continuously evaluate and improve model performance. Stay updated with the latest advancements in AI and ML technologies. Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or a related field. 1–3 years of hands-on experience in AI/ML development. Proficiency in Python and libraries such as TensorFlow, PyTorch, Scikit-learn, OpenCV, or Hugging Face. Strong understanding of machine learning concepts and statistical modeling. Familiarity with NLP, computer vision, or generative AI tools (e.g., LangChain, OpenAI API) is a plus. Experience with cloud platforms like AWS, GCP, or Azure is preferred. Good problem-solving skills and ability to work in a collaborative team environment. Nice to Have: Experience in deploying models using Flask, FastAPI, or Streamlit. Exposure to MLOps tools and practices. Knowledge of data visualization tools and techniques.
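The data-cleaning duty listed above ("preprocess and clean data for analysis and modeling") typically starts with a text-normalization step like the following stdlib-only sketch; the stopword list is a toy placeholder, not a recommendation:

```python
import re
import string

def preprocess(text, stopwords=frozenset({"the", "a", "an", "is", "to"})):
    """Lowercase, strip punctuation, collapse whitespace, drop stopwords -
    the kind of cleaning that precedes tokenization for an NLP model."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = re.split(r"\s+", text.strip())
    return [t for t in tokens if t and t not in stopwords]

tokens = preprocess("The model IS ready to ship!")  # ['model', 'ready', 'ship']
```

Real pipelines would layer on tokenizers, stemming/lemmatization, or a library vectorizer, but the shape of the step is the same.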
Posted 5 days ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Scope We are a leading AI-driven Global Supply Chain Solutions Software Product Company, recognized by Glassdoor as one of the “Best Places To Work.” The team utilizes open-source technologies and cloud-native tools to deliver scalable, flexible, and high-performing solutions. The global team includes 80+ professionals across software development, QA, and Agile functions. Core Technologies Software: Python, GIT, Rest API, OAuth Application Architecture: Scalable, resilient, event-driven, secure multi-tenant microservices architecture Cloud Platforms: Microsoft Azure Tooling/Frameworks: Git, Kubernetes, Kafka, Elasticsearch, NoSQL, RDBMS What You’ll Do Understands and analyzes business requirements and assists in design for accuracy and completeness. Develops and maintains the relevant product. Demonstrates good understanding of the product and owns one or more modules. What We’re Looking For BE/B.Tech or ME/M.Tech or MCA with 1 to 3 years of experience in Software Development of large and complex enterprise applications. Experience in developing enterprise applications using Python (Django, Flask, FastAPI preferred), GIT, Rest API. Develops and maintains relevant product and domain knowledge Develops and executes Unit Tests Follows standard processes and procedures Identifies reusable components Ensures that the code is delivered for Integration Build and Test which includes the release content Identifies and resolves software bugs Tests and integrates with other development tasks Adheres to the performance benchmark based on pre-defined requirements Possesses knowledge of database architecture and data models used in the relevant product. Plans and prioritizes work Proactively reports all activities to the reporting managers Proactively seeks assistance as required Provides assistance or guidance to new members of the team Demonstrates problem solving and innovation ability. Participates in company technical evaluations and coding challenges.
Our Values If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success – and the success of our customers. Does your heart beat like ours? Find out here: Core Values All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
Posted 5 days ago
5.0 - 6.0 years
5 - 10 Lacs
India
On-site
Job Summary: We are seeking a highly skilled Python Developer to join our team. Key Responsibilities: Design, develop, and deploy Python applications Work independently on machine learning model development, evaluation, and optimization. Implement scalable and efficient algorithms for predictive analytics and automation. Optimize code for performance, scalability, and maintainability. Collaborate with stakeholders to understand business requirements and translate them into technical solutions. Integrate APIs and third-party tools to enhance functionality. Document processes, code, and best practices for maintainability. Required Skills & Qualifications: 5-6 years of professional experience in Python application development. Proficiency in Python libraries such as Pandas, NumPy, SciPy, and Matplotlib. Experience with SQL and NoSQL databases (PostgreSQL, MongoDB, etc.). Hands-on experience with big data technologies (Apache Spark, Delta Lake, Hadoop, etc.). Strong experience in developing APIs and microservices using FastAPI, Flask, or Django. Good understanding of data structures, algorithms, and software development best practices. Strong problem-solving and debugging skills. Ability to work independently and handle multiple projects simultaneously. Good to have - Working knowledge of cloud platforms (Azure/AWS/GCP) for deploying ML models and data applications. Job Type: Full-time Pay: ₹500,000.00 - ₹1,000,000.00 per year Schedule: Fixed shift Work Location: In person Application Deadline: 30/06/2025 Expected Start Date: 01/07/2025
Posted 5 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The AI Engineer will design, develop, and implement AI models and systems, focusing on turning data into actionable insights through advanced machine learning (ML) and AI techniques. This role involves working with both structured and unstructured data, collaborating with cross-functional teams, and deploying AI solutions in real-world applications. Key responsibilities include hands-on development of generative AI applications and machine learning models such as classification, regression, clustering, and natural language processing (NLP) to generate actionable insights. The AI Engineer will collaborate with cross-functional teams to ensure that AI solutions align with business needs. Working in an agile development environment, the engineer will contribute directly to production-ready AI applications. Staying up-to-date with advancements in AI and integrating relevant innovations into ongoing projects will be essential. The role also involves continuously monitoring and optimizing AI models to improve accuracy, scalability, reliability, and maintainability. Documenting processes, models, and key learnings is expected, along with contributing to the development of internal AI capabilities. The engineer must also ensure that AI models adhere to ethical standards, privacy regulations, and fairness guidelines. Qualifications for this role include a bachelor's degree in operations research, mathematics, computer science, or a related quantitative field. Candidates should have advanced expertise in Python and familiarity with database systems such as SQL, NoSQL, and Graph databases. Proficiency with generative AI tools, particularly AWS Bedrock, is required, as is a comprehensive understanding of agentic systems and workflows. The ideal candidate will have experience productionizing GenAI services and working with complex datasets. Advanced experience in developing APIs using Python (e.g., with FastAPI) and deploying them to production is expected.
Familiarity with CI/CD pipelines, a strong foundation in software development, and a deep understanding of algorithms, optimization, and scaling are also required. Excellent communication and business analysis skills are critical to success in this role. While not mandatory, experience with cloud engineering, particularly using AWS cloud services, is considered a valuable asset.
Posted 5 days ago
2.0 years
3 Lacs
Coimbatore
On-site
Technical Expertise: (minimum 2 years of relevant experience) ● Solid understanding of Generative AI models and Natural Language Processing (NLP) techniques, including Retrieval-Augmented Generation (RAG) systems, text generation, and embedding models. ● Exposure to Agentic AI concepts, multi-agent systems, and agent development using open-source frameworks like LangGraph and LangChain. ● Hands-on experience with modality-specific encoder models (text, image, audio) for multi-modal AI applications. ● Proficient in model fine-tuning and prompt engineering using both open-source and proprietary LLMs. ● Experience with model quantization, optimization, and conversion techniques (FP32 to INT8, ONNX, TorchScript) for efficient deployment, including edge devices. ● Deep understanding of inference pipelines, batch processing, and real-time AI deployment on both CPU and GPU. ● Strong MLOps knowledge with experience in version control, reproducible pipelines, continuous training, and model monitoring using tools like MLflow, DVC, and Kubeflow. ● Practical experience with scikit-learn, TensorFlow, and PyTorch for experimentation and production-ready AI solutions. ● Familiarity with data preprocessing, standardization, and knowledge graphs (nice to have). ● Strong analytical mindset with a passion for building robust, scalable AI solutions. ● Skilled in Python, writing clean, modular, and efficient code. ● Proficient in RESTful API development using Flask, FastAPI, etc., with integrated AI/ML inference logic. ● Experience with MySQL, MongoDB, and vector databases like FAISS, Pinecone, or Weaviate for semantic search. ● Exposure to Neo4j and graph databases for relationship-driven insights. ● Hands-on with Docker and containerization to build scalable, reproducible, and portable AI services. ● Up-to-date with the latest in GenAI, LLMs, Agentic AI, and deployment strategies.
● Strong communication and collaboration skills, able to contribute in cross-functional and fast-paced environments. Bonus Skills ● Experience with cloud deployments on AWS, GCP, or Azure, including model deployment and model inferencing. ● Working knowledge of Computer Vision and real-time analytics using OpenCV, YOLO, and similar tools. Job Type: Full-time Pay: From ₹300,000.00 per year Schedule: Day shift Experience: AI Engineer: 1 year (Required) Work Location: In person Expected Start Date: 23/06/2025
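The FP32-to-INT8 conversion mentioned in the requirements is, at its core, asymmetric affine quantization: pick a scale and zero-point that map the tensor's float range onto the int8 range, then round. A minimal pure-Python sketch of that arithmetic (toolkit-agnostic; real deployments would use ONNX Runtime or PyTorch quantization APIs, and the weight values below are invented):

```python
def quantize_params(values, num_bits=8):
    """Compute scale and zero-point mapping [min, max] onto the signed
    int8 range [-128, 127]; the range is widened to include zero so that
    0.0 quantizes exactly."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, qmin=-128, qmax=127):
    return [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]

def dequantize(q_values, scale, zero_point):
    return [(q - zero_point) * scale for q in q_values]

weights = [-0.8, -0.1, 0.0, 0.4, 1.2]
scale, zp = quantize_params(weights)
restored = dequantize(quantize(weights, scale, zp), scale, zp)
```

Each restored value differs from the original by at most half a quantization step, which is the trade-off INT8 deployment makes for a 4x smaller memory footprint versus FP32.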
Posted 5 days ago
5.0 years
7 - 15 Lacs
Indore
Remote
Senior Python Backend Developer. Job Location: Indore. Work mode: Hybrid work. Experience: 5+ Years. Working time: Full-time We are looking for a talented Senior Python Developer to join our team. If you have experience working on a variety of Python-based projects and want to develop your skills in a dynamic environment, this role is for you. We offer the opportunity to work on innovative solutions that have a real impact on users, in a hybrid work mode and full-time capacity. Requirements: ● Python: Django, Django REST Framework, FastAPI (at least 5 years of experience with Python/Django projects). ● Docker ● Git ● PostgreSQL ● Fast learning ● Basics of Linux ● Nice to know: GitLab CI/CD; basics of Kubernetes Why join us ● Grow your skills – our projects are real-world solutions, not just theoretical exercises, helping users in their everyday lives. ● Flexible work – the hybrid model allows for a great balance between work and personal life. ● Friendly atmosphere – join a team that values both the quality of code and mutual support. ● Relaxed atmosphere – minimal bureaucracy and maximum creativity. Whether you work remotely or in the office, there’s always time for growth and building good relationships with the team. Job Types: Full-time, Permanent Pay: ₹700,000.00 - ₹1,500,000.00 per year Benefits: Health insurance Paid sick time Paid time off Provident Fund Work from home Schedule: Day shift Monday to Friday Application Question(s): What is your Notice Period? What is your Current CTC? Experience: Python: 5 years (Required) Django: 5 years (Required) FastAPI: 5 years (Required) Docker, Git, PostgreSQL: 5 years (Required) Location: Indore, Madhya Pradesh (Required) Work Location: In person
Posted 5 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description: Join our pioneering team at BuzzBoard, a recognized first-mover in enterprise generative AI, as a Team Lead of Generative AI and LLM. We've already deployed production GenAI systems generating thousands of posts and content pieces monthly across our ecosystem. You'll lead the charge in scaling our mature AI infrastructure while architecting next-generation applications. As a key leader in our Product Team, you'll orchestrate collaboration between Data Engineering, Software Engineering, and AI Operations teams while expanding our proven generative AI innovations. Key Responsibilities: Strategic Leadership & Vision Lead and mentor a team of GenAI engineers and researchers within our mature AI ecosystem Scale our proven production AI systems generating thousands of content pieces monthly across multiple business verticals Define generative AI strategy and roadmap building upon our established first-mover advantage Drive adoption of multimodal AI systems incorporating vision, audio, and text capabilities Contribute and own the GenAI governance frameworks and best practices Advanced AI Development Architect and scale sophisticated generative AI systems building upon our established multi-LLM infrastructure (GPT-4o, Claude Sonnet, Gemini, O1) Optimize our proven tech stack including LangChain, CrewAI, LangGraph, and vector databases (Chroma) Implement advanced RAG, fine-tuning, and prompt engineering across our existing model inventory Lead development of next-generation agentic AI systems with tool use, reasoning capabilities, and autonomous decision-making Enhance our multi-agent orchestration platforms and complex workflow automation Technology Leadership Drive adoption of agentic AI frameworks including LangGraph, CrewAI, AutoGen, and Microsoft Semantic Kernel Lead implementation of multi-agent orchestration platforms and autonomous decision-making systems Oversee vector databases (Pinecone, Weaviate, Chroma) and semantic search systems Lead 
integration of AI observability and monitoring tools (LangSmith, Weights & Biases, MLflow) Champion AI development platforms and low-code/no-code solutions Enterprise Integration & Scaling Scale our mature AI infrastructure supporting thousands of monthly content generations across multiple business verticals Enhance our proven MLOps and LLMOps practices for continuous model deployment and performance monitoring Drive technical collaboration for seamless AI integration into established production systems Optimize edge AI capabilities and multi-environment deployment strategies Enhance our performance tracking framework including regression testing, edit ratio tracking, and analytics integration Skills and Qualifications: Core Technical Expertise Deep expertise in generative AI systems, large language models, and transformer architectures with proven production experience Expert proficiency in our established tech stack: LangChain, CrewAI, LangGraph, AutoGen, and multi-LLM orchestration Hands-on experience with our model ecosystem: OpenAI GPT, Anthropic, Google, and vector databases Extensive experience with agentic AI frameworks and autonomous workflow orchestration in production environments Hands-on with programming skills in Python with experience in FastAPI, Streamlit, and modern web frameworks Expert-level understanding of prompt engineering, fine-tuning techniques, and model optimization at scale AI Architecture & Operations Experience with vector databases, embedding models, and semantic search implementations Proficiency in containerization (Docker, Kubernetes) and cloud-native AI deployments Knowledge of AI model serving platforms (vLLM, TensorRT-LLM, Ollama) and inference optimization Understanding of AI safety, alignment, and responsible AI development practices Technical Leadership (Individual Contributor Focus) Experience providing technical guidance to engineering teams in fast-paced environments Experience with AI product development lifecycle and 
technical go-to-market strategies Strong technical communication and ability to explain AI concepts to technical and non-technical audiences Knowledge of AI regulation landscape and compliance requirements Modern AI Ecosystem Familiarity with AI agent frameworks (LangGraph, CrewAI, Microsoft Semantic Kernel) - REQUIRED Experience with compound AI systems and multi-step reasoning architectures - REQUIRED Experience with multimodal AI systems and computer vision integration Understanding of federated learning and privacy-preserving AI techniques Knowledge of AI model evaluation frameworks and benchmarking methodologies Advanced Qualifications: Agentic AI Mastery (Required) Extensive hands-on experience building and deploying agentic AI systems in production environments Deep understanding of tool-calling, function-calling, and API integration within agent workflows Proven track record with multi-agent collaboration patterns and complex reasoning chains Expertise in agent memory systems, context management, and state persistence across interactions Industry Integration Experience scaling production GenAI systems with proven track record of generating thousands of content pieces monthly Knowledge of multi-LLM orchestration and model switching strategies for optimal performance and cost efficiency Understanding of content generation workflows across social media, marketing, and business communications Familiarity with performance monitoring frameworks including regression testing and analytics dashboard integration Research & Innovation Experience with AI model interpretability and explainable AI techniques Knowledge of quantum-classical hybrid AI approaches and emerging paradigms Technical Excellence Advanced degree in Computer Science, AI, or related field (preferred) Experience building and scaling AI teams in fast-paced environments (preferred) Experience with AI ethics committees and responsible AI governance (preferred) Proven ability to drive digital 
transformation through AI adoption (preferred) Lead the future of GenAI innovation at BuzzBoard, where your expertise will build upon our established success in production generative AI systems. Join a first-mover organization that has already proven the enterprise value of GenAI at scale, generating thousands of content pieces monthly. Your role as Team Lead of Generative AI and LLM will position you to expand our proven AI ecosystem while defining the next generation of agentic AI solutions that drive measurable business impact. Powered by JazzHR
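The RAG workflow referenced throughout this posting hinges on a retrieval step: score stored documents against a query and pass the top hits to the LLM as context. A stdlib-only sketch using bag-of-words counts in place of real embeddings (a production stack would use an embedding model plus a vector store such as Chroma, per the posting's stack; the documents and query are invented):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Rank documents by similarity to the query - the 'R' in RAG."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), d) for d in documents]
    return [d for _, d in sorted(scored, key=lambda p: p[0], reverse=True)[:k]]

docs = [
    "pricing plans for small business customers",
    "how to reset your account password",
    "social media content calendar tips",
]
top = retrieve("business pricing", docs, k=1)
```

The retrieved text would then be interpolated into the LLM prompt, grounding generation in stored knowledge rather than the model's parameters alone.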
Posted 5 days ago
1.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Join MSBC as an AI/ML Engineer – Build AI-Powered Solutions At MSBC, we develop cutting-edge AI solutions that drive real-world impact. As a Junior AI Engineer, you will work on Computer Vision and Natural Language Processing (NLP) projects, supporting our AI team in developing and fine-tuning models. This role is perfect for those who have worked on AI projects—whether through college, hackathons, or personal research—and are eager to expand their expertise in a collaborative environment. Key Tools and Frameworks Deep Learning Frameworks: PyTorch, TensorFlow NLP & CV Libraries: Hugging Face, OpenCV, Transformers Languages: Python (FastAPI/Flask) Version Control: Git Key Responsibilities Assist in building and improving AI models for Computer Vision and NLP applications. Work with deep learning frameworks (PyTorch, TensorFlow) to implement, train, and evaluate models. Explore pre-trained models and fine-tuning techniques to enhance AI performance (bonus if you have prior experience). Collaborate with the AI team to process datasets and optimize model efficiency. Write clean, well-documented code and contribute to AI research discussions. Stay updated with the latest AI advancements and experiment with new techniques. Required Skills and Qualifications 1 year of experience in AI/ML (college projects, hackathons, or research count). Familiarity with Computer Vision or NLP techniques and relevant libraries (Hugging Face, OpenCV, etc.). Experience with deep learning frameworks (PyTorch, TensorFlow). Strong programming skills in Python. Enthusiasm for AI and a willingness to learn and experiment. Good problem-solving and analytical skills. Nice to Have but not Mandatory: Experience with fine-tuning models Experience with Frontend Development frameworks like React and Next.js. Experience in authoring and implementing academic papers. MSBC Group has been a trusted technology partner for over 20 years, delivering innovative AI solutions across various industries.
As a Junior AI Engineer, you will work on real AI applications, collaborate with experienced professionals, and gain hands-on experience in AI model development. Join us and start your journey towards building the future of AI with MSBC!
Posted 5 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities ● Align AI initiatives with organizational goals and collaborate with cross-functional teams. ● Accountable for the development of custom AI models ● Utilize open source Machine Learning and AI libraries for architecting and developing advanced AI models. ● Manage data analytics and machine learning for monitoring, diagnosis, and predictive maintenance in the edtech industry. ● Design, develop, and research Machine Learning systems and models, ensuring high performance and reliability. ● Lead the selection of appropriate data sets, perform statistical analysis, and identify patterns in data distribution impacting real-world model performance. ● Verify and ensure data quality through effective data cleaning processes. ● Play a pivotal role in enriching existing ML frameworks and libraries, staying abreast of industry trends. Requirements: ● Bachelor's degree in Computer Science or a related field (or equivalent work experience). ● 5+ years of hands-on development experience in building custom data models for AI and ML-based products in a tech startup environment. ● 3+ years of experience in Python, NLP, and FastAPI. ● Proven expertise in GPT (Gen AI) with at least 1+ year of experience in TTS (Text-to-Speech / Text-to-video) ● Leadership Experience: Demonstrated leadership and people management skills, guiding teams to successful project outcomes. ● Familiarity with Docker, AWS, Kubernetes, and CI/CD pipeline is a plus.
Posted 5 days ago
0 years
0 Lacs
India
Remote
Techolution is looking for a Python developer with Microservices development experience to work on critical applications. You will work closely with a cross-functional team of engineers on microservices and event-driven architectures. You will contribute to the architecture, design and development of new features, identify technical risks and find alternate solutions to various problems. Title: Sr Python Developer. Employment Type: Freelancer, 9 hours per day from 2:00 PM to 11:00 PM IST. Location: Remote work Responsibilities: Design, develop, and maintain RESTful APIs using FastAPI. Build and manage asynchronous tasks and workflows using Celery. Implement message queues and pub/sub patterns using RabbitMQ and Redis. Design and maintain scalable microservices-based architectures with independent deployable units. Work with PostgreSQL to design and optimize relational data models and queries. Containerize and deploy applications using Docker in production-ready environments. Collaborate with DevOps for CI/CD pipelines and production deployments. Ensure code quality, reliability, and maintainability through unit testing and code reviews. Participate in design discussions, code reviews, and agile development processes. Mandatory Skills: Python Celery Redis RabbitMQ FastAPI Postgres Docker About Techolution: Techolution is a next-gen consulting firm on track to become one of the most admired brands in the world for "innovation done right". Our purpose is to harness our expertise in novel technologies to deliver more profits for our enterprise clients while helping them deliver a better human experience for the communities they serve. At Techolution, we build custom AI solutions that produce revolutionary outcomes for enterprises worldwide. Specializing in "AI Done Right," we leverage our expertise and proprietary IP to transform operations and help achieve business goals efficiently.
We are honored to have recently received the prestigious Inc 500 Best In Business award, a testament to our commitment to excellence. We were also awarded AI Solution Provider of the Year by The AI Summit 2023, Platinum sponsor at Advantage DoD 2024 Symposium, and a lot more exciting stuff! While we are big enough to be trusted by some of the greatest brands in the world, we are small enough to care about delivering meaningful ROI-generating innovation at a guaranteed price for each client that we serve. Our thought leader, Luv Tulsidas, wrote and published a book in collaboration with Forbes, “Failing Fast? Secrets to succeed fast with AI”. Refer here for more details on the content - https://www.luvtulsidas.com/ Let’s explore further! Uncover our unique AI accelerators with us: 1. Enterprise LLM Studio: Our no-code DIY AI studio for enterprises. Choose an LLM, connect it to your data, and create an expert-level agent in 20 minutes. 2. AppMod.AI: Modernizes ancient tech stacks quickly, achieving over 80% autonomy for major brands! 3. ComputerVision.AI: Offers customizable Computer Vision and Audio AI models, plus DIY tools and a Real-Time Co-Pilot for human-AI collaboration! 4. Robotics and Edge Device Fabrication: Provides comprehensive robotics, hardware fabrication, and AI-integrated edge design services. 5. RLEF AI Platform: Our proven Reinforcement Learning with Expert Feedback (RLEF) approach bridges Lab-Grade AI to Real-World AI. 6. AI Center of Excellence: Establishes an AI Center of Excellence to maximize AI potential and ROI. Some videos you wanna watch!
Computer Vision demo at The AI Summit New York 2023 Life at Techolution GoogleNext 2023 Ai4 - Artificial Intelligence Conferences 2023 WaWa - Solving Food Wastage Saving lives - Brooklyn Hospital Innovation Done Right on Google Cloud Techolution featured on Worldwide Business with KathyIreland Techolution presented by ION World’s Greatest Visit us @ www.techolution.com to learn more about our revolutionary core practices and how we enrich the human experience with technology.
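The Celery responsibilities in this posting ("build and manage asynchronous tasks and workflows") follow the classic producer/broker/worker shape. A stdlib-only analogue using `queue` and `threading` (Celery itself would route jobs through a RabbitMQ or Redis broker to separate worker processes; everything here is illustrative):

```python
import queue
import threading

def worker(tasks, results):
    """Drain the task queue - a stand-in for a Celery worker process
    consuming jobs from a broker."""
    while True:
        job = tasks.get()
        if job is None:          # sentinel: shut the worker down
            break
        func, arg = job
        results.append(func(arg))
        tasks.task_done()

tasks, results = queue.Queue(), []
t = threading.Thread(target=worker, args=(tasks, results))
t.start()
for n in (1, 2, 3):
    tasks.put((lambda x: x * x, n))   # enqueue work, like calling .delay() on a task
tasks.put(None)
t.join()
```

The key property this preserves from the real stack is decoupling: the producer returns immediately after enqueueing, and the worker processes jobs at its own pace.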
Posted 5 days ago
3.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Experience: 3+ years Salary: Confidential (based on experience) Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Office (Indore) Placement Type: Full-time Permanent Position (*Note: This is a requirement for one of Uplers' clients - vEngageai) What do you need for this opportunity? Must-have skills required: Python, Version Control (Git), CI/CD, MongoDB, SQL, NoSQL vEngageai is Looking for: Job Title: Software Engineer – Backend (Python) Location: Indore Employment Type: Full-Time Experience Level: Mid-Level (3–4 years) About The Role We are looking for a skilled and motivated Software Engineer – Backend (Python) to join our growing engineering team. In this role, you will be responsible for developing robust, scalable, and secure backend services. You will work closely with cross-functional teams to design, implement, and optimize systems that support critical product functionality. This role offers the opportunity to grow technically while contributing to impactful projects within a collaborative and fast-paced environment. Key Responsibilities Backend Development: Design, develop, and maintain clean and efficient server-side code using Python. API Development: Build and maintain RESTful APIs for internal and external use. Database Management: Design and optimize database schemas (SQL and NoSQL), ensuring data integrity and performance. Code Quality: Write reusable, testable, and efficient code; participate in peer code reviews. Performance Optimization: Identify bottlenecks and bugs, and devise solutions to maintain performance and scalability. Collaboration: Work closely with frontend developers, QA, DevOps, and product teams to deliver integrated features. Version Control & CI/CD: Collaborate using Git-based workflows and contribute to continuous integration pipelines. Required Skills And Qualifications 3–4 years of hands-on experience with Python in a backend development role. Experience with frameworks such as Django, Flask, or FastAPI.
Proficiency in working with relational databases (e.g., PostgreSQL, MySQL) and basic understanding of NoSQL solutions. Solid understanding of RESTful API design and integration. Experience with Docker and basic knowledge of containerized development workflows. Familiarity with version control systems like Git and collaborative development practices. Good understanding of data structures, algorithms, and software design principles. Strong debugging and troubleshooting skills. Nice to Have Exposure to cloud platforms (AWS, GCP, or Azure). Basic knowledge of Kubernetes or other container orchestration tools. Familiarity with CI/CD tools (e.g., Jenkins, GitHub Actions). Understanding of asynchronous programming and event-driven systems. Exposure to frontend technologies like React or Vue.js is a plus. Why Join Us? Opportunity to work on modern backend technologies and scalable systems. Collaborative and mentorship-driven culture. Clear path for career progression and technical growth. Work with a team that values clean code, testing, and automation. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 5 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a Full Stack Python Developer with strong experience in both frontend and backend development, and deep familiarity with Azure Cloud and serverless architecture. In this full-time role, you’ll build modern, scalable web applications and services using Python, JavaScript frameworks, and Azure-native tools. You’ll work cross-functionally to develop secure, performant, and user-friendly applications that run entirely in the cloud. Job Description: Responsibilities: Develop end-to-end web applications using Python for the backend and React/JavaScript for the frontend. Design, build, and deploy serverless applications on Microsoft Azure using services like: Azure Functions Azure API Management Azure Blob Storage Azure Cosmos DB / MongoDB Strong experience with using the Python runtime inside Azure Functions, and building serverless functions using the Python v2 programming model and Azure Blueprints. Use Blueprints to define and register new Azure Functions Use Python modules and an Object-Oriented Programming model to modularize function definition and implementation Build and maintain RESTful APIs, microservices, and integrations with third-party services. Work closely with designers, PMs, and QA to deliver high-quality, user-centric applications. Optimize applications for performance, scalability, and cost-efficiency on Azure. Implement DevOps practices using CI/CD pipelines. Write clean, modular, and well-documented code, following best practices and secure coding guidelines. Participate in sprint planning, code reviews, and agile ceremonies. Required Skills (Must Have): 3–5 years of professional experience in full stack development. Strong proficiency in Object-Oriented Python, with frameworks like FastAPI, Flask, or Django. Solid experience with frontend frameworks such as React.js, or similar.
- Proven experience with Azure serverless architecture, including Azure Functions, Azure API Management, and Azure Storage & Cosmos DB.
- Understanding of event-driven architecture and asynchronous APIs in Azure.
- Experience working with Azure serverless functions, including Durable Functions.
- Experience with API integrations, secure data handling, and cloud-native development.
- Proficiency with Git, Agile methodologies, and software development best practices.
- Ability to design and develop scalable and efficient applications.
- Excellent problem-solving and analytical skills.
- Strong communication and teamwork abilities.

Preferred Skills (Good to Have):
- Experience with Azure App Service, Azure Key Vault, Application Insights, and Azure Monitor for observability and secure deployments.
- Familiarity with authentication and authorization mechanisms such as Azure Active Directory (Azure AD), OAuth2, and JWT.
- Exposure to containerization technologies including Docker, Azure Container Registry (ACR), and Azure Kubernetes Service (AKS).
- Understanding of cost optimization, resilience, and security best practices in cloud-native and serverless applications.
- Knowledge of integration with the Azure OpenAI service and working with LLM models inside Azure apps.
- Knowledge of LLM frameworks such as LangChain and LlamaIndex, and experience building intelligent solutions using AI agents and orchestration frameworks.
- Awareness of modern AI application architecture, including Retrieval-Augmented Generation (RAG) and semantic search.

Qualifications:
- Bachelor’s degree in Computer Science, Computer Engineering, or a related field.
- 3+ years of experience in software development.
- Strong understanding of building cloud-native applications in a serverless ecosystem.
- Strong understanding of software development methodologies (e.g., Agile).

Location: DGS India - Pune - Kharadi EON Free Zone
Brand: Dentsu Creative
Time Type: Full time
Contract Type: Permanent
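The Blueprint workflow described above (define functions in a module-level Blueprint, then register the Blueprint on the app) can be sketched with a plain-Python analogue. Note these `Blueprint` and `FunctionApp` classes are illustrative stand-ins written for this sketch, not the `azure.functions` API, though the real v2 model follows the same shape (`func.Blueprint()`, decorator-based triggers, and `app.register_functions(bp)` in `function_app.py`):

```python
class Blueprint:
    """Collects function definitions so a module can hand them to the
    app in one call (stand-in for the Azure Functions v2 pattern)."""
    def __init__(self):
        self._routes = {}

    def route(self, path):
        # Decorator factory: registering the function under its route.
        def decorator(fn):
            self._routes[path] = fn
            return fn
        return decorator


class FunctionApp:
    """Entry-point object that aggregates routes from many Blueprints."""
    def __init__(self):
        self.routes = {}

    def register_functions(self, bp):
        self.routes.update(bp._routes)


# orders.py -- one Blueprint per module keeps definitions modular
bp = Blueprint()

@bp.route("orders")
def list_orders(req):
    return ["order-1", "order-2"]


# function_app.py -- the entry point registers each module's Blueprint
app = FunctionApp()
app.register_functions(bp)
print(app.routes["orders"](None))  # → ['order-1', 'order-2']
```

The payoff of this pattern is that each domain (orders, billing, etc.) lives in its own module with its own Blueprint, and the entry point stays a short list of `register_functions` calls.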
Posted 5 days ago
0 years
0 Lacs
New Delhi, Delhi, India
On-site
🚀 We’re Hiring: Electronics & Communication Engineers
Focus: RF Data Analytics · Radar Signal Processing · Electronic Warfare | Experience: 1–5 yrs | Age Limit: ≤ 30 yrs

Why join Crimson?
- Work on next-generation radar and EW programs that safeguard critical national assets.
- Turn terabytes of raw I/Q captures into real-time intelligence alongside cross-functional experts.
- Ship your code from lab prototype to live field deployment and see immediate impact.

What you’ll do
- Acquire: automate high-throughput downloads, cataloguing, and integrity checks of multi-gigabyte RF datasets.
- Clean & sanitize: write Python/Matlab routines for noise filtering, interference rejection, and metadata standardisation.
- Transform: build DSP modules to demodulate, resample, and convert raw I/Q streams into emitter-level feature vectors.
- Ingest: design robust ETL workflows into local and shared SQL/NoSQL databases with geospatial indexing.
- Analyse: produce geospatial heat-maps, time-frequency plots, and anomaly alerts that drive mission decisions.
- Present: craft dashboards and concise reports that translate complex RF metrics into clear operational insight.
- Maintain: handle routine calibration of RF front-ends, firmware upgrades, and Linux/GPU server upkeep.

Must-have qualifications
- Degree: M.Tech / ME / B.Tech / BE / M.Sc. in ECE, Telecom, Signal Processing, Radar Tech, or Defence Electronics, or an MCA with a strong tech focus.
- Experience: 1–5 yrs hands-on with electronics, communications, or signal-processing systems.
- Core knowledge: Electronic Support Measures (ESM), radar theory, communication waveforms, RF chain components.
- Tools: Matlab (or equivalent), Python (NumPy, SciPy, Pandas, PyTorch/SciKit-DSP-Comm), Git, Docker, Linux.
- Data skills: building ETL pipelines, designing database schemas, and basic DevOps practices.

Nice-to-have superpowers
- GNU Radio and SDRs (USRP, HackRF) or Keysight/NI test equipment.
- REST API development with FastAPI or Flask.
- Geospatial tooling (GDAL, PostGIS, QGIS, ArcGIS).
- Familiarity with MIL-STD metadata formats (ST 0601/0603, ASTERIX) and radar messaging.
- Defence-sector clearance eligibility and a passion for national-security tech.

What we offer
- Mission impact: direct contribution to nationally strategic programmes with tangible outcomes.
- Growth runway: sponsored certifications (DSP, EW, cloud), conference travel, and mentoring from senior defence scientists.
- Cutting-edge lab: petabyte-scale RF archive, GPU clusters, and dedicated SDR testbeds.
- Competitive package: market-aligned salary, performance bonus, medical & accident insurance, 30 days paid leave.

How to apply
Prepare your CV (PDF) and a one-page cover letter describing an RF or large-scale data-pipeline project you’ve handled.
Deadline: 11 June 2025 (rolling reviews; apply early for priority).
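The "Transform" step above (turning raw I/Q streams into features) typically starts with spectral analysis in NumPy. As a minimal sketch, assuming a synthetic capture rather than real SDR data, the following pulls the dominant carrier frequency out of a noisy complex I/Q record with a windowed FFT; `dominant_frequency` is a name invented for this example:

```python
import numpy as np

def dominant_frequency(iq: np.ndarray, sample_rate: float) -> float:
    """Return the frequency (Hz) of the strongest spectral component
    in a complex I/Q capture, using a Hann-windowed FFT."""
    window = np.hanning(len(iq))           # taper to reduce spectral leakage
    spectrum = np.fft.fft(iq * window)
    freqs = np.fft.fftfreq(len(iq), d=1.0 / sample_rate)
    return float(freqs[np.argmax(np.abs(spectrum))])

# Synthetic capture: a 1 kHz complex tone at 48 kHz sampling, plus noise.
fs = 48_000.0
n = 4_800                                  # 100 ms; bin spacing = fs/n = 10 Hz
t = np.arange(n) / fs
rng = np.random.default_rng(0)
noise = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
iq = np.exp(2j * np.pi * 1_000.0 * t) + noise

print(dominant_frequency(iq, fs))  # → 1000.0
```

In a real pipeline this per-window estimate would be one column of an emitter-level feature vector, computed over overlapping chunks of the capture rather than a single record.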
Posted 5 days ago
FastAPI is a modern web framework for building APIs with Python that is gaining popularity in the tech industry. If you are a job seeker looking to explore opportunities in the FastAPI domain in India, you're in the right place. This article provides insights into the FastAPI job market in India, including top hiring locations, salary ranges, career progression, related skills, and interview questions.
The salary range for FastAPI professionals in India varies with experience: entry-level positions can expect INR 4-6 lakhs per annum, while experienced professionals can earn anywhere from INR 10-20 lakhs per annum.
In the FastAPI domain, a career typically progresses as follows:
- Junior Developer
- Mid-level Developer
- Senior Developer
- Tech Lead
Besides proficiency in FastAPI itself, skills that are often expected or helpful alongside it include:
- Python programming
- RESTful APIs
- Database management (SQL or NoSQL)
- Frontend technologies like HTML, CSS, and JavaScript
As you explore opportunities in the fastapi job market in India, remember to prepare thoroughly and apply confidently. With the right skills and knowledge, you can excel in your career as a FastAPI professional. Good luck!