2.0 years
25 - 40 Lacs
Thane, Maharashtra, India
Remote
Experience: 2.00+ years
Salary: INR 2,500,000 - 4,000,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by ONDO Systems)
(Note: This is a requirement for one of Uplers' clients - ONDO Systems)

Must-have skills: Microservices, RESTful APIs, Spring Boot, AWS, Docker, Java, Kubernetes, MySQL, NoSQL

ONDO Systems is looking for:

Key Responsibilities:
• Design, develop, and deploy backend services using Java technologies.
• Implement and maintain RESTful APIs for seamless integration with frontend applications.
• Utilize AWS Cloud services such as EC2, S3, Lambda, RDS, DynamoDB, and Timestream for scalable and reliable infrastructure.
• Optimize backend performance and ensure high availability and fault tolerance.

Requirements:
• Proven experience as a Backend Developer with strong proficiency in the Java programming language.
• Hands-on experience with AWS Cloud services and tools, particularly EC2, S3, Lambda, RDS, and DynamoDB.
• Solid understanding of RESTful API design principles and best practices.
• Experience with relational and NoSQL databases.
• Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes is a plus.
• Ability to work effectively in a fast-paced, agile environment.

Engagement Model: Direct contract with client
This is a remote role.
Shift timing: 10 AM to 7 PM
Interview Rounds: 3

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 5 days ago
2.0 - 7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Summary

Position Summary

AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science, and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the AI & Engineering (AI&E) practice, our AI & Data offering helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
• Implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
• Leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe action

Job Title: Data Scientist/Machine Learning Engineer

Job Summary: We are seeking a Data Scientist with experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and collaborate with a team to implement data-driven solutions.

Key Responsibilities:
• Deliver large-scale DS/ML end-to-end projects across multiple industries and domains
• Liaise with on-site and client teams to understand various business problem statements, use cases, and project requirements
• Work with a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation
• Utilize maths/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions
• Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members

Qualifications:
• 2-7 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling
• Bachelor's or Master's degree in a quantitative field
• Strong hands-on experience with programming languages like Python, PySpark, and SQL, and frameworks such as NumPy, Pandas, Scikit-learn, etc.
• Expertise in Classification, Regression, Time Series, Decision Trees, Optimization, etc.
• Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI
• Model deployment on Cloud or on-prem will be an added advantage
• Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA)
• Should follow research papers and comprehend, innovate, and present the best approaches/solutions related to DS/ML
• AI/Cloud certification from a premier institute is preferred

Recruiting tips: From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development: From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their careers.

Requisition code: 300100
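As a toy, dependency-free illustration of the decision-tree classification expertise the qualifications above name (in practice done with Scikit-learn, which the posting lists): fitting a one-split decision stump in plain Python. The data and split search are invented for this sketch and are not part of the posting.

```python
def fit_stump(xs, ys):
    """Fit a one-split decision stump: find the threshold on a single
    feature that minimizes misclassifications against 0/1 labels."""
    best = None
    for t in sorted(set(xs)):
        # Predict 1 for x >= t, 0 otherwise, and count errors.
        errors = sum(int((x >= t) != bool(y)) for x, y in zip(xs, ys))
        if best is None or errors < best[1]:
            best = (t, errors)
    return best  # (threshold, number of training errors)

def predict(threshold, x):
    """Apply the fitted stump to a new observation."""
    return int(x >= threshold)

if __name__ == "__main__":
    # Two well-separated clusters: the stump should split between them.
    xs = [1.0, 2.0, 3.0, 8.0, 9.0, 10.0]
    ys = [0, 0, 0, 1, 1, 1]
    threshold, errors = fit_stump(xs, ys)
    print(threshold, errors)  # splits at 8.0 with 0 training errors
```

A single stump is the weakest member of the decision-tree family; real projects grow full trees or ensembles, but the threshold search above is the core operation repeated at every node.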
Posted 5 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Experience: 5+ Years

Role Overview: Responsible for designing, building, and maintaining scalable data pipelines and architectures. This role requires expertise in SQL, ETL frameworks, big data technologies, cloud services, and programming languages to ensure efficient data processing, storage, and integration across systems.

Requirements:
• Minimum 5+ years of experience as a Data Engineer or in a similar data-related role.
• Strong proficiency in SQL for querying databases and performing data transformations.
• Experience with data pipeline frameworks (e.g., Apache Airflow, Luigi, or custom-built solutions).
• Proficiency in at least one programming language such as Python, Java, or Scala for data processing tasks.
• Experience with cloud-based data services and data lakes (e.g., Snowflake, Databricks, AWS S3, GCP BigQuery, or Azure Data Lake).
• Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka).
• Experience with ETL tools (e.g., Talend, Apache NiFi, SSIS) and data integration techniques.
• Knowledge of data warehousing concepts and database design principles.
• Good understanding of NoSQL and big data technologies such as MongoDB, Cassandra, Spark, Hadoop, and Hive.
• Experience with data modeling and schema design for OLAP and OLTP systems.
• Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).

Educational Qualification: Bachelor's/Master's degree in Computer Science, Information Technology, or a related field.
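The SQL-driven transformation work this role describes can be sketched as a single pipeline step. The sketch below uses Python with the standard-library sqlite3 module standing in for a real warehouse; the table and column names are invented for illustration.

```python
import sqlite3

def run_transform_step(conn: sqlite3.Connection) -> int:
    """One ETL step: load raw events, transform them with SQL,
    and materialize a clean reporting table."""
    cur = conn.cursor()
    cur.executescript("""
        -- Extract: raw events as they might arrive from an upstream source.
        CREATE TABLE raw_events (user_id INTEGER, amount REAL, status TEXT);
        INSERT INTO raw_events VALUES
            (1, 10.0, 'ok'), (1, 5.0, 'ok'), (2, 7.5, 'failed');

        -- Transform + Load: aggregate successful events per user.
        CREATE TABLE user_totals AS
        SELECT user_id, SUM(amount) AS total_amount, COUNT(*) AS n_events
        FROM raw_events
        WHERE status = 'ok'
        GROUP BY user_id;
    """)
    conn.commit()
    # Return the row count so an orchestrator (Airflow, Luigi, ...) can log it.
    return cur.execute("SELECT COUNT(*) FROM user_totals").fetchone()[0]

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    print(run_transform_step(conn))
```

In a framework such as Airflow, a function like this would be one task in a DAG, with the connection pointing at the real database rather than an in-memory SQLite instance.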
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary

Position Summary

Job Title: Senior Data Scientist/Team Lead

Job Summary: We are seeking a Senior Data Scientist with hands-on experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and lead a team to implement data-driven solutions.

Key Responsibilities:
• Lead and deliver large-scale DS/ML end-to-end projects across multiple industries and domains
• Liaise with on-site and client teams to understand various business problem statements, use cases, and project requirements
• Lead a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation
• Utilize maths/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions
• Assist and participate in pre-sales, client pursuits, and proposals
• Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members

Qualifications:
• 6-10 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling
• Bachelor's or Master's degree in a quantitative field
• Has led a 3-5 member team on multiple end-to-end DS/ML projects
• Excellent communication and client/stakeholder management skills
• Strong hands-on experience with programming languages like Python, PySpark, and SQL, and frameworks such as NumPy, Pandas, Scikit-learn, etc.
• Expertise in Classification, Regression, Time Series, Decision Trees, Optimization, etc.
• Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI
• Model deployment on Cloud or on-prem will be an added advantage
• Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA)
• Should follow research papers and comprehend, innovate, and present the best approaches/solutions related to DS/ML
• AI/Cloud certification from a premier institute is preferred

Recruiting tips: From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development: From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their careers.

Requisition code: 300022
Posted 5 days ago
2.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary

Position Summary

AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science, and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the AI & Engineering (AI&E) practice, our AI & Data offering helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
• Implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
• Leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe action

Job Title: Data Scientist/Machine Learning Engineer

Job Summary: We are seeking a Data Scientist with experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and collaborate with a team to implement data-driven solutions.

Key Responsibilities:
• Deliver large-scale DS/ML end-to-end projects across multiple industries and domains
• Liaise with on-site and client teams to understand various business problem statements, use cases, and project requirements
• Work with a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation
• Utilize maths/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions
• Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members

Qualifications:
• 2-7 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling
• Bachelor's or Master's degree in a quantitative field
• Strong hands-on experience with programming languages like Python, PySpark, and SQL, and frameworks such as NumPy, Pandas, Scikit-learn, etc.
• Expertise in Classification, Regression, Time Series, Decision Trees, Optimization, etc.
• Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI
• Model deployment on Cloud or on-prem will be an added advantage
• Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA)
• Should follow research papers and comprehend, innovate, and present the best approaches/solutions related to DS/ML
• AI/Cloud certification from a premier institute is preferred

Recruiting tips: From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development: From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their careers.

Requisition code: 300100
Posted 5 days ago
0 years
0 Lacs
India
Remote
💼 Senior Backend Developer – Java & GCP
📍 Location: Remote (India)
🕒 Type: Contract (long-term engagement)

We're looking for a skilled backend engineer with a strong foundation in Java and hands-on experience with Google Cloud technologies to join a high-performing team building cloud-native solutions.

🔧 Key Skills & Requirements
✅ Strong expertise in Java 8 and above
✅ Experience with Python is a big plus
✅ Hands-on experience with Google Cloud Platform (GCP) tools: Dataproc, Dataflow, BigQuery, Pub/Sub
✅ Proficiency with containerization technologies: Kubernetes, OpenShift, Docker
✅ Solid understanding of CI/CD pipelines
✅ Familiarity with observability and monitoring tools such as ELK or similar

📌 Why Join?
✔ Work on high-impact, scalable cloud systems
✔ Leverage modern DevOps and GCP practices
✔ 100% remote flexibility
Posted 5 days ago
2.0 - 4.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
Location: Trivandrum

About us
At Arbor, we're on a mission to transform the way schools work for the better. We believe in a future of work in schools where being challenged doesn't mean being burnt out and overworked. Where data guides progress without overwhelming staff. And where everyone working in a school is reminded why they got into education every day. Our MIS and school management tools are already making a difference in over 7,000 schools and trusts: giving time and power back to staff, turning data into clear, actionable insights, and supporting happier working days. At the heart of our brand is a recognition that the challenges schools face today aren't just about efficiency, outputs, and productivity, but about creating happier working lives for the people who drive education every day: the staff. We want to make schools more joyful places to work, as well as learn.

About the role
We're seeking a PHP Backend Developer (Platform) with 2-4 years of hands-on experience in developing and maintaining scalable backend systems. The ideal candidate is well versed in PHP and modern frameworks such as Symfony/Laravel, with a solid understanding of OOP, writing unit test cases, RESTful APIs, MySQL database management, and performance optimization techniques. You'll work closely with product managers and engineering teams to deliver reliable, high-quality features. Familiarity with cloud platforms like AWS is a strong advantage. A strong emphasis on clean, maintainable code and the ability to troubleshoot production issues is essential. We value a passion for continuous learning and a collaborative approach to cross-functional teamwork.

Core responsibilities
• Develop core platform components to aid reusability and stability of the system
• Work with the Head of Platform Engineering/SRE to identify and progress platform improvements related to stability, scalability, and performance
• Work with the QA automation framework to ensure functionality is delivered to a high quality
• Work with DevOps Engineers to understand application impacts and system performance and stability, and work with engineering teams to rectify issues
• Assist in incident response and resolution, and subsequent post-mortems and retrospectives
• Contribute to the platform code base and framework used by Product Engineers across Engineering

Requirements - about you
• Experience of PHP at scale through frameworks such as Symfony/Laravel
• Experience of distributed cloud systems, specifically Amazon Web Services
• Enterprise software design patterns and their implementation in real-world enterprise systems
• Experience of message queuing and/or streaming systems such as SQS, ActiveMQ, Apache Kafka, AWS Kinesis, AWS Firehose
• Understanding of relational database technologies and their cloud versions (e.g., AWS MySQL Aurora)
• Experience with Datadog, Prometheus, or similar observability tools
• A positive and proactive attitude to problem solving
• A team player, willing to muck in and help others when needed; a driven personality who asks questions and actively participates in discussions

Bonus skills
• Past experience with enterprise solutions running at scale
• Familiarity with Scrum methodology or other agile development processes
• Experience with Docker and containerization
• Experience with AWS or other cloud infrastructure
• Familiarity with software best practices such as Refactoring, Clean Code, Domain-Driven Design, Test-Driven Development, etc.
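The message-queuing and streaming systems the requirements mention (SQS, ActiveMQ, Kafka, Kinesis) all reduce to a producer/consumer pattern. A minimal, language-agnostic sketch follows; it is in Python with the stdlib queue module standing in for a real broker (the role itself is PHP-based), and the message names are invented for illustration.

```python
import queue
import threading

def producer(q: queue.Queue, messages):
    """Publish each message to the queue, then signal shutdown with a sentinel."""
    for msg in messages:
        q.put(msg)
    q.put(None)  # sentinel: no more work

def consumer(q: queue.Queue, processed: list):
    """Drain the queue until the sentinel arrives, handling each message."""
    while True:
        msg = q.get()
        if msg is None:
            break
        processed.append(msg.upper())  # stand-in for real message handling

def run_pipeline(messages):
    """Wire a producer to a background consumer and wait for completion."""
    q: queue.Queue = queue.Queue()
    processed: list = []
    worker = threading.Thread(target=consumer, args=(q, processed))
    worker.start()
    producer(q, messages)
    worker.join()
    return processed
```

With a real broker the queue object is replaced by a client (e.g. an SQS or Kafka consumer), and acknowledgement/visibility semantics replace the simple sentinel, but the decoupling of producer and consumer is the same.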
Benefits

Interview process
• Phone screen
• 1st stage
• 2nd stage

We are committed to a fair and comfortable recruitment process, so if you require any reasonable adjustments during your application or interview process, please reach out to a member of the team at careers@arbor-education.com.

What we offer
The chance to work alongside a team of hard-working, passionate people in a role where you'll see the impact of your work every day. We also offer:
• Flexible work environment (3 days work from office)
• Group Term Life Insurance paid out at 3x Annual CTC (Arbor India)
• 32 days holiday (plus Arbor Holidays): 25 days annual leave plus 7 extra companywide days given over Easter, Summer & Christmas
• Work time: 9.30 am to 6 pm (8.5 hours only)
• Compensation: 100% fixed salary disbursement with no variable component

Arbor Education is an equal opportunities organisation. Our goal is for Arbor to be a workplace which represents, celebrates, and supports people from all backgrounds, and which gives them the tools they need to thrive, whatever their ambitions may be, so we support and promote diversity and equality and actively encourage applications from people of all backgrounds.

Refer a friend
Know someone else who would be good for this role? You can refer a friend, family member, or colleague; if they are offered a role with Arbor, we will say thank you with a voucher valued at up to £200! Simply email careers@arbor-education.com.
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Summary Position Summary Job Title: Senior Data Scientist/Team Lead Job Summary: We are seeking a Senior Data Scientist with hand-on experience in leveraging data, machine learning, statistics and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and lead a team to implement data-driven solutions. Key Responsibilities: Lead and deliver large-scale DS/ML end to end projects across multiple industries and domains Liaison with on-site and client teams to understand various business problem statements, use cases and project requirements Lead a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation Utilize maths/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions. Assist and participate in pre-sales, client pursuits and proposals Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members Qualifications: 6-10 years of relevant hands-on experience in Data Science, Machine Learning, Statistical Modeling Bachelor’s or Master’s degree in a quantitative field Led a 3-5 member team on multiple end to end DS/ML projects Excellent communication and client/stakeholder management skills Must have strong hands-on experience with programming languages like Python, PySpark and SQL, and frameworks such as Numpy, Pandas, Scikit-learn, etc. Expertise in Classification, Regression, Time series, Decision Trees, Optimization, etc. Hands on knowledge of Docker containerization, GIT, Tableau or PowerBI Model deployment on Cloud or On-prem will be an added advantage Familiar with Databricks, Snowflake, or Hyperscalers (AWS/Azure/GCP/NVIDIA) Should follow research papers, comprehend and innovate/present the best approaches/solutions related to DS/ML AI/Cloud certification from a premier institute is preferred. 
Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 300022
Posted 5 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a Full Stack Python Developer with strong experience in both frontend and backend development, and deep familiarity with Azure Cloud and serverless architecture. In this full-time role, you’ll build modern, scalable web applications and services using Python, JavaScript frameworks, and Azure-native tools. You’ll work cross-functionally to develop secure, performant, and user-friendly applications that run entirely in the cloud. Job Description: Responsibilities: Develop end-to-end web applications using Python for the backend and React/JavaScript for the frontend. Design, build, and deploy serverless applications on Microsoft Azure using services such as Azure Functions, Azure API Management, Azure Blob Storage, and Azure Cosmos DB / MongoDB. Strong experience using the Python runtime inside Azure Functions, and building serverless functions using the Python v2 programming model and Azure Blueprints. Use Blueprints to define and register new Azure Functions. Use Python modules and an object-oriented programming model to modularize function definition and implementation. Build and maintain RESTful APIs, microservices, and integrations with third-party services. Work closely with designers, PMs, and QA to deliver high-quality, user-centric applications. Optimize applications for performance, scalability, and cost-efficiency on Azure. Implement DevOps practices using CI/CD pipelines. Write clean, modular, and well-documented code, following best practices and secure coding guidelines. Participate in sprint planning, code reviews, and agile ceremonies. Required Skills (Must Have): 3-5 years of professional experience in full stack development. Strong proficiency in object-oriented Python, with frameworks like FastAPI, Flask, or Django. Solid experience with frontend frameworks such as React.js or similar.
Proven experience with Azure Serverless Architecture, including Azure Functions, Azure API Management, and Azure Storage & Cosmos DB. Understanding of event-driven architecture and asynchronous APIs in Azure. Experience working with Azure Serverless functions, including Durable Functions. Experience with API integrations, secure data handling, and cloud-native development. Proficient in working with Git, Agile methodologies, and software development best practices. Ability to design and develop scalable and efficient applications. Excellent problem-solving and analytical skills. Strong communication and teamwork abilities. Preferred Skills (Good to Have): Experience with Azure App Service, Azure Key Vault, Application Insights, and Azure Monitor for observability and secure deployments. Familiarity with authentication and authorization mechanisms, such as Azure Active Directory (Azure AD), OAuth2, and JWT. Exposure to containerization technologies including Docker, Azure Container Registry (ACR), and Azure Kubernetes Service (AKS). Understanding of cost optimization, resilience, and security best practices in cloud-native and serverless applications. Knowledge of integration with the Azure OpenAI service and working with LLM models inside Azure apps. Knowledge of LLM frameworks such as LangChain and LlamaIndex, and experience in building intelligent solutions using AI agents and orchestration frameworks. Awareness of modern AI application architecture, including Retrieval-Augmented Generation (RAG) and semantic search. Qualifications: Bachelor’s degree in Computer Science, Computer Engineering, or a related field. 3+ years of experience in software development. Strong understanding of building cloud-native applications in a serverless ecosystem. Strong understanding of software development methodologies (e.g., Agile). Location: DGS India - Pune - Kharadi EON Free Zone Brand: Dentsu Creative Time Type: Full time Contract Type: Permanent
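The "event-driven architecture and asynchronous APIs" requirement above can be sketched in plain Python with asyncio. The `fetch_quote` coroutine and its delays below are invented stand-ins for a serverless function calling a downstream API:

```python
# Minimal asyncio sketch of the asynchronous, event-driven style the
# posting asks for. Names and latencies are hypothetical placeholders.
import asyncio

async def fetch_quote(symbol: str, delay_ms: int) -> tuple[str, int]:
    await asyncio.sleep(delay_ms / 1000)   # stand-in for network I/O
    return symbol, delay_ms

async def main() -> list[tuple[str, int]]:
    # gather() awaits all three coroutines concurrently, so total wall
    # time is roughly max(delay), not the sum -- the core win of async APIs.
    return await asyncio.gather(
        fetch_quote("AAA", 10),
        fetch_quote("BBB", 20),
        fetch_quote("CCC", 30),
    )

results = asyncio.run(main())
print(results)   # [('AAA', 10), ('BBB', 20), ('CCC', 30)]
```

In an Azure Functions app the same pattern applies: `async def` handlers let one worker overlap many slow I/O calls instead of blocking on each in turn.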
Posted 5 days ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a Full Stack Python Developer with strong experience in both frontend and backend development, and deep familiarity with Azure Cloud and serverless architecture. In this full-time role, you’ll build modern, scalable web applications and services using Python, JavaScript frameworks, and Azure-native tools. You’ll work cross-functionally to develop secure, performant, and user-friendly applications that run entirely in the cloud. Job Description: Responsibilities: Develop end-to-end web applications using Python for the backend and React/JavaScript for the frontend. Design, build, and deploy serverless applications on Microsoft Azure using services such as Azure Functions, Azure API Management, Azure Blob Storage, and Azure Cosmos DB / MongoDB. Strong experience using the Python runtime inside Azure Functions, and building serverless functions using the Python v2 programming model and Azure Blueprints. Use Blueprints to define and register new Azure Functions. Use Python modules and an object-oriented programming model to modularize function definition and implementation. Build and maintain RESTful APIs, microservices, and integrations with third-party services. Work closely with designers, PMs, and QA to deliver high-quality, user-centric applications. Optimize applications for performance, scalability, and cost-efficiency on Azure. Implement DevOps practices using CI/CD pipelines. Write clean, modular, and well-documented code, following best practices and secure coding guidelines. Participate in sprint planning, code reviews, and agile ceremonies. Required Skills (Must Have): 3-5 years of professional experience in full stack development. Strong proficiency in object-oriented Python, with frameworks like FastAPI, Flask, or Django. Solid experience with frontend frameworks such as React.js or similar.
Proven experience with Azure Serverless Architecture, including Azure Functions, Azure API Management, and Azure Storage & Cosmos DB. Understanding of event-driven architecture and asynchronous APIs in Azure. Experience working with Azure Serverless functions, including Durable Functions. Experience with API integrations, secure data handling, and cloud-native development. Proficient in working with Git, Agile methodologies, and software development best practices. Ability to design and develop scalable and efficient applications. Excellent problem-solving and analytical skills. Strong communication and teamwork abilities. Preferred Skills (Good to Have): Experience with Azure App Service, Azure Key Vault, Application Insights, and Azure Monitor for observability and secure deployments. Familiarity with authentication and authorization mechanisms, such as Azure Active Directory (Azure AD), OAuth2, and JWT. Exposure to containerization technologies including Docker, Azure Container Registry (ACR), and Azure Kubernetes Service (AKS). Understanding of cost optimization, resilience, and security best practices in cloud-native and serverless applications. Knowledge of integration with the Azure OpenAI service and working with LLM models inside Azure apps. Knowledge of LLM frameworks such as LangChain and LlamaIndex, and experience in building intelligent solutions using AI agents and orchestration frameworks. Awareness of modern AI application architecture, including Retrieval-Augmented Generation (RAG) and semantic search. Qualifications: Bachelor’s degree in Computer Science, Computer Engineering, or a related field. 3+ years of experience in software development. Strong understanding of building cloud-native applications in a serverless ecosystem. Strong understanding of software development methodologies (e.g., Agile). Location: DGS India - Pune - Kharadi EON Free Zone Brand: Dentsu Creative Time Type: Full time Contract Type: Permanent
Posted 5 days ago
2.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Summary AI & Data In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science, and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the AI & Engineering (AI&E) practice, our AI & Data offering helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets. AI & Data will work with our clients to: Implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms. Leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe actions. Job Title: Data Scientist/Machine Learning Engineer Job Summary: We are seeking a Data Scientist with experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and collaborate with a team to implement data-driven solutions. Key Responsibilities: Deliver large-scale, end-to-end DS/ML projects across multiple industries and domains. Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements. Work with a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation. Utilize math/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions.
Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members. Qualifications: 2-7 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling. Bachelor’s or Master’s degree in a quantitative field. Must have strong hands-on experience with programming languages such as Python, PySpark, and SQL, and frameworks such as NumPy, Pandas, and Scikit-learn. Expertise in Classification, Regression, Time Series, Decision Trees, Optimization, etc. Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI. Model deployment on cloud or on-prem is an added advantage. Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA). Should follow research papers, and be able to comprehend, innovate on, and present the best approaches/solutions related to DS/ML. AI/Cloud certification from a premier institute is preferred.
Requisition code: 300100
Posted 5 days ago
2.0 - 7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position Summary AI & Data In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science, and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the AI & Engineering (AI&E) practice, our AI & Data offering helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets. AI & Data will work with our clients to: Implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms. Leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe actions. Job Title: Data Scientist/Machine Learning Engineer Job Summary: We are seeking a Data Scientist with experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and collaborate with a team to implement data-driven solutions. Key Responsibilities: Deliver large-scale, end-to-end DS/ML projects across multiple industries and domains. Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements. Work with a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation. Utilize math/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions.
Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members. Qualifications: 2-7 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling. Bachelor’s or Master’s degree in a quantitative field. Must have strong hands-on experience with programming languages such as Python, PySpark, and SQL, and frameworks such as NumPy, Pandas, and Scikit-learn. Expertise in Classification, Regression, Time Series, Decision Trees, Optimization, etc. Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI. Model deployment on cloud or on-prem is an added advantage. Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA). Should follow research papers, and be able to comprehend, innovate on, and present the best approaches/solutions related to DS/ML. AI/Cloud certification from a premier institute is preferred.
Requisition code: 300100
Posted 5 days ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Who We Are Zinnia is the leading technology platform for accelerating life and annuities growth. With innovative enterprise solutions and data insights, Zinnia simplifies the experience of buying, selling, and administering insurance products, all of which enables more people to protect their financial futures. Our success is driven by a commitment to three core values: be bold, team up, deliver value – and that we do. Zinnia has over $180 billion in assets under administration, serves 100+ carrier clients, 2,500 distributors and partners, and over 2 million policyholders. Who You Are We are looking for a passionate and skilled Python AI/ML Lead Engineer with 7+ years of experience to join our team. As a Python/AI Lead, you will oversee the development and implementation of machine learning models, AI-driven solutions, and data-driven products. You will guide a team of engineers and collaborate with cross-functional teams to ensure the timely and high-quality delivery of projects. If you thrive in a fast-paced environment and love solving complex problems using data and intelligent algorithms, we’d love to hear from you. What You’ll Do Lead the design, development, and deployment of Gen AI and machine learning solutions using Python. Provide technical guidance and mentorship to team members, ensuring code quality, best practices, and performance optimization. Collaborate with product managers, data scientists, and other stakeholders to define project requirements and objectives. Take ownership of the full software development lifecycle (SDLC), from design to deployment, maintenance, and optimization. Perform code reviews, write technical documentation, and maintain high standards of software quality. Stay up to date with the latest trends and advancements in AI, machine learning, and Python development. Identify and mitigate technical risks in projects while ensuring timely delivery.
Foster a collaborative environment and promote knowledge sharing across the engineering team. What You’ll Need 7+ years of professional experience in Python development, with at least 2 years in a technical leadership role. Strong hands-on experience with AI/ML frameworks, Gen AI LLMs, and libraries such as TensorFlow, Scikit-learn, or Keras. Python – Strong hands-on experience. Machine Learning – Practical knowledge of supervised, unsupervised, and deep learning techniques. Generative AI – Experience working with LLMs or similar GenAI technologies. API Development – RESTful APIs and integration of ML models into production services. Databases – Experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, etc.). Good To Have Skills Cloud Platforms – Familiarity with AWS, Azure, or Google Cloud Platform (GCP). TypeScript/JavaScript – Frontend or full-stack exposure for ML product interfaces. Experience with MLOps tools and practices (e.g., MLflow, Kubeflow, etc.). Exposure to containerization (Docker) and orchestration (Kubernetes). WHAT’S IN IT FOR YOU? At Zinnia, you collaborate with smart, creative professionals who are dedicated to delivering cutting-edge technologies, deeper data insights, and enhanced services to transform how insurance is done. Visit our website at www.zinnia.com for more information. Apply by completing the online application on the careers section of our website. We are an Equal Opportunity employer committed to a diverse workforce. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability.
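The "integration of ML models into production services" skill above typically means wrapping a model behind a JSON-in/JSON-out predict endpoint. The sketch below shows the shape of that integration using only the standard library; the weights, feature names, and decision rule are invented, and a real service would load a serialized model and sit behind a framework such as FastAPI or Flask:

```python
# Hedged sketch of serving a model prediction as a JSON request/response
# handler. The toy linear model and its feature names are hypothetical.
import json

WEIGHTS = {"age": 0.5, "income": 0.001}   # invented coefficients
BIAS = -2.0

def predict_handler(request_body: str) -> str:
    """Parse a JSON feature payload, score it, return a JSON response."""
    features = json.loads(request_body)
    score = BIAS + sum(WEIGHTS[k] * features[k] for k in WEIGHTS)
    return json.dumps({"score": score, "approved": score > 0})

resp = predict_handler('{"age": 30, "income": 5000}')
print(resp)   # {"score": 18.0, "approved": true}
```

The same handler signature is what a web framework route would call; swapping the toy arithmetic for `model.predict()` on a loaded artifact is the production version of this pattern.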
Posted 5 days ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Experience: 5+ years Employment Type: Full-time About The Role We are looking for an experienced and highly skilled MEAN Stack Developer to join our dynamic development team. The ideal candidate will have hands-on experience building scalable web applications using MongoDB, Express.js, Angular, and Node.js, and will be responsible for developing end-to-end features with strong attention to detail, performance, and maintainability. Key Responsibilities Design, develop, test, and deploy robust and scalable web applications using the MEAN stack (MongoDB, Express.js, Angular, Node.js). Collaborate with cross-functional teams including UI/UX designers, product managers, and QA engineers to deliver high-quality features. Develop RESTful APIs and integrate third-party services or libraries as needed. Write clean, efficient, and well-documented code that follows industry best practices. Optimize application performance and troubleshoot production issues. Participate in code reviews, architecture discussions, and agile ceremonies. Ensure responsiveness and cross-browser compatibility of applications. Stay updated with emerging technologies and industry trends. Required Skills & Qualifications Bachelor’s degree in Computer Science, Engineering, or a related field. Minimum 5 years of hands-on experience in MEAN stack development. Proficiency in JavaScript, TypeScript, HTML5, CSS3, and Angular (v8 or above). Strong expertise in Node.js and Express.js for backend development. Solid understanding of MongoDB database design and aggregation pipelines. Experience with RESTful APIs, JSON, and web services. Familiarity with Git, CI/CD pipelines, and version control best practices. Strong problem-solving skills and the ability to work independently or in a team. Excellent verbal and written communication skills. Preferred Skills (Good To Have) Experience with Microservices architecture. Familiarity with Docker, Kubernetes, or other containerization tools.
Exposure to cloud platforms such as AWS, Azure, or GCP. Basic understanding of DevOps practices and tools.
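The "MongoDB aggregation pipelines" skill this posting asks for refers to staged document transforms such as a $match stage followed by a $group stage. The following plain-Python simulation (the orders data is invented; a real MEAN app would pass the equivalent pipeline to `collection.aggregate()`) shows what those two stages compute:

```python
# Conceptual sketch of a MongoDB aggregation pipeline ($match, then
# $group with $sum), simulated with plain dicts rather than a database.
from collections import defaultdict

orders = [
    {"status": "paid", "customer": "a", "amount": 50},
    {"status": "paid", "customer": "b", "amount": 70},
    {"status": "void", "customer": "a", "amount": 99},
    {"status": "paid", "customer": "a", "amount": 30},
]

# Stage 1: $match -- keep only paid orders.
matched = [o for o in orders if o["status"] == "paid"]

# Stage 2: $group -- total amount per customer; in pipeline syntax this is
# {"$group": {"_id": "$customer", "total": {"$sum": "$amount"}}}.
totals = defaultdict(int)
for o in matched:
    totals[o["customer"]] += o["amount"]

print(dict(totals))   # {'a': 80, 'b': 70}
```

Running the stages inside MongoDB rather than in application code is the point of the feature: the database filters and groups before any documents cross the wire.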
Posted 5 days ago
2.0 - 5.0 years
0 Lacs
Mohali district, India
Remote
Job Summary We are looking for a skilled Full Stack Developer specializing in ReactJS and NodeJS to join our dynamic development team. The ideal candidate will have 2 to 5 years of experience in full-stack development and possess expertise in building scalable web applications, integrating APIs, and delivering high-quality user experiences. This role involves close collaboration with the design, backend, and QA teams to ensure seamless project delivery. Key Responsibilities • Develop, test, and maintain scalable web applications using ReactJS for the frontend and NodeJS for the backend. • Collaborate with designers and other developers to translate UI/UX wireframes into functional and responsive web interfaces. • Build RESTful APIs and integrate third-party services to enhance application functionality. • Write clean, efficient, and maintainable code following best practices and coding standards. • Debug and resolve technical issues across the stack, ensuring optimal performance and user experience. • Participate in code reviews and contribute to improving team processes and quality standards. • Optimize applications for maximum speed and scalability. Required Qualifications • 2 to 5 years of experience in full-stack web development. • Proficiency in ReactJS, Redux, and modern frontend technologies (HTML5, CSS3, JavaScript/TypeScript). • Expertise in NodeJS and backend frameworks like Express.js, Next.js • Strong experience with database technologies like MongoDB, PostgreSQL, or MySQL. • Hands-on experience with RESTful APIs and asynchronous programming. • Familiarity with version control systems (e.g., Git) and CI/CD pipelines. • Knowledge of responsive design principles and cross-browser compatibility. • Strong problem-solving skills and ability to work collaboratively in a team. Must-Have Skills • Expertise in ReactJS and NodeJS development. • Strong understanding of front-end and back-end architecture.
• Proficiency in writing clean, testable, and efficient code. • Experience with cloud services (AWS, Azure, or Google Cloud) and deployment processes. Good-to-Have Skills • Knowledge of GraphQL and WebSocket implementation. • Experience with containerization tools like Docker. • Familiarity with state management libraries such as MobX or Context API. • Understanding of Agile development methodologies. Company Overview We specialize in delivering cutting-edge solutions in custom software, web, and AI development. Our work culture is a unique blend of in-office and remote collaboration, prioritizing our employees above everything else. At our company, you’ll find an environment where continuous learning, leadership opportunities, and mutual respect thrive. We are proud to foster a culture where individuals are valued, encouraged to evolve, and supported in achieving their fullest potential. Benefits and Perks • Competitive Salary: Earn up to ₹6-10 LPA based on skills and experience. • Generous Time Off: Benefit from 18 annual holidays to maintain a healthy work-life balance. • Continuous Learning: Access extensive learning opportunities while working on cutting-edge projects. • Client Exposure: Gain valuable experience in client-facing roles to enhance your professional growth.
Posted 5 days ago
3.0 - 13.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Hi Connections, We are Hiring: Java Developer. Location: Anywhere in India. Experience: 3 to 13 Years. Requirements: Java Full Stack Developer or Java Backend Developer. Strong knowledge of Java, Spring Boot, Hibernate. Hands-on experience with REST APIs and Microservices. Database: MongoDB. Cloud: AWS/Azure/GCP. Version control experience using Git. CI/CD pipelines and containerization such as Docker/Kubernetes. Problem-solving skills. Bonus Points for: Experience with Apache Kafka, Oracle, and frontend skills for full stack. Apply Now: If you are ready to take the next step in your Java career, send your resume to divya.rghav@nagarro.com. Let's build something great together.
Posted 5 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Technical Skills: Experience with Spring Boot and microservices architecture. Must have experience with Spring Boot with JPA (Hibernate). Proficient in Java 8+, with very strong knowledge of its ecosystem and design patterns. Solid understanding of object-oriented programming. Familiarity with concepts of MVC, JDBC, and RESTful APIs. Experience with both external and embedded databases. Understanding of fundamental design principles behind a scalable application. Basic understanding of the class-loading mechanism in Java. Basic understanding of the JVM, its limitations, weaknesses, and workarounds. Implementing automated testing platforms and unit tests. Good to have: experience with containerization and microservices. Additional Skills: Proficient understanding of code versioning tools, such as Git. Familiarity with build tools such as Ant, Maven, and Gradle. Familiarity with continuous integration.
Posted 5 days ago
0.0 - 3.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
We are seeking a highly skilled Automation Engineer with a minimum of 3 years of experience to join our talented team. The ideal candidate will have a passion for automation, a strong technical background, and the ability to drive continuous improvement through innovative solutions. As an Automation Engineer, you will play a crucial role in designing, implementing, and maintaining automated systems to streamline our processes and enhance overall performance. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. Minimum of 3 years of experience in automation engineering or a similar role. Proficiency in programming languages such as Python, Java, or C#. Strong understanding of software testing methodologies and automation tools (e.g., Selenium, Appium, Robot Framework). Experience with version control systems (e.g., Git) and continuous integration/continuous deployment (CI/CD) pipelines. Experience with API testing (REST/OpenAPI/events). Excellent problem-solving skills and attention to detail. Ability to work effectively both independently and as part of a team in a fast-paced environment. Strong communication skills with the ability to convey technical concepts to non-technical stakeholders. Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes) is a plus. You are a builder: an execution-oriented approach to software development that prioritizes positioning yourself to iteratively deliver substantial value for our clients in a sustainable and timely manner. You proactively work to unblock your tasks and lend help to others in need. You aren't afraid to think outside of the box, and you substantiate your arguments and ideas with data and material examples. Demonstrated ability to work well under pressure, thrive in a fast-paced environment, and stay flexible through growth and change. Certification in automation or related technologies is desirable.
Responsibilities: Collaborate with cross-functional teams to identify opportunities for automation and process optimization. Design, develop, and implement automated solutions using industry-leading tools and technologies. Build and maintain a scalable, reliable, and efficient automation test suite. Conduct thorough testing of automated systems to ensure quality and reliability. Troubleshoot and resolve issues with automation systems in a timely manner. Provide technical support and guidance to internal teams on automation best practices. Stay up-to-date with the latest advancements in automation technology and industry trends. Document automation processes, procedures, and guidelines for reference and training purposes. Participate in code reviews and contribute to the continuous improvement of development practices. Job Types: Full-time, Permanent Pay: From ₹700,000.00 per year Benefits: Health insurance Internet reimbursement Life insurance Paid sick time Paid time off Provident Fund Schedule: Day shift Monday to Friday Morning shift Supplemental Pay: Performance bonus Yearly bonus Ability to commute/relocate: Noida, Uttar Pradesh: Reliably commute or planning to relocate before starting work (Preferred) Experience: Test automation: 3 years (Required) Work Location: In person Application Deadline: 20/06/2025 Expected Start Date: 27/07/2025
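As a rough illustration of the API-testing skill this posting lists, a framework-free response check might look like the following. The payload shape and field names are stubbed and hypothetical; a real suite would hit a live endpoint via requests or httpx and run under pytest:

```python
# Minimal sketch of an automated REST-response check: verify status code,
# required fields, and field types. The stubbed payload is invented.

def check_user_response(status_code: int, payload: dict) -> list[str]:
    """Return a list of human-readable failures; an empty list means pass."""
    failures = []
    if status_code != 200:
        failures.append(f"expected 200, got {status_code}")
    for field, typ in (("id", int), ("email", str), ("active", bool)):
        if field not in payload:
            failures.append(f"missing field: {field}")
        elif not isinstance(payload[field], typ):
            failures.append(f"{field} should be {typ.__name__}")
    return failures

stub = {"id": 7, "email": "user@example.com", "active": True}
print(check_user_response(200, stub))           # [] -> pass
print(check_user_response(500, {"id": "7"}))    # four failures
```

Collecting failures into a list rather than raising on the first one is a common automation-suite choice: one run reports every broken field, which shortens the debug loop.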
Posted 5 days ago
10.0 years
0 Lacs
India
Remote
Job Title: MLOps Architect
Experience: 10+ Years
Location: Remote
Type: Contractual

About the Role: We are seeking experienced MLOps Engineers to join our growing team of AI/ML professionals. The ideal candidate will have deep experience in machine learning deployment pipelines, infrastructure automation, and monitoring, with a passion for building scalable and reliable systems. This is a remote full-time opportunity with a competitive compensation package.

Key Responsibilities:
- Design, develop, and maintain robust CI/CD pipelines for deploying machine learning models into production.
- Package and deploy models built with TensorFlow or PyTorch using Azure ML, AWS SageMaker, or GCP Vertex AI.
- Automate containerization using Docker, and manage deployment using Kubernetes (preferred).
- Collaborate with data scientists and engineers to streamline model training, evaluation, and deployment workflows.
- Implement monitoring and alerting systems using tools such as Prometheus, Grafana, and Azure Monitor.
- Ensure system reliability, availability, and performance optimization across environments.
- Maintain and manage version control using Git and CI/CD pipelines using GitLab CI/CD or equivalent.
- Work in a Linux environment, writing scripts to automate deployment, logging, and maintenance tasks.

Required Skills:
- Strong hands-on experience with Docker, CI/CD pipelines, and Linux-based deployments.
- Experience with cloud platforms: Azure ML (preferred), AWS, or GCP.
- Proficiency in Python scripting and automation.
- Familiarity with ML model deployment pipelines and infrastructure as code.
- Experience with monitoring tools like Grafana and Prometheus, and with cloud-native logging/alerting systems.
- Working knowledge of Kubernetes (preferred).

Ideal Candidate Profile:
- 6+ years of experience in MLOps, DevOps, or related backend infrastructure roles.
- Proven experience deploying TensorFlow/PyTorch models in production environments using Azure ML and GitLab CI/CD.
- Demonstrated expertise in monitoring, alerting, and performance tuning of ML systems.
- Strong problem-solving, communication, and documentation skills.
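The monitoring-and-alerting responsibility above usually reduces to a rule over a rolling window of metrics, the kind of rule one would encode as a Prometheus alert and chart in Grafana. A stdlib-only sketch of that rule's logic (class name, window size, and threshold are illustrative assumptions, not from any real deployment):

```python
from collections import deque

class LatencyMonitor:
    """Rolling-window monitor that flags when average inference latency
    breaches a threshold -- the same shape of rule one would typically
    express as a Prometheus alerting expression."""

    def __init__(self, window=5, threshold_ms=200.0):
        self.samples = deque(maxlen=window)
        self.threshold_ms = threshold_ms

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def should_alert(self):
        # Only alert once the window is full, to avoid firing on startup noise.
        if len(self.samples) < self.samples.maxlen:
            return False
        return sum(self.samples) / len(self.samples) > self.threshold_ms

monitor = LatencyMonitor(window=3, threshold_ms=100.0)
for ms in (80, 90, 95):
    monitor.record(ms)
print(monitor.should_alert())  # False: average is under the threshold
monitor.record(250)            # window slides to (90, 95, 250)
print(monitor.should_alert())  # True: average 145 ms breaches 100 ms
```

In production this logic lives in the monitoring stack rather than application code; the sketch only shows the windowed-average decision itself.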
Posted 5 days ago
8.0 years
0 Lacs
Kochi, Kerala, India
On-site
Role: Senior Azure DevOps Engineer
Open Positions: 1
Mandatory Skillset: Azure, CI/CD, Containerisation
Experience: 8+ Years (Relevant: 5+ Years)
Work Location: TVM/Kochi
Notice Period: Immediate only
Budget: Max. 20 LPA

We are seeking an experienced Azure DevOps Engineer to manage and optimize our cloud infrastructure, CI/CD pipelines, version control, and platform automation. The ideal candidate will be responsible for ensuring efficient deployments, security compliance, and operational reliability. This role requires collaboration with development, QA, and DevOps teams to enhance software delivery and infrastructure management.

Key Responsibilities

Infrastructure Management
- Design and manage Azure-based infrastructure for scalable and resilient applications.
- Implement and manage Azure Container Apps to support a microservices-based architecture.

CI/CD Pipelines
- Build and maintain CI/CD pipelines using GitHub Actions or equivalent tools.
- Automate deployment workflows to ensure quick and reliable application delivery.

Version Control and Collaboration
- Manage GitHub repositories, branching strategies, and pull request workflows.
- Ensure repository compliance and enforce best practices for source control.

Platform Automation
- Develop scripts and tooling to automate repetitive tasks and improve efficiency.
- Use Infrastructure as Code (IaC) tools like Terraform or Bicep for resource provisioning.

Monitoring and Optimization
- Set up monitoring and alerting for platform reliability using Azure Monitor and Application Insights.
- Analyze performance metrics and implement optimizations for cost and efficiency improvements.

Collaboration and Support
- Work closely with development, DevOps, and QA teams to streamline deployment processes.
- Troubleshoot and resolve issues in production and non-production environments.

GitHub Management
- Manage GitHub repositories, including permissions, branch policies, and pull request workflows.
- Implement GitHub Actions for automated testing, builds, and deployments.
- Enforce security compliance through GitHub Advanced Security features (e.g., secret scanning, Dependabot).
- Design and implement branching strategies to support collaborative software development.
- Maintain GitHub templates for issues, pull requests, and contributing guidelines.
- Monitor repository usage, optimize workflows, and ensure scalability of GitHub services.

Operational Support
- Maintain pipeline health and resolve incidents related to deployment and infrastructure.
- Address defects, validate certificates, and ensure platform consistency.
- Resolve issues with offline services, manage private runners, and apply security patches.
- Monitor page performance using tools like Lighthouse.
- Manage server maintenance, repository infrastructure, and access control.

Pipeline Development
- Develop reusable workflows for builds, deployments, SonarQube integrations, Jira integrations, release notes, notifications, and reporting.
- Implement branching and versioning management strategies.
- Identify pipeline failures and develop automated recovery mechanisms.
- Customize configurations for various projects (Mobile, Leapfrog, AEM/Hybris).

Testing Integration
- Implement automated testing, feedback loops, and quality gates.
- Manage SonarQube configurations, rulesets, and runner maintenance.
- Maintain the SonarQube EE deployment in Azure Container Apps.
- Configure and integrate security tools like Dependabot and Snyk with Jira.

Work Collaboration Integration
- Integrate Jira for automatic ticket generation, story validation, and release management.
- Configure Teams for API management, channels, and chat management.
- Set up email alerting mechanisms.
- Support IFS/CR process integration.

Required Skills & Qualifications
- Cloud Platforms: Azure (Azure Container Apps, Azure Monitor, Application Insights).
- CI/CD Tools: GitHub Actions, Terraform, Bicep.
- Version Control: GitHub repository management, branching strategies, pull request workflows.
- Security & Compliance: GitHub Advanced Security, Dependabot, Snyk.
- Automation & Scripting: Terraform, Bicep, shell scripting.
- Monitoring & Performance: Azure Monitor, Lighthouse.
- Testing & Quality Assurance: SonarQube, automated testing.
- Collaboration Tools: Jira, Teams, email alerting.

Preferred Qualifications
- Experience in microservices architecture and containerized applications.
- Strong understanding of DevOps methodologies and best practices.
- Excellent troubleshooting skills for CI/CD pipelines and infrastructure issues.

Skills: containerization, Jira, Azure, SLA, automation & scripting, Teams, security & compliance, CI/CD, Application Insights, GitHub repository management, Azure Monitor, DevOps, SonarQube, email alerting, Azure Kubernetes, Terraform, Git, Bicep, GitHub Actions, AKS
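The "branching and versioning management strategies" bullet above typically includes a version-bump step when a release pipeline tags a build. A minimal sketch of that step, assuming plain `MAJOR.MINOR.PATCH` semantic versions (real pipelines usually derive the bump type from branch names or commit messages):

```python
def bump_version(version, part):
    """Bump a semantic version string of the form 'MAJOR.MINOR.PATCH'.

    An illustrative sketch of the versioning step a release workflow
    might run before tagging; pre-release and build-metadata suffixes
    are deliberately not handled.
    """
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part!r}")

print(bump_version("1.4.2", "minor"))  # 1.5.0
print(bump_version("1.4.2", "patch"))  # 1.4.3
```

A reusable workflow would call this once, then feed the result to both the Git tag and the release-notes job so the two can never disagree.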
Posted 5 days ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role: DevOps Engineer / Java
Experience: 4-6 Years
Job Location: Noida, with 3 months of on-site training in Singapore
Functional Area: Java + DevOps + Kubernetes + AWS/Cloud

JOB DESCRIPTION:

About the Role: We are seeking a highly motivated DevOps Engineer to join our team and play a pivotal role in building and maintaining our cloud infrastructure. The ideal candidate will have a strong understanding of DevOps principles and practices, with a focus on AWS, Kubernetes, CI/CD pipelines, Docker, and Terraform. Working knowledge of Java is a must for design and development.

Responsibilities:
• Cloud Platforms: Design, build, and maintain our cloud infrastructure, primarily on AWS.
• Infrastructure as Code (IaC): Develop and manage IaC solutions using tools like Terraform to provision and configure cloud resources on AWS.
• Containerization: Implement and manage Docker containers and Kubernetes clusters for efficient application deployment and scaling.
• CI/CD Pipelines: Develop and maintain automated CI/CD pipelines using tools like Jenkins, Bitbucket CI/CD, or ArgoCD to streamline software delivery.
• Automation: Automate infrastructure provisioning, configuration management, and application deployment using tools like Terraform and Ansible.
• Monitoring and Troubleshooting: Implement robust monitoring and alerting systems to proactively identify and resolve issues.
• Collaboration: Work closely with development teams to understand their needs and provide solutions that align with business objectives.
• Security: Ensure compliance with security best practices and implement measures to protect our infrastructure and applications.

Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• Strong proficiency in AWS services (EC2, S3, VPC, IAM, etc.).
• Experience with Kubernetes and container orchestration.
• Expertise in Java coding and in CI/CD pipelines and tools (Jenkins, Bitbucket CI/CD, ArgoCD).
• Familiarity with Docker and containerization concepts.
• Experience with configuration management tools (Terraform, CloudFormation).
• Scripting skills (Java, Python, Bash).
• Understanding of networking and security concepts.
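The IaC responsibility in the listing above rests on one core idea: compute the difference between the desired state and the actual state, then apply only that difference. A toy sketch of the "plan" half of a Terraform-style plan/apply cycle, with resources modelled as plain dicts keyed by name (entirely illustrative, not real cloud objects):

```python
def plan(desired, actual):
    """Compute which resources to create, update, or destroy so that
    `actual` converges to `desired`.

    Mirrors the shape of an IaC plan: creations are names only in
    `desired`, destructions are names only in `actual`, and updates
    are names present in both whose attributes differ.
    """
    create = {k: v for k, v in desired.items() if k not in actual}
    destroy = sorted(k for k in actual if k not in desired)
    update = {k: v for k, v in desired.items()
              if k in actual and actual[k] != v}
    return {"create": create, "update": update, "destroy": destroy}

desired = {"web": {"size": "t3.small"}, "db": {"size": "t3.medium"}}
actual = {"web": {"size": "t3.micro"}, "cache": {"size": "t3.micro"}}
print(plan(desired, actual))
# {'create': {'db': {'size': 't3.medium'}}, 'update': {'web': {'size': 't3.small'}}, 'destroy': ['cache']}
```

Because the plan is computed before anything is changed, applying the same desired state twice yields an empty plan, which is what makes IaC runs idempotent.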
Posted 5 days ago
Containerization has become a crucial aspect of modern software development, and the job market for containerization roles in India is thriving. Companies across various industries are increasingly adopting containerization technologies like Docker and Kubernetes, creating a high demand for skilled professionals in this field.
The average salary range for containerization professionals in India varies based on experience levels. Entry-level positions can start at around INR 5-8 lakhs per annum, while experienced professionals can earn upwards of INR 15-20 lakhs per annum.
In the containerization domain, a typical career path may involve starting as a Junior Developer, progressing to a Senior Developer, and eventually moving up to a Tech Lead role. Continuous learning and hands-on experience with containerization tools are key to advancing in this field.
Apart from proficiency in containerization technologies, professionals in this field are often expected to have strong skills in networking, cloud computing, automation, and security. Knowledge of scripting languages like Python or Shell scripting can also be beneficial.
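As a small example of the kind of scripting task that comes up in containerization work, the snippet below splits a container image reference into registry, repository, and tag, following the common Docker convention that a registry is present only when the first path component looks like a hostname. A simplified sketch: digests (`@sha256:...`) are deliberately not handled.

```python
def parse_image_ref(ref):
    """Split a container image reference into (registry, repository, tag).

    Convention: the first path component is a registry only if it
    contains '.' or ':' (i.e. looks like a hostname, possibly with a
    port); the tag defaults to 'latest' when omitted.
    """
    registry = ""
    rest = ref
    first, _, remainder = ref.partition("/")
    if remainder and ("." in first or ":" in first):
        registry, rest = first, remainder
    repo, _, tag = rest.partition(":")
    return registry, repo, tag or "latest"

print(parse_image_ref("nginx"))                          # ('', 'nginx', 'latest')
print(parse_image_ref("registry.example.com/app:v1.2"))  # ('registry.example.com', 'app', 'v1.2')
print(parse_image_ref("localhost:5000/svc"))             # ('localhost:5000', 'svc', 'latest')
```

Scripts like this show up in CI jobs that retag images between registries or audit which tags are deployed where.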
As you explore opportunities in the containerization job market in India, remember to stay updated on the latest trends and technologies in this field. With the right skills and preparation, you can confidently pursue a rewarding career in containerization. Best of luck in your job search!