13.0 years
0 Lacs
India
On-site
Company Description 👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17,500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That's where you come in! Job Description REQUIREMENTS: Total experience of 13+ years. Strong working experience in machine learning, with a proven track record of delivering impactful solutions in NLP, machine vision, and AI. Proficiency in programming languages such as Python or R, and experience with data manipulation libraries (e.g., Pandas, NumPy). Strong understanding of statistical concepts and techniques, and experience applying them to real-world problems. Strong programming skills in Python, and proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX, as well as machine learning libraries such as scikit-learn. Proficient experience with Generative AI techniques such as GANs, VAEs, prompt engineering, and retrieval-augmented generation (RAG), and the ability to apply them to real-world problems. Exposure to graph-based models, knowledge graphs, and related query languages. Hands-on skills in data engineering and building robust ML pipelines. Experience with personalization engines and digital marketing platform integrations. Experience architecting scalable, cloud-based AI/ML solutions on AWS, Azure, or GCP, integrating data pipelines and AI services. Excellent communication skills and the ability to collaborate effectively with cross-functional teams. RESPONSIBILITIES: Writing and reviewing great quality code. Understanding the client's business use cases and technical requirements, and being able to convert them into a technical design that elegantly meets the requirements.
Mapping decisions to requirements and being able to translate them to developers. Identifying different solutions and being able to narrow down the best option that meets the client's requirements. Defining guidelines and benchmarks for NFR considerations during project implementation. Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers. Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed. Developing and designing the overall solution for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks to materialize it. Understanding and relating technology integration scenarios and applying these learnings in projects. Resolving issues raised during code review through exhaustive systematic analysis of the root cause, and being able to justify the decisions taken. Carrying out POCs to make sure the suggested design/technologies meet the requirements. Qualifications Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
Posted 2 days ago
11.0 years
0 Lacs
India
On-site
Company Description 👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17,500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That's where you come in! Job Description REQUIREMENTS: Total experience of 11+ years. Strong working experience in machine learning, with a proven track record of delivering impactful solutions in NLP, machine vision, and AI. Proficiency in programming languages such as Python or R, and experience with data manipulation libraries (e.g., Pandas, NumPy). Strong understanding of statistical concepts and techniques, and experience applying them to real-world problems. Strong working experience in AWS. Strong programming skills in Python, and proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX, as well as machine learning libraries such as scikit-learn. Familiarity with MLOps tools such as MLflow, Kubeflow, and Airflow. Proficient experience with Generative AI techniques such as GANs, VAEs, prompt engineering, and retrieval-augmented generation (RAG), and the ability to apply them to real-world problems. Hands-on skills in data engineering and building robust ML pipelines. Excellent communication skills and the ability to collaborate effectively with cross-functional teams. RESPONSIBILITIES: Writing and reviewing great quality code. Understanding the client's business use cases and technical requirements, and being able to convert them into a technical design that elegantly meets the requirements. Mapping decisions to requirements and being able to translate them to developers. Identifying different solutions and being able to narrow down the best option that meets the client's requirements.
Defining guidelines and benchmarks for NFR considerations during project implementation. Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers. Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed. Developing and designing the overall solution for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks to materialize it. Understanding and relating technology integration scenarios and applying these learnings in projects. Resolving issues raised during code review through exhaustive systematic analysis of the root cause, and being able to justify the decisions taken. Carrying out POCs to make sure the suggested design/technologies meet the requirements. Qualifications Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
Posted 2 days ago
13.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Company Description 👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17,500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That's where you come in! Job Description REQUIREMENTS: Total experience of 13+ years. Strong working experience in machine learning, with a proven track record of delivering impactful solutions in NLP, machine vision, and AI. Proficiency in programming languages such as Python or R, and experience with data manipulation libraries (e.g., Pandas, NumPy). Strong understanding of statistical concepts and techniques, and experience applying them to real-world problems. Strong programming skills in Python, and proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX, as well as machine learning libraries such as scikit-learn. Proficient experience with Generative AI techniques such as GANs, VAEs, prompt engineering, and retrieval-augmented generation (RAG), and the ability to apply them to real-world problems. Exposure to graph-based models, knowledge graphs, and related query languages. Hands-on skills in data engineering and building robust ML pipelines. Experience with personalization engines and digital marketing platform integrations. Experience architecting scalable, cloud-based AI/ML solutions on AWS, Azure, or GCP, integrating data pipelines and AI services. Excellent communication skills and the ability to collaborate effectively with cross-functional teams. RESPONSIBILITIES: Writing and reviewing great quality code. Understanding the client's business use cases and technical requirements, and being able to convert them into a technical design that elegantly meets the requirements.
Mapping decisions to requirements and being able to translate them to developers. Identifying different solutions and being able to narrow down the best option that meets the client's requirements. Defining guidelines and benchmarks for NFR considerations during project implementation. Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers. Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed. Developing and designing the overall solution for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks to materialize it. Understanding and relating technology integration scenarios and applying these learnings in projects. Resolving issues raised during code review through exhaustive systematic analysis of the root cause, and being able to justify the decisions taken. Carrying out POCs to make sure the suggested design/technologies meet the requirements. Qualifications Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
Posted 2 days ago
0.0 - 4.0 years
0 Lacs
Haryana
On-site
As a Data Science Engineer Intern at V-Patrol AI, a dynamic and forward-thinking cybersecurity organization, you will play a crucial role in developing and implementing cutting-edge machine learning and deep learning models. Your primary focus will be on creating scalable data pipelines and generating valuable insights in real time to counter cyber threats effectively. Your responsibilities will include designing and executing machine learning and deep learning models tailored for cybersecurity applications. Additionally, you will be involved in constructing and overseeing data pipelines for both structured and unstructured data sources such as network logs and threat feeds. Integrating APIs for model deployment and ensuring seamless real-time data flow will also be a key aspect of your role. Collaboration with software engineers, analysts, and stakeholders to support data-informed decision-making processes is essential. Monitoring model performance and optimizing models for production environments will be part of your routine tasks. Furthermore, you will be responsible for conveying your findings through informative dashboards, reports, and visualizations. To excel in this role, you should hold a Bachelor's or Master's degree in data science, computer science, statistics, or a related field. Proficiency in Python, pandas, scikit-learn, and TensorFlow/PyTorch is necessary. Hands-on experience with REST APIs, FastAPI/Flask, and data preprocessing techniques is crucial. Familiarity with various ML/DL models like XGBoost, LSTMs, and Transformers is expected. Exposure to cloud platforms such as AWS/GCP, ETL tools, Docker/Kubernetes, etc., would be advantageous. While not mandatory, prior experience in cybersecurity, particularly in areas like threat detection and incident response, would be beneficial.
In addition to the required skills and experience, expertise in adversarial machine learning and natural language processing (NLP) will be considered a significant advantage. Having a GitHub profile or a portfolio showcasing real-world projects in data science or cybersecurity will be a strong preference. This position is an internship opportunity that requires your presence at the designated work location.
Posted 2 days ago
3.0 - 4.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Skills: Python, PyTorch, TensorFlow, Time Series Analysis, Predictive Modeling, SQL. Location: Onsite, Bangalore, Karnataka. Company: VectorStack. Open Positions: 4 (2 Mid-Level with 3-4 years of experience, 2 Junior-Level with 1-2 years of experience). Employment Type: Full-time. Joining: Immediate (Urgent Requirement). About VectorStack VectorStack is a technology solutions provider that drives digital transformation and enhances business performance. With domain expertise across Tech Advancement, Design Innovation, Product Evolution, and Business Transformation, we deliver tailored strategies that yield measurable results. We serve sectors like Retail Tech, Ad Tech, FinTech, and EdTech, helping businesses unlock their full potential. Learn more at www.vectorstack.co. Role Overview We are urgently seeking dynamic and skilled AI/ML Engineers to join our growing team in Bangalore. You will work on a variety of AI and machine learning projects, ranging from building intelligent systems to deploying scalable models in real-time applications.
Responsibilities Develop, train, and deploy machine learning models for real-world applications. Handle data preprocessing, feature engineering, and model optimisation. Collaborate with cross-functional teams (Data Science, Engineering, Product). Maintain and monitor model performance in production environments. Document model workflows, experiments, and outcomes clearly. Requirements Mid-Level AI/ML Engineer (3-4 years experience): Strong proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, XGBoost). Experience deploying ML models via APIs (Flask, FastAPI). Working knowledge of cloud platforms (AWS, Azure, or GCP) and CI/CD for ML (MLOps). Familiarity with version control (Git), Docker, and data pipeline frameworks. Prior experience handling end-to-end AI/ML projects. Junior AI/ML Engineer (1-2 years experience): Hands-on experience in building and training ML models. Good understanding of data handling, model evaluation, and algorithms. Familiar with Python, NumPy, pandas, scikit-learn. Exposure to deep learning frameworks (TensorFlow or PyTorch) is a plus. Willingness to learn and grow within a fast-paced, collaborative team. Preferred Skills (Good To Have) Experience with Computer Vision, NLP, or Recommendation Systems. Familiarity with tools like MLflow, Airflow, or Kubernetes. Public GitHub or Kaggle profile showing relevant projects. What We Offer Competitive compensation. Opportunity to work on cutting-edge AI projects. Career growth in a tech-driven environment. Health and wellness benefits. A collaborative team culture based out of Bangalore.
Posted 2 days ago
3.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Skills: Python, PyTorch, TensorFlow, Time Series Analysis, Predictive Modeling, SQL. Location: Onsite, Bangalore, Karnataka. Company: VectorStack. Open Positions: 4 (2 Mid-Level with 3-4 years of experience, 2 Junior-Level with 1-2 years of experience). Employment Type: Full-time. Joining: Immediate (Urgent Requirement). About VectorStack VectorStack is a technology solutions provider that drives digital transformation and enhances business performance. With domain expertise across Tech Advancement, Design Innovation, Product Evolution, and Business Transformation, we deliver tailored strategies that yield measurable results. We serve sectors like Retail Tech, Ad Tech, FinTech, and EdTech, helping businesses unlock their full potential. Learn more at www.vectorstack.co. Role Overview We are urgently seeking dynamic and skilled AI/ML Engineers to join our growing team in Bangalore. You will work on a variety of AI and machine learning projects, ranging from building intelligent systems to deploying scalable models in real-time applications.
Responsibilities Develop, train, and deploy machine learning models for real-world applications. Handle data preprocessing, feature engineering, and model optimisation. Collaborate with cross-functional teams (Data Science, Engineering, Product). Maintain and monitor model performance in production environments. Document model workflows, experiments, and outcomes clearly. Requirements Mid-Level AI/ML Engineer (3-4 years experience): Strong proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, XGBoost). Experience deploying ML models via APIs (Flask, FastAPI). Working knowledge of cloud platforms (AWS, Azure, or GCP) and CI/CD for ML (MLOps). Familiarity with version control (Git), Docker, and data pipeline frameworks. Prior experience handling end-to-end AI/ML projects. Junior AI/ML Engineer (1-2 years experience): Hands-on experience in building and training ML models. Good understanding of data handling, model evaluation, and algorithms. Familiar with Python, NumPy, pandas, scikit-learn. Exposure to deep learning frameworks (TensorFlow or PyTorch) is a plus. Willingness to learn and grow within a fast-paced, collaborative team. Preferred Skills (Good To Have) Experience with Computer Vision, NLP, or Recommendation Systems. Familiarity with tools like MLflow, Airflow, or Kubernetes. Public GitHub or Kaggle profile showing relevant projects. What We Offer Competitive compensation. Opportunity to work on cutting-edge AI projects. Career growth in a tech-driven environment. Health and wellness benefits. A collaborative team culture based out of Bangalore.
Posted 2 days ago
3.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You are a Senior Data Science Analyst with 3 to 8 years of experience, based in Chennai. Your primary responsibility will involve building complex risk and financial models. It is crucial for you to possess analytical thinking, problem-solving skills, and effective communication abilities to engage with stakeholders at all levels. Experience in developing credit risk models (PD, EAD, LGD) and working on model implementation is considered advantageous. Your track record of navigating complex business issues and aligning diverse perspectives will be highly valued in this role. You should have a background of 3 to 8 years in Risk & Marketing or financial services, coupled with a degree in Computer Science, Mathematics, or Statistics. Proficiency in Machine Learning, Data Science, and Risk Analytics concepts is essential. Additionally, a strong grasp of statistical and quantitative techniques, as well as proficiency in tools like SAS, R, Python, SQL, and Excel, will be beneficial for this position. This is a permanent role, and the updated date for this job posting is 25-06-2025. The Job ID is J_3789, and the location is Chennai, Tamil Nadu, India.
Posted 2 days ago
9.0 - 13.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. We are looking to hire individuals with strong AI-enabled automation skills who are interested in applying AI in the process automation space using technologies such as Azure, AI, ML, Deep Learning, NLP, GenAI, Large Language Models (LLMs), RAG, Vector DB, Graph DB, and Python. Responsibilities: - Development and implementation of AI-enabled automation solutions, ensuring alignment with business objectives. - Design and deploy Proof of Concepts (POCs) and Points of View (POVs) across various industry verticals to demonstrate the potential of AI-enabled automation applications. - Ensure seamless integration of optimized solutions into the overall product or system. - Collaborate with cross-functional teams to understand requirements, integrate solutions into cloud environments (Azure, GCP, AWS, etc.), and ensure alignment with business goals and user needs. - Educate the team on best practices and keep updated on the latest tech advancements to bring innovative solutions to the project. Technical Skills Requirements: - 9 to 13 years of relevant professional experience. - Proficiency in Python and frameworks like PyTorch, TensorFlow, Hugging Face Transformers. - Strong foundation in ML algorithms, feature engineering, and model evaluation (must). - Strong foundation in Deep Learning, Neural Networks, RNNs, CNNs, LSTMs, Transformers (BERT, GPT), and NLP (must). - Experience in GenAI technologies such as LLMs (GPT, Claude, LLaMA), prompting, and fine-tuning. - Experience with LangChain, LlamaIndex, LangGraph, AutoGen, or CrewAI (agentic frameworks). - Knowledge of retrieval-augmented generation (RAG) and Knowledge Graph RAG.
- Experience with multi-agent orchestration, memory, and tool integrations. - Experience implementing MLOps practices and tools (CI/CD for ML, containerization, orchestration, model versioning, and reproducibility) (good to have). - Experience with cloud platforms (AWS, Azure, GCP) for scalable ML model deployment. - Good understanding of data pipelines, APIs, and distributed systems. - Build observability into AI systems: latency, drift, and performance metrics. - Strong written and verbal communication, presentation, client service, and technical writing skills in English for both technical and business audiences. - Strong analytical, problem-solving, and critical thinking skills. - Ability to work under tight timelines for multiple project deliveries. What we offer: At EY GDS, we support you in achieving your unique potential both personally and professionally. We give you stretching and rewarding experiences that keep you motivated, working in an atmosphere of integrity and teaming with some of the world's most successful companies. While we encourage you to take personal responsibility for your career, we support you in your professional development in every way we can. You enjoy the flexibility to devote time to what matters to you, in your business and personal lives. At EY, you can be who you are and express your point of view, energy, and enthusiasm, wherever you are in the world. It's how you make a difference. EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: ML model development; deep-dive analyses; data exploration and insights. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications: Undergraduate degree or equivalent experience; Python; SQL; Cloud; ML algorithms; Gen AI. At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes.
We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 2 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview TekWissen is a global workforce management provider throughout India and many other countries in the world. The below client is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place – one that benefits lives, communities and the planet. Job Title: Specialty Development Practitioner Location: Chennai Work Type: Hybrid Position Description At the client's Credit Company, we are modernizing our enterprise data warehouse in Google Cloud to enhance data, analytics, and AI/ML capabilities, improve customer experience, ensure regulatory compliance, and boost operational efficiencies. As a GCP Data Engineer, you will integrate data from various sources into novel data products. You will build upon existing analytical data, including merging historical data from legacy platforms with data ingested from new platforms. You will also analyze and manipulate large datasets, activating data assets to enable enterprise platforms and analytics within GCP. You will design and implement the transformation and modernization on GCP, creating scalable data pipelines that land data from source applications, integrate into subject areas, and build data marts and products for analytics solutions. You will also conduct deep-dive analysis of Current State Receivables and Originations data in our data warehouse, performing impact analysis related to the client's Credit North America's modernization and providing implementation solutions. Moreover, you will partner closely with our AI, data science, and product teams, developing creative solutions that build the future for the client's Credit. Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform or other cloud environments is a must.
We are looking for candidates with a broad set of analytical and technology skills across these areas who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and 3rd-party technologies for deployment on Google Cloud Platform. Skills Required: BigQuery, Dataflow, Dataform, Data Fusion, Dataproc, Cloud Composer, Airflow, Cloud SQL, Compute Engine, Google Cloud Platform. Experience Required: GCP Data Engineer certified. Successfully designed and implemented data warehouses and ETL processes for over five years, delivering high-quality data solutions. 5+ years of complex SQL development experience. 2+ years of experience with programming languages such as Python, Java, or Apache Beam. Experienced cloud engineer with 3+ years of GCP expertise, specializing in managing cloud infrastructure and applications into production-scale solutions. BigQuery, Dataflow, Dataform, Data Fusion, Dataproc, Cloud Composer, Airflow, Cloud SQL, Compute Engine, Terraform, Tekton, PostgreSQL, PySpark, Python, APIs, Cloud Build, App Engine, Apache Kafka, Pub/Sub, AI/ML, Kubernetes. Experience Preferred: In-depth understanding of GCP's underlying architecture and hands-on experience with crucial GCP services, especially those related to data processing (batch/real-time), leveraging Terraform, BigQuery, Dataflow, Pub/Sub, Dataform, Astronomer, Data Fusion, Dataproc, PySpark, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, and App Engine, alongside storage including Cloud Storage. DevOps tools such as Tekton, GitHub, Terraform, Docker. Expert in designing, optimizing, and troubleshooting complex data pipelines. Experience developing with microservice architecture from a container orchestration framework. Experience in designing pipelines and architectures for data processing.
Passion and self-motivation to develop, experiment with, and implement state-of-the-art data engineering methods and techniques. Self-directed; works independently with minimal supervision and adapts to ambiguous environments. Evidence of a proactive problem-solving mindset and willingness to take the initiative. Strong prioritization, collaboration, and coordination skills, and the ability to simplify and communicate complex ideas with cross-functional teams and all levels of management. Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity. Data engineering or development experience gained in a regulated financial environment. Experience in coaching and mentoring Data Engineers. Experience with project management tools like Atlassian JIRA. Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment. Experience with data security, governance, and compliance best practices in the cloud. Experience with AI solutions or platforms that support AI solutions. Experience using data science concepts on production datasets to generate insights. Experience Range 5+ years Education Required Bachelor's Degree TekWissen® Group is an equal opportunity employer supporting workforce diversity.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
We have an exciting and rewarding opportunity for you to impact your career and provide an adventure where you can push the limits of what's possible. Our team focuses on applying GenAI, ML, and statistical models to solve business problems in the Global Wealth Management space. As a Lead Software Engineer - Infrastructure Cloud at JPMorgan Chase within the Asset and Wealth Management Technology Team, you will collaborate with development teams to enhance the developer experience, delivering end-to-end cutting-edge solutions in the form of cloud-native microservices architecture applications. You will be involved in the design and architecture of solutions, focusing on the entire SDLC lifecycle stages. Our team works in tribes and squads, allowing you to move between projects based on your strengths and interests, making a significant impact on our clients and business partners worldwide. Job responsibilities Collaborate with development teams to enhance the developer experience, providing tools and infrastructure that support agile methodologies and continuous integration/continuous deployment. Execute software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches. Create secure and high-quality production code to deploy infrastructure. Develop and maintain automated pipelines for model/product deployment, ensuring scalability, reliability, and efficiency. Produce architecture and design artifacts for complex applications, ensuring design constraints are met by software code development. Gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets for continuous improvement of software applications and systems. Identify hidden problems and patterns in data, using insights to drive improvements to coding hygiene and system architecture. Contribute to software engineering communities of practice and events that explore new and emerging technologies.
Add to team culture of diversity, equity, inclusion, and respect. Required qualifications, capabilities, and skills Formal training or certification in software engineering concepts and 5+ years of applied experience. Hands-on practical experience in system design, application development, testing, and operational stability. Proficient with public cloud services in production (AWS or other) and in the Python scripting language. Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages (Python) and a database querying language. Experience with Infrastructure as Code (Terraform or other). Experience in applied AI/ML engineering, with a track record of deploying business-critical GenAI and machine learning models in production. Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, mobile, etc.). Ability to tackle design and functionality problems independently. Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security. Experience in architecting, supporting, and implementing advanced, strategic CI/CD migrations. Strong collaboration skills to work effectively with cross-functional teams, communicate complex concepts, and contribute to interdisciplinary projects. Preferred qualifications, capabilities, and skills Experience with any of GitHub, GitHub Actions, Artifactory, Terraform Cloud, Slack, Grafana, SonarQube is considered a plus. Experience in designing and implementing AI/ML/LLM/GenAI pipelines. Stay informed about the latest trends and advancements in AI/ML/LLM/GenAI research, implement cutting-edge techniques, and leverage external APIs for enhanced functionality.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As an Azure AI Engineer at Optimum Data Analytics, you will be a key member of our team in Pune, contributing to the design and deployment of cutting-edge AI/ML solutions using Azure cloud services. You should have a minimum of 5 years of hands-on experience in the field and hold either the Microsoft Azure AI Engineer Associate or Microsoft Azure Data Scientist Associate certification. Your role will involve collaborating with business stakeholders, data engineers, and product teams to deliver scalable and production-ready AI solutions that drive business value and growth. Key Responsibilities - Design and deploy AI/ML models using Azure AI/ML Studio, Azure Machine Learning, and Azure Cognitive Services. - Implement and manage data pipelines for model training workflows and ML lifecycle within the Azure ecosystem. - Work closely with business stakeholders to gather requirements, analyze data, and provide predictive insights. - Collaborate with data engineers and product teams to ensure the delivery of scalable and production-ready AI solutions. - Establish and maintain best practices for model monitoring, versioning, governance, and responsible AI practices. - Contribute to solution documentation and technical architecture. - Design AI agents using Azure stack tools such as Autogen, PromptFlow, ML Studio, AI Foundry, AI Search, LangChain, and LangGraph. Required Skills & Qualifications - Minimum of 5 years of hands-on experience in AI/ML, data science, or machine learning engineering. - Mandatory Certification: Microsoft Azure AI Engineer Associate OR Microsoft Azure Data Scientist Associate. - Strong knowledge of Azure services including Azure Machine Learning, Cognitive Services, Azure Functions, AI Foundry, and Azure Storage. - Proficiency in Python and experience with ML libraries such as scikit-learn, TensorFlow, PyTorch, or similar. - Solid understanding of data science lifecycle, model evaluation, and performance optimization. 
- Experience with version control tools like Git and deployment through CI/CD pipelines. - Excellent problem-solving and communication skills. - Strong experience in building AI Agents. - Familiarity with LLMs, prompt engineering, or GenAI tools (Azure OpenAI, Hugging Face, Vector Databases, etc.). Good To Have - Experience with Power BI or other data visualization tools. - Exposure to MLOps tools and practices. If you are passionate about leveraging AI/ML to drive business transformation and meet organizational objectives, we encourage you to apply for this position and be part of our dynamic team at Optimum Data Analytics.
Posted 2 days ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description Amazon is seeking a candidate to identify, develop and integrate innovative solutions and programs that lead to improvements that redefine the standards for customer experience in our North American Transportation Network. Amazon transportation encompasses all of the operations that deliver shipments from our fulfillment centers and third-party locations to customers worldwide. The TESS team in Hyderabad is looking for an innovative, hands-on, and customer-obsessed BIE for its Analytics team. This role requires an individual with excellent statistical and analytical abilities, strong attention to detail, deep knowledge of business intelligence solutions (like Microsoft Excel, SQL, Tableau, Redshift) as well as business acumen and ability to communicate clearly and collaborate with both business owners and other analytics teams. The candidate must be able to roll up his or her sleeves and work directly with the data. The ideal candidate should be endlessly curious, passionate about getting relevant insights from data, be a self-starter comfortable with ambiguity, and with an ability to work in a fast-paced and entrepreneurial environment. Primary Responsibilities Include Defining the problem and building analytical frameworks to help operations streamline the process Identifying gaps in the existing process by analyzing data, liaising with relevant team(s) to plug them, and analyzing data and metrics and sharing updates with internal teams. Ability to translate business requirements into analysis, collect and analyze data, and make recommendations back to the business. The candidate will also continuously learn new systems, tools, and industry best practices to help design new studies and build new tools that help our team automate and accelerate analytics. 
Leverage AI/ML skills to build forecasting models. As a BIE I on the Transportation Engineering Systems Team, you will own business stakeholder engagement, focus on delivering new or streamlining existing processes and programs, and create reports/dashboards to support network connectivity, defect reduction, and improvements in productivity, and shape mid- to long-term planning and prioritization for supported business groups. You’ll work cross-functionally with a broad range of business stakeholders to define detailed requirements and implement process/system-based solutions for the Amazon Fulfillment/Transportation Network. Additionally, this role will require creating operational structure and providing data-driven insights for strategic direction in determining investment prioritization for our business analytics and tech partners. You will utilize a range of analytical tools and techniques to build business reports, generate dashboards, create metrics to measure success, and integrate upcoming AI/ML tech features into the workflow while driving process automation. Basic Qualifications 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Experience with data visualization using Tableau, Quicksight, or similar tools Experience with one or more industry analytics visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, PowerBI) and statistical methods (e.g. t-test, Chi-squared) Experience with a scripting language (e.g., Python, Java, or R) Preferred Qualifications Master's degree or advanced technical degree Knowledge of data modeling and data pipeline design Experience with statistical analysis, correlation analysis Our inclusive culture empowers Amazonians to deliver the best results for our customers. 
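As an illustration of the statistical methods named in the qualifications above (e.g., a t-test), the two-sample Welch's t statistic can be computed in a few lines of plain Python. This is a generic sketch for orientation only, not part of the job requirements, and the sample data is made up:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a, mean_b = sum(sample_a) / n_a, sum(sample_b) / n_b
    # Unbiased sample variances (divide by n - 1)
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)

# Example: compare delivery times (hours) on two hypothetical routes
t_stat = welch_t([2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
```

In practice an analyst would reach for `scipy.stats.ttest_ind(..., equal_var=False)`, which also returns a p-value; the manual version above just makes the formula explicit.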
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI HYD 13 SEZ Job ID: A3050062
Posted 2 days ago
0.0 - 31.0 years
1 - 3 Lacs
Vijaya Nagar, Bengaluru/Bangalore
On-site
Job Title: Full Stack Developer (Python/Flask) Company: AY Labs Location: [Insert Location or "Remote"] Experience Level: 0–2 Years Employment Type: Full-time About AY Labs: AY Labs is a technology-driven company focused on building scalable and intelligent solutions. We work on cutting-edge projects in [briefly insert the domain—e.g., AI/ML, data platforms, SaaS—customize if known], offering a collaborative environment for young engineers to grow and innovate. Role Overview: We’re looking for a Full Stack Developer to join our development team. You'll be working primarily with Python (Flask) on the backend and JavaScript on the frontend to build responsive, scalable web applications. Ideal for early-career developers eager to contribute to real-world projects and grow their technical skills. Key Responsibilities: Design, develop, test, and deploy web applications using Flask and JavaScript. Implement and manage APIs, databases, and frontend interfaces. Work with deployment tools like Host Liger to manage production environments. Collaborate with cross-functional teams on new feature development and bug fixes. Write clean, maintainable code and contribute to code reviews. Stay updated with new tools, frameworks, and best practices. Requirements: 0–2 years of experience in full stack development. Strong understanding of Python and the Flask framework. Hands-on experience with JavaScript, HTML, and CSS. Familiarity with Host Liger deployment or similar hosting platforms. Understanding of RESTful APIs, version control (Git), and basic DevOps concepts. Good problem-solving skills and a willingness to learn. Nice to Have: Knowledge of modern JavaScript frameworks (e.g., React, Vue). Experience working with SQL or NoSQL databases. Familiarity with CI/CD pipelines and automated testing. What We Offer: Opportunity to work on meaningful projects from day one. Supportive mentorship and skill development. Flexible work culture and remote-friendly setup. 
Exposure to a startup environment with real impact.
Posted 2 days ago
2.0 - 31.0 years
3 - 4 Lacs
Borivali East, Mumbai Metropolitan Region
On-site
Develop and deploy AI/ML models for business applications. Work on natural language processing (NLP), computer vision, and predictive analytics projects. Collaborate with data engineers and analysts to prepare datasets for training and testing models. Evaluate and fine-tune models for accuracy, efficiency, and scalability. Stay updated with the latest AI research and integrate new techniques into projects. Create AI-powered tools to automate repetitive tasks and improve productivity. Ensure ethical AI usage, following data privacy and security guidelines.
Posted 2 days ago
7.0 - 11.0 years
0 Lacs
noida, uttar pradesh
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. We are seeking an experienced Oracle Application Solution Architect (Senior Manager) to lead the design, implementation, and optimization of Oracle Cloud and on-premises applications. The ideal candidate will have deep expertise in Oracle application suites across ERP, HCM, SCM, EPM, PPM, or CX and a strong understanding of enterprise business processes. This role requires strategic leadership, collaboration with senior stakeholders, technical teams, and end-users to drive successful enterprise-wide Oracle application deployments. Lead the end-to-end architecture of Oracle applications, ensuring alignment with business objectives and IT strategy. Oversee the implementation, configuration, and customization of Oracle Cloud and on-premises applications, including ERP, EPM, HCM, SCM, PPM, or CX modules. Define and oversee data migration strategies and integrations with third-party applications using Oracle Integration Cloud (OIC) and middleware solutions. Partner with C-suite executives, business leaders, IT teams, and external vendors to align technology strategies with business goals. Ensure system security, access controls, and compliance with regulatory requirements. Monitor system performance and provide recommendations for optimization and best practices. Provide strategic guidance and mentorship to implementation teams, architects, and senior consultants. Stay updated on the latest Oracle Cloud updates, industry trends, and best practices to drive innovation. Experience: 15+ years of experience in Oracle applications, with at least 7 years in Oracle Cloud solutions. 
Expertise in one or more of the Oracle application modules: ERP, HCM, SCM, or CX. Strong knowledge of Oracle Integration Cloud (OIC), Oracle PaaS, and middleware solutions. Hands-on experience with data migration, API-based integrations, and security configurations. Deep understanding of enterprise business processes in finance, HR, supply chain, or customer experience domains. Experience leading multi-country, multi-currency, and global Oracle application implementations. Strong problem-solving and analytical skills with the ability to troubleshoot complex issues. Excellent communication, leadership, and stakeholder management skills. Oracle Cloud certifications (preferred). Competencies / Skills: Experience in Agile methodologies and DevOps for Oracle Cloud implementations. Knowledge of emerging technologies such as AI, ML, and Blockchain in enterprise applications. Prior experience in industries such as manufacturing, retail, banking, or healthcare. Strong project planning, risk management, and stakeholder communication skills. Ability to manage complex projects with priorities while meeting deadlines and budgets. Advanced analytical thinking, with a focus on integrating data-driven insights and predictive models into financial planning processes. Experience in diagnosing and solving technical and process challenges efficiently. Excellent communication skills with the ability to present complex concepts to diverse audiences. Strong relationship-building skills to foster trust and credibility with clients and internal teams. Adaptability to evolving technologies, including Oracle Cloud updates and AI-driven solutions. Commitment to continuous improvement, learning, and innovation in enterprise performance management. Must possess a valid passport and be willing to travel for client site work (domestic and international). Education: Graduate from a reputed educational institution; MBA or equivalent preferred. Oracle certifications in EPBCS and Narrative Reporting. 
Additional certifications in project management (e.g., PMP, PRINCE, TOGAF, Agile) and AI/ML applications are a plus. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 We’re Hiring: AI/ML Engineer 🧑‍💻 Looking for a passionate AI/ML Engineer (0–2 YOE) ready to solve real-world problems using cutting-edge ML & LLM technologies. 📌 What We Expect Strong in Python, ML/DL, and model training Good grasp of OOPs, SQL, and AI system design Experience with RAG, LangChain, LLMs is a big plus Take initiative, own delivery, and maintain clear and timely communication Be self-motivated, hungry to learn, and eager to prove themselves Collaborate with cross-functional teams to build intelligent automation systems for clients Be passionate about building and enhancing innovative AI solutions Fluent in English & able to handle client interactions Self-learner, problem-solver, and career-focused Open to flexible hours & deadline-driven projects 🏆 Bonus if you’re from Tier-1/Tier-2 colleges and have a strong portfolio 📩 Apply now: hr@austere.co.in 💬 DM if you’re ready to build impactful AI with us! #Hiring #AIJobs #MLJobs #Python #LangChain #RAG #LLM #SQL #OOPs
Posted 2 days ago
5.0 years
0 Lacs
Haryana, India
On-site
Job Description About TaskUs: TaskUs is a provider of outsourced digital services and next-generation customer experience to fast-growing technology companies, helping its clients represent, protect and grow their brands. Leveraging a cloud-based infrastructure, TaskUs serves clients in the fastest-growing sectors, including social media, e-commerce, gaming, streaming media, food delivery, ride-sharing, HiTech, FinTech, and HealthTech. The People First culture at TaskUs has enabled the company to expand its workforce to approximately 45,000 employees globally. Presently, we have a presence in twenty-three locations across twelve countries, which include the Philippines, India, and the United States. It started with one ridiculously good idea to create a different breed of Business Process Outsourcing (BPO)! We at TaskUs understand that achieving growth for our partners requires a culture of constant motion, exploring new technologies, being ready to handle any challenge at a moment's notice, and mastering consistency in an ever-changing world. What We Offer: At TaskUs, we prioritize our employees' well-being by offering competitive industry salaries and comprehensive benefits packages. Our commitment to a People First culture is reflected in the various departments we have established, including Total Rewards, Wellness, HR, and Diversity. We take pride in our inclusive environment and positive impact on the community. Moreover, we actively encourage internal mobility and professional growth at all stages of an employee's career within TaskUs. Join our team today and experience firsthand our dedication to supporting People First. Job Description Summary We are seeking a Data Scientist with deep expertise in modern AI/ML technologies to join our innovative team. This role combines cutting-edge research in machine learning, deep learning, and generative AI with practical full-stack cloud development skills. 
You will be responsible for architecting and implementing end-to-end AI solutions, from data engineering pipelines to production-ready applications leveraging the latest in agentic AI and large language models. Job Description Key Responsibilities AI/ML Development & Research Design, develop, and deploy advanced machine learning and deep learning models for complex business problems Implement and optimize Large Language Models (LLMs) and Generative AI solutions Build agentic AI systems with autonomous decision-making capabilities Conduct research on emerging AI technologies and their practical applications Perform model evaluation, validation, and continuous improvement Cloud Infrastructure & Full-Stack Development Architect and implement scalable cloud-native ML/AI solutions on AWS, Azure, or GCP Develop full-stack applications integrating AI models with modern web technologies Build and maintain ML pipelines using cloud services (SageMaker, ML Engine, etc.) Implement CI/CD pipelines for ML model deployment and monitoring Design and optimize cloud infrastructure for high-performance computing workloads Data Engineering & Database Management Design and implement data pipelines for large-scale data processing Work with both SQL and NoSQL databases (PostgreSQL, MongoDB, Cassandra, etc.) 
Optimize database performance for ML workloads and real-time applications Implement data governance and quality assurance frameworks Handle streaming data processing and real-time analytics Leadership & Collaboration Mentor junior data scientists and guide technical decision-making Collaborate with cross-functional teams including product, engineering, and business stakeholders Present findings and recommendations to technical and non-technical audiences Lead proof-of-concept projects and innovation initiatives Required Qualifications Education & Experience Master's or PhD in Computer Science, Data Science, Statistics, Mathematics, or related field 5+ years of hands-on experience in data science and machine learning 3+ years of experience with deep learning frameworks and neural networks 2+ years of experience with cloud platforms and full-stack development Technical Skills - Core AI/ML Machine Learning: Scikit-learn, XGBoost, LightGBM, advanced ML algorithms Deep Learning: TensorFlow, PyTorch, Keras, CNN, RNN, LSTM, Transformers Large Language Models: GPT, BERT, T5, fine-tuning, prompt engineering Generative AI: Stable Diffusion, DALL-E, text-to-image, text generation Agentic AI: Multi-agent systems, reinforcement learning, autonomous agents Technical Skills - Development & Infrastructure Programming: Python (expert), R, Java/Scala, JavaScript/TypeScript Cloud Platforms: AWS (SageMaker, EC2, S3, Lambda), Azure ML, or Google Cloud AI Databases: SQL (PostgreSQL, MySQL), NoSQL (MongoDB, Cassandra, DynamoDB) Full-Stack Development: React/Vue.js, Node.js, FastAPI, Flask, Docker, Kubernetes MLOps: MLflow, Kubeflow, Model versioning, A/B testing frameworks Big Data: Spark, Hadoop, Kafka, streaming data processing Preferred Qualifications Experience with vector databases and embeddings (Pinecone, Weaviate, Chroma) Knowledge of LangChain, LlamaIndex, or similar LLM frameworks Experience with model compression and edge deployment Familiarity with distributed computing and 
parallel processing Experience with computer vision and NLP applications Knowledge of federated learning and privacy-preserving ML Experience with quantum machine learning Expertise in MLOps and production ML system design Key Competencies Technical Excellence Strong mathematical foundation in statistics, linear algebra, and optimization Ability to implement algorithms from research papers Experience with model interpretability and explainable AI Knowledge of ethical AI and bias detection/mitigation Problem-Solving & Innovation Strong analytical and critical thinking skills Ability to translate business requirements into technical solutions Creative approach to solving complex, ambiguous problems Experience with rapid prototyping and experimentation Communication & Leadership Excellent written and verbal communication skills Ability to explain complex technical concepts to diverse audiences Strong project management and organizational skills Experience mentoring and leading technical teams How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs. DEI: In TaskUs we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know. We invite you to explore all TaskUs career opportunities and apply through the provided URL https://www.taskus.com/careers/ . TaskUs is proud to be an equal opportunity workplace and is an affirmative action employer. 
We celebrate and support diversity; we are committed to creating an inclusive environment for all employees. TaskUs' People First culture thrives on it for the benefit of our employees, our clients, our services, and our community. Req Id: R_2507_10290_0 Posted At: Thu Jul 31 2025 00:00:00 GMT+0000 (Coordinated Universal Time)
Posted 2 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
ARTIFICIAL INTELLIGENCE & MACHINE LEARNING INTERNSHIP - RESEARCH TO APPLICATION TRACK Role: Internship Associate (Unpaid) Location: Remote / Hybrid Duration: Project-based (with flexible engagement timelines) Industry: Artificial Intelligence | Deep Learning | Computer Vision | Generative AI We are hiring interns for a research-backed, industry-driven AI development program, focused on the practical implementation of Machine Learning, Natural Language Processing (NLP), Computer Vision, and Generative AI using Transformer models. Ideal for students pursuing graduation in Artificial Intelligence, Machine Learning, Computer Science, Data Science, or Deep Learning, this program offers guided exposure to both technical innovation and commercial feasibility through real-world project deployment. This is not a purely academic internship — it's a hands-on, solutions-oriented engagement, where research meets application. Candidates will contribute to AI tools, generative model engineering, vision-based systems, and intelligent application development under professional mentorship. 
Responsibilities Collaborate on applied AI development projects using ML, CV, NLP, and generative algorithms Assist in designing transformer-based architectures, data pipelines, and evaluation protocols Perform exploratory data analysis, model optimization, and multi-language prompt engineering Implement and iterate prototypes for computer vision systems and generative use-cases Participate in structured training sessions to enhance domain-specific capabilities Document technical progress, logic flow, and system behavior for internal validation Engage in cross-functional brainstorms addressing real-world technology use-cases Uphold clear and professional communication across documentation and team meetings Qualifications Students or fresh graduates with an academic background in AI, ML, Deep Learning, NLP, or Computer Science Solid grounding in ML principles, with exposure to libraries like TensorFlow, PyTorch, OpenCV Working knowledge of programming in Python; familiarity with MATLAB or C++ is beneficial Experience or interest in prompt engineering for Generative AI tools and LLMs Strong English communication skills — essential for task framing and prompt clarity Self-motivated with a problem-solving mindset and intellectual curiosity Comfortable working on remote platforms and version control tools like GitHub Previous academic or hobby projects in AI/ML will be an advantage Application Instructions To apply, send your resume and a brief statement of interest to: jobs.samvinmaya@gmail.com Subject line: Application – AI & ML Internship Program Applications will be considered on a rolling basis. Note If you have any questions or thoughts about the internship or project scope, feel free to include them in your email along with your resume.
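For readers new to the prompt engineering mentioned in this listing: a few-shot prompt is often just careful string assembly before the text is sent to a model. A minimal, model-agnostic sketch follows; the function and the sentiment examples are purely illustrative, not part of the internship program:

```python
def make_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, worked examples, then the new input."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{task}\n\n{shots}\n\nInput: {query}\nOutput:"

prompt = make_prompt(
    "Classify the sentiment of each input as positive or negative.",
    [("loved the build quality", "positive"), ("battery died in an hour", "negative")],
    "screen is stunning",
)
```

The resulting string would then be passed to an LLM API call; frameworks such as LangChain wrap this same pattern in reusable prompt templates.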
Posted 2 days ago
6.0 years
0 Lacs
India
Remote
📍 Location: Remote (India preferred) 🧠 Experience: 6+ Years (Python + AI/ML with exposure to GenAI) Job Description: We are looking for a skilled and curious Python Developer with experience in Generative AI to join our growing team. You will work on cutting-edge AI solutions that involve LLMs, prompt engineering, fine-tuning models, and building scalable Python-based APIs and applications. This is an excellent opportunity to be at the forefront of GenAI product development, from prototype to production. Key Responsibilities: Design, develop, and maintain Python-based applications for AI-driven use cases. Build, test, and deploy LLM-powered applications using APIs (e.g., OpenAI, Anthropic, Cohere, etc.). Implement pipelines for prompt engineering, vector embeddings, and retrieval-augmented generation (RAG). Collaborate with data scientists, product managers, and backend engineers to translate AI models into scalable products. Write clean, maintainable, and well-documented Python code. Develop REST APIs using frameworks like FastAPI or Flask. Optimize performance of AI workflows and inference pipelines. Stay updated on GenAI trends, open-source tools, and research advancements. Required Qualifications: 6+ years of hands-on development experience with Python. Proven experience building or integrating LLM-based applications (e.g., chatbots, document assistants, summarizers). Familiarity with OpenAI, Hugging Face, LangChain, or similar frameworks. Experience with vector databases (e.g., FAISS, Pinecone, Weaviate). Strong understanding of REST APIs, JSON, and microservice architecture. Good grasp of prompt engineering, token limits, and LLM design patterns. Experience with Git, CI/CD workflows, and agile teams. Nice-to-Have: Familiarity with cloud platforms (AWS, Azure, or GCP). Experience with Docker, Kubernetes, or deploying GenAI applications to production. Exposure to ML model fine-tuning or evaluation frameworks. 
Understanding of RAG pipelines or semantic search. Please apply directly at vishal.raj@therxcloud.com.
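To make the RAG pipeline concept in this listing concrete, here is a toy sketch of the retrieve-then-prompt flow. It uses a bag-of-words stand-in for real embeddings; in production you would call an embedding model and a vector database such as FAISS or Pinecone. All names and documents here are illustrative:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': word counts. A real pipeline calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Augment the question with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = ["Kafka streams events between services.", "FAISS indexes dense vectors for search."]
prompt = build_prompt("what indexes vectors?", docs)
```

The final step, sending `prompt` to an LLM API and returning its answer, is omitted; swapping `embed` for a real embedding model and `retrieve` for a vector-database query is what turns this sketch into a production RAG pipeline.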
Posted 2 days ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Qualifications Experience applying continuous integration/continuous delivery best practices, including Version Control, Trunk Based Development, Release Management, and Test-Driven Development Experience with popular MLOps tools (e.g., Domino Data Labs, Dataiku, mlflow, AzureML, Sagemaker) and frameworks (e.g., TensorFlow, Keras, Theano, PyTorch, Caffe, etc.) Experience with LLM platforms (OpenAI, Bedrock, NVAIE) and frameworks (LangChain, LangFuse, vLLM, etc.) Experience in programming languages common to data science such as Python, SQL, etc. Understanding of LLMs and supporting concepts (tokenization, guardrails, chunking, Retrieval Augmented Generation, etc.) Knowledge of the ML lifecycle (wrangling data, model selection, model training, model validation, and deployment at scale) and experience working with data scientists Familiar with at least one major cloud provider (Azure, AWS, GCP), including resource provisioning, connectivity, security, autoscaling, and IaC. Familiar with cloud data warehousing solutions such as Snowflake, Fabric, etc. Experience with Agile and DevOps software development principles/methodologies and working on teams focused on delivering business value. Experience influencing and building mindshare convincingly with any audience. Confident and experienced in public speaking. Ability to communicate complex ideas in a concise way. Fluent with popular diagramming and presentation software. Demonstrated experience in teaching and/or mentoring professionals. Want to learn more about SC&E? Check us out on our platform: http://www.wwt.com/consulting-services-careers
Posted 2 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary: We are seeking a Senior Python Developer with a strong background in backend development and a passion for designing and implementing efficient algorithms. The ideal candidate will be responsible for developing, maintaining, and optimizing our core backend systems and services, with a particular focus on complex algorithms. This role requires a deep understanding of Python, strong problem-solving skills, and the ability to work collaboratively in a fast-paced environment. You will play a key role in designing, developing, and maintaining robust data pipelines, APIs, and data processing workflows. You will work closely with data analysts and business teams to understand data requirements and deliver insightful data-driven solutions. The ideal candidate is passionate about data, enjoys problem-solving, and thrives in a collaborative environment. Experience in the financial or banking domain is a plus. Responsibilities: Design, develop, and maintain robust and scalable data pipelines using Python, SQL, PySpark, and streaming technologies like Kafka. Perform efficient data extraction, transformation, and loading (ETL) for large volumes of data from diverse data providers, ensuring data quality and integrity. Build and maintain RESTful APIs and microservices to support seamless data access and transformation workflows. Develop reusable components, libraries, and frameworks to automate data processing workflows, optimizing for performance and efficiency. Apply statistical analysis techniques to uncover trends, patterns, and actionable business insights from data. Implement comprehensive data quality checks and perform root cause analysis on data anomalies, ensuring data accuracy and reliability. Collaborate effectively with data analysts, business stakeholders, and other engineering teams to understand data requirements and translate them into technical solutions. 
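The extract-transform-load (ETL) responsibility described above has the same shape regardless of scale. A minimal single-machine sketch using Python's built-in sqlite3 as a stand-in for the real warehouse, PySpark, and Kafka pieces; all table and field names are illustrative:

```python
import sqlite3

def extract(raw_rows):
    """Stand-in for pulling records from an upstream data provider."""
    return list(raw_rows)

def transform(rows):
    """Basic quality check: drop rows missing an amount, coerce types."""
    return [(r["id"], float(r["amount"])) for r in rows if r.get("amount") is not None]

def load(records, conn):
    """Idempotent load: re-running the pipeline overwrites existing rows."""
    conn.execute("CREATE TABLE IF NOT EXISTS txns (id TEXT PRIMARY KEY, amount REAL)")
    conn.executemany("INSERT OR REPLACE INTO txns VALUES (?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
raw = [{"id": "a1", "amount": "10.50"}, {"id": "a2", "amount": None}]
load(transform(extract(raw)), conn)
loaded = conn.execute("SELECT id, amount FROM txns").fetchall()
```

At production scale the same three stages typically map to Kafka consumers (extract), PySpark jobs (transform), and warehouse writes (load), with data-quality checks between each stage.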
Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field.
5+ years of proven experience in Python development, with a strong focus on data handling, processing, and analysis.
Extensive experience building and maintaining RESTful APIs and working with microservices architectures.
Proficiency in building and managing data pipelines using APIs, ETL tools, and Kafka.
Solid understanding and practical application of statistical analysis methods for business decision-making.
Hands-on experience with PySpark for large-scale distributed data processing.
Strong SQL skills for querying, manipulating, and optimizing relational database operations.
Deep understanding of data cleaning, preprocessing, and validation techniques.
Knowledge of data governance, security, and compliance standards is highly desirable.
Experience in the financial services industry is a plus.
Familiarity with basic machine learning (ML) concepts and experience preparing data for ML models is a plus.
Strong analytical, debugging, problem-solving, and communication skills.
Ability to work both independently and collaboratively within a team environment.
Preferred Skills:
Experience with CI/CD tools and Git-based version control.
Experience in the financial or banking domain.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills: Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 2 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
At Boeing, we innovate and collaborate to make the world a better place. We’re committed to fostering an environment for every teammate that’s welcoming, respectful and inclusive, with great opportunity for professional growth. Find your future with us.
Overview
As a leading global aerospace company, Boeing develops, manufactures and services commercial airplanes, defense products and space systems for customers in more than 150 countries. As a top U.S. exporter, the company leverages the talents of a global supplier base to advance economic opportunity, sustainability and community impact. Boeing’s team is committed to innovating for the future, leading with sustainability, and cultivating a culture based on the company’s core values of safety, quality and integrity.
Technology for today and tomorrow
The Boeing India Engineering & Technology Center (BIETC) is home to a 5,500+ strong engineering workforce that contributes to global aerospace growth. Our engineers deliver cutting-edge R&D, innovation, and high-quality engineering work in global markets, and leverage new-age technologies such as AI/ML, IIoT, Cloud, Model-Based Engineering, and Additive Manufacturing, shaping the future of aerospace.
People-driven culture
At Boeing, we believe creativity and innovation thrive when every employee is trusted, empowered, and has the flexibility to choose, grow, learn, and explore. We offer variable arrangements depending upon business and customer needs, and professional pursuits that offer greater flexibility in the way our people work. We also believe that collaboration, frequent team engagements, and face-to-face meetings bring together different perspectives and thoughts, enabling every voice to be heard and every perspective to be respected. No matter where or how our teammates work, we are committed to positively shaping people’s careers and being thoughtful about employee wellbeing.
With us, you can create and contribute to what matters most in your career, community, country, and world. Join us in powering the progress of global aerospace. The Boeing India IT Product Systems team is currently looking for an Associate Software Developer - Java Full Stack to join the team in Bangalore, India. This role will be based out of Bangalore, India.
Position Responsibilities:
Understands and develops software solutions to meet end-user requirements.
Ensures that applications integrate with the overall system architecture, utilizing standard IT lifecycle methodologies and tools.
Develops algorithms, data and process models, plans interfaces, and writes interface control documents for use in construction of solutions of moderate complexity.
Employer will not sponsor applicants for employment visa status.
Basic Qualifications (Required Skills/Experience):
2+ years of relevant experience in the IT industry.
Experience in designing and implementing idiomatic RESTful APIs using the Spring Framework (v6.0+) with Spring Boot (v3.0+) and Spring Security (v6.0+) in Java (v17+). Experience with additional languages (Scala/Kotlin/others) preferred.
Working experience with RDBMSs, basic SQL scripting and querying, specifically with SQL Server (2018+) and Teradata (v17+). Additional knowledge of schema/modelling/query optimization preferred.
Experience with TypeScript (v5+), JavaScript (ES6+), Angular (v15+), Material UI, and amCharts (v5+).
Experience working with ALM tools (Git, Gradle, SonarQube, Coverity, Docker, Kubernetes), driven by tests (JUnit, Mockito, Hamcrest, etc.).
Experience in shell scripting (Bash/sh), CI/CD processes and tools (GitLab CI or similar), and OCI containers (Docker/Podman/Buildah, etc.).
Data analysis and engineering experience with Apache Spark (v3+) in Scala, Apache Iceberg/Parquet, etc. Experience with Trino/Presto is a bonus.
Familiarity with GCP/Azure (VMs, container runtimes, BLOB storage solutions) preferred but not mandatory.
Preferred Qualifications (Desired Skills/Experience):
A Bachelor’s degree or higher is preferred.
Strong backend experience (Java/Scala/Kotlin, etc.) with basic data analysis/engineering experience (Spark/Parquet, etc.); OR basic backend experience (Java/Scala, etc.) with strong data analysis/engineering experience (Spark/Parquet, etc.); OR moderate backend experience (Java/Kotlin, etc.) with strong frontend experience (Angular 15+ with SASS/Angular Material) and exposure to DevOps pipelines (GitLab CI).
Typical Education & Experience: Bachelor's degree with typically 2 to 5 years of experience, OR Master's degree with typically 1 to 2 years of experience, is preferred but not required.
Relocation: This position does offer relocation within India.
Applications for this position will be accepted until Aug. 09, 2025.
Export Control Requirements: This is not an Export Control position.
Relocation: This position offers relocation based on candidate eligibility.
Visa Sponsorship: Employer will not sponsor applicants for employment visa status.
Shift: Not a Shift Worker (India)
Equal Opportunity Employer: We are an equal opportunity employer. We do not accept unlawful discrimination in our recruitment or employment practices on any grounds including but not limited to race, color, ethnicity, religion, national origin, gender, sexual orientation, gender identity, age, physical or mental disability, genetic factors, military and veteran status, or other characteristics covered by applicable law. We have teams in more than 65 countries, and each person plays a role in helping us become one of the world’s most innovative, diverse and inclusive companies. We are proud members of the Valuable 500 and welcome applications from candidates with disabilities. Applicants are encouraged to share with our recruitment team any accommodations required during the recruitment process.
Accommodations may include, but are not limited to: conducting interviews in accessible locations that accommodate mobility needs, encouraging candidates to bring and use any existing assistive technology such as screen readers, and offering flexible interview formats such as virtual or phone interviews.
Posted 2 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This role has been designed as ‘Hybrid’, with the expectation that you will work on average 2 days per week from an HPE office.
Who We Are
Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today’s complex world. Our culture thrives on finding new and better ways to accelerate what’s next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.
Job Description
Job Family Definition: Designs, develops, troubleshoots, and debugs software programs for software enhancements and new products. Develops software including operating systems, compilers, routers, networks, utilities, databases, and Internet-related tools. Determines hardware compatibility and/or influences hardware design.
Management Level Definition: Contributes to assignments of limited scope by applying technical concepts and theoretical knowledge acquired through specialized training, education, or previous experience. Acts as a team member by providing information, analysis, and recommendations in support of team efforts. Exercises independent judgment within defined parameters.
What You’ll Do
Responsibilities:
Design topologies and build network configurations that map to well-optimized network reference designs.
Plan, develop, and execute automated and manual test plans for reference design readiness.
Provide constructive feedback, report issues, and interact with developers to deliver best-in-class product quality.
Review requirements from the Product Management, Technical Marketing, and Account teams.
Utilize available network troubleshooting tools, including network packet captures, monitoring devices, log files, and customer inputs, to facilitate effective issue resolution.
Minimum Qualifications
What you need to bring:
BS degree in Computer Science or equivalent experience.
5 to 7 years of experience.
Expert knowledge of Layer 2 and Layer 3 technologies, gained by validating or deploying related networking products.
Deep understanding of Clos-based data center network architectures (3-stage and 5-stage) and Data Center Interconnect (DCI).
Excellent understanding of protocols used in data center networks, such as ZTP, EVPN-VXLAN, and BGP.
Proficiency in Class of Service (CoS) and DCQCN, which are heavily used in AI/ML-oriented Clos networks.
Expert knowledge of Python programming.
Deep understanding of software, networking, and system concepts, including Linux internals, distributed systems concepts, and network troubleshooting tools.
Excellent interpersonal and communication skills with a proven ability to develop and maintain effective relationships.
Strong problem-solving and decision-making skills. Good communication skills.
Additional Skills: Cloud Architectures, Cross-Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Security-First Mindset, Solutions Design, Testing & Automation, User Experience (UX)
What We Can Offer You
Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial and emotional wellbeing.
Personal & Professional Development: We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have, whether you want to become a knowledge expert in your field or apply your skills to another division.
Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good.
Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture and tech at HPE. #india #networking
Job: Engineering
Job Level: TCP_01
HPE is an Equal Employment Opportunity / Veterans / Disabled / LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran/Individual with Disabilities.
HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
Posted 2 days ago
4.0 years
0 Lacs
India
Remote
About Us
Embrace Software, headquartered in Tampa, USA, is one of the fastest-growing software acquirers in the world. We focus on building niche software businesses that deliver mission-critical solutions across industries (industrial, healthcare, fintech, and edtech).
Why Join Embrace:
Rapid Growth: Our team has expanded to over 300 members in just 4 years.
Financial Strength: We’ve secured $130M in capital.
Acquisitions: With 13 successful acquisitions to date, we’re operating in hyper-scale mode.
Fortune 500 Impact: We serve 16% of Fortune 500 companies.
Proven Leadership: Our CEO/Founder has a track record of creating over $2B in value through his ventures (prior ventures include being a founding member and Chief Strategist at Valsoft, as well as an early lead investor and Board member at VitalHub (TSX: VHI)).
Join us as we lay the groundwork for exponential growth over the next 5 years. If you thrive in a fast-paced environment and share our vision, we’d love to have you on board!
Job Description
This is a remote position. Embrace Software is seeking a nimble, analytically driven Product Specialist to support portfolio companies that lack dedicated product leadership. You will identify competitive insights and emerging opportunities, particularly involving AI, and co-develop actionable product roadmaps alongside CEOs and CTOs across the portfolio. Success will be measured by the velocity, strategic clarity, and business impact of roadmap initiatives launched across 13+ operating companies.
Key Responsibilities
Competitive & Market Intelligence:
Conduct ongoing scans of the market landscape, including size, pricing trends, and competitor moves.
Identify emerging opportunities using AI-driven analysis and trend tracking.
Perform win/loss analysis to support data-driven roadmap decisions.
Deliver quarterly “Opportunity Radar” decks and rapid briefs for new acquisitions.
Portfolio Product Roadmapping:
Facilitate structured 6-week product roadmap sprint sessions with portfolio companies.
Align product initiatives with metrics such as revenue retention, upsell potential, and cost-to-serve.
Deliver signed, KPI-linked roadmaps with RACIs and 12-month delivery calendars.
Executive Advisory:
Coach founders and GMs on product prioritization, positioning, and long-term strategy.
Present executive recommendations to the Embrace Investment Group.
Prepare board-ready product strategy memos and frameworks.
M&A Diligence Support:
Evaluate product maturity, scalability, and technical debt of acquisition targets.
Assess value creation potential through modernization or AI infusion.
Produce concise Red/Yellow/Green diligence scorecards.
Operating Rhythm & Enablement:
Design reusable product frameworks, templates, and KPI dashboards.
Maintain a self-serve “Product Playbook” wiki to standardize and scale product best practices.
Requirements
Experience:
5–10 years in product management, product strategy, or competitive intelligence within B2B/SaaS organizations (vertical market software is a plus).
Proven ability to influence C-level stakeholders without formal authority.
Hands-on experience with AI/ML-enabled features or data product strategy.
Skills:
Strong problem-solving abilities with a structured, hypothesis-driven approach (consulting toolkit welcome).
Proficient in Excel/Google Sheets, SQL, and BI tools such as Power BI or Tableau.
Familiarity with LLMs and AI product development.
Skilled in product roadmapping tools (e.g., Aha!, Productboard).
Ability to translate technical trends into strategic product decisions.
Soft Traits:
Agile and entrepreneurial; comfortable context-switching across industries and business models.
Clear communicator, able to simplify complex concepts for executive consumption.
Humble and collaborative; builds trust and credibility quickly within cross-functional teams.
Preferred Qualifications:
MBA, MS in Analytics/Computer Science, or equivalent practical experience.
Experience with SaaS pricing and packaging strategy.
Exposure to private equity roll-ups, portfolio operations, or post-merger integration.
Familiarity with product due diligence in M&A contexts.
Benefits:
Competitive salary, structured based on UK working hours.
Comprehensive training and mentorship programs for skill and knowledge enhancement.
Opportunities for career advancement and professional development.
Experience collaborating with a diverse, global team within a remote work setting.
Posted 2 days ago