0.0 - 4.0 years
0 Lacs
karnataka
On-site
As an AI/ML Intern at Varaha, you will be part of a dedicated team of researchers and engineers focused on developing machine learning and deep learning solutions to support sustainable agricultural practices and climate action. This role will provide you with practical exposure to working with diverse datasets and scalable AI systems that aim to make a global impact.

Your responsibilities will include:
- Designing and implementing machine learning and deep learning models using Python.
- Cleaning, preprocessing, and analyzing both structured and unstructured datasets.
- Training and assessing models using popular libraries such as PyTorch or TensorFlow.
- Contributing to internal research, model benchmarking, and collaborative experimentation.
- Documenting your experiments, code, and results to ensure reproducibility and facilitate collaboration within the team.

To excel in this role, you should possess:
- A strong foundation in Python programming.
- A solid understanding of fundamental machine learning algorithms and concepts such as regression and classification.
- Familiarity with deep learning techniques and neural networks.
- Experience using ML/DL libraries and tools like scikit-learn, pandas, NumPy, PyTorch, or TensorFlow.

In terms of qualifications, you should be currently pursuing or have recently completed a B.Tech in Computer Science or a related field from IITs exclusively, or hold a Master's degree in Computer Science, Data Science, AI, or a related discipline from a recognized institution. Candidates with additional experience in ML competitions or contributions to ML research will be preferred. An interest or background in areas like Natural Language Processing (NLP), Computer Vision, or MLOps will also be advantageous. Exposure to climate or geospatial datasets will be considered a plus.
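The fundamentals this role lists, regression and a training loop, can be sketched in a few lines. Below is a minimal, illustrative example (not Varaha's code; the data and learning rate are invented for the demonstration) that fits a line by batch gradient descent with NumPy:

```python
import numpy as np

def fit_line(x, y, lr=0.1, epochs=500):
    """Fit y ~ w*x + b by batch gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(x)
    for _ in range(epochs):
        err = (w * x + b) - y
        # Gradients of MSE = mean((w*x + b - y)^2) with respect to w and b
        w -= lr * (2.0 / n) * np.dot(err, x)
        b -= lr * (2.0 / n) * err.sum()
    return w, b

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0          # noiseless toy data: true w=2, b=1
w, b = fit_line(x, y)
print(round(w, 2), round(b, 2))  # converges near 2.0 and 1.0
```

The same idea, with automatic differentiation replacing the hand-written gradients, is what a PyTorch or TensorFlow training loop does at scale.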
By joining Varaha, you will become part of a dynamic, international team that values ambition and innovation. You can expect mentorship from a diverse, impact-oriented team and the chance to work on real-world AI systems that address climate change. The work environment is characterized by collaboration and creativity, providing you with a platform to grow and contribute meaningfully. Please note that this job description may not encompass all the activities, duties, or responsibilities associated with the role. Responsibilities and tasks may be subject to change at any time, with or without prior notice.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
uttar pradesh
On-site
We are seeking an enthusiastic and skilled Python Developer with a passion for AI-based application development to join our expanding technology team. This role presents an opportunity to work at the intersection of software engineering and data analytics, contributing to cutting-edge AI-driven solutions that have a real business impact. If you possess a strong foundation in Python, a knack for problem-solving, and a keen interest in building intelligent systems, we are excited to meet you!

As a Python Developer at ARDEM Data Services Private Limited, your key responsibilities will include:
- Developing and deploying AI-focused applications using Python and associated frameworks.
- Collaborating with Developers, Product Owners, and Business Analysts to design and implement machine learning pipelines.
- Creating interactive dashboards and data visualizations to derive actionable insights.
- Automating data collection, transformation, and processing tasks.
- Utilizing SQL for data extraction, manipulation, and database management.
- Applying statistical methods and algorithms to extract insights from large datasets.

The ideal candidate should have:
- 2-3 years of experience as a Python Developer, along with a robust portfolio of relevant projects.
- A Bachelor's degree in Computer Science, Data Science, or a related technical field.
- In-depth knowledge of Python, including frameworks and libraries such as NumPy, Pandas, SciPy, and PyTorch.
- Proficiency in front-end technologies like HTML, CSS, and JavaScript.
- Familiarity with SQL and NoSQL databases and their best practices.
- Excellent communication and team-building skills.
- Strong problem-solving abilities with a focus on innovation and self-learning.
- Knowledge of cloud platforms such as AWS is a plus.
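The data transformation and aggregation duties above can be illustrated with pandas. A minimal sketch, assuming hypothetical column names (`region`, `revenue`) rather than any real ARDEM schema:

```python
import pandas as pd

# Hypothetical raw records; column names are illustrative only.
raw = pd.DataFrame({
    "region": ["north", "north", "south", "south", None],
    "revenue": ["100", "150", "200", None, "50"],
})

clean = (
    raw.dropna(subset=["region", "revenue"])        # drop incomplete rows
       .assign(revenue=lambda d: d["revenue"].astype(float))  # cast strings
)
summary = clean.groupby("region")["revenue"].sum()   # aggregate per region
print(summary.to_dict())  # {'north': 250.0, 'south': 200.0}
```

The same clean-cast-group pattern also maps directly onto the SQL extraction work the role mentions (`WHERE ... IS NOT NULL`, `CAST`, `GROUP BY`).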
Technical Requirements:
- Laptop or Desktop: Windows (i5 or higher, 8GB RAM minimum)
- Screen: 14 inches, Full HD (1920x1080)
- Internet Speed: 100 Mbps or higher

About ARDEM: ARDEM is a prominent Business Process Outsourcing and Business Process Automation service provider with a successful track record of over two decades. We deliver outsourcing and automation services to clients in the USA and Canada, focusing on continuous innovation and excellence. We are committed to becoming the leading Business Process Outsourcing and Business Process Automation company by consistently delivering top-notch services to our customers.

Please note: ARDEM will never request personal or banking information during the hiring process for any data entry/processing roles. Any communication claiming to offer work-from-home jobs on behalf of ARDEM Incorporated is fraudulent. Please disregard such messages and refer to ARDEM's Careers page for genuine job opportunities. We apologize for any inconvenience caused by such deceptive practices.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
DecisionX is pioneering a new category with the world's first Decision AI, an AI Super-Agent that assists high-growth teams in making smarter, faster decisions by transforming fragmented data into clear next steps. Whether it involves strategic decisions in the boardroom or operational decisions across departments like Sales, Marketing, Product, and Engineering, down to the minutiae that drives daily operations, Decision AI serves as your invisible co-pilot, thinking alongside you, acting ahead of you, and evolving beyond you.

We are seeking a dedicated and hands-on AI Engineer to join our founding team. In this role, you will collaborate closely with leading AI experts to develop the intelligence layer of our exclusive "Agentic Number System."

Key Responsibilities:
- Building, fine-tuning, and deploying AI/ML models for tasks such as segmentation, scoring, recommendation, and orchestration.
- Developing and optimizing agent workflows using LLMs (OpenAI, Claude, Mistral, etc.) for contextual reasoning and task execution.
- Creating vector-based memory systems utilizing tools like FAISS, Chroma, or Weaviate.
- Working with APIs and connectors to incorporate third-party data sources (e.g., Salesforce, HubSpot, GSuite, Snowflake).
- Designing pipelines that transform structured and unstructured signals into actionable insights.
- Collaborating with GTM and product teams to define practical AI agent use cases.
- Staying informed about the latest developments in LLMs, retrieval-augmented generation (RAG), and agent orchestration frameworks (e.g., CrewAI, AutoGen, LangGraph).

Must-Have Skills:
- 5-8 years of experience in AI/ML engineering or applied data science.
- Proficient programming skills in Python, with expertise in LangChain, Pandas, NumPy, and Scikit-learn.
- Experience with LLMs (OpenAI, Anthropic, etc.), prompt engineering, and RAG pipelines.
- Familiarity with vector stores, embeddings, and semantic search.
- Expertise in data wrangling, feature engineering, and model deployment.
- Knowledge of MLOps tools such as MLflow, Weights & Biases, or equivalent.

What you will get:
- Opportunity to shape the AI architecture of a high-ambition startup.
- Close collaboration with a visionary founder and experienced product team.
- Ownership, autonomy, and the thrill of building something from 0 to 1.
- Early team equity and a fast growth trajectory.
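The vector-memory responsibility above reduces to nearest-neighbour search over embeddings. A minimal sketch of the idea behind stores like FAISS or Chroma — brute-force cosine similarity over toy vectors; real systems use learned embeddings and approximate indexes:

```python
import numpy as np

def top_k(query, index, k=2):
    """Return indices of the k most cosine-similar rows of `index` to `query`."""
    index_n = index / np.linalg.norm(index, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    scores = index_n @ query_n          # cosine similarity per document
    return np.argsort(scores)[::-1][:k]  # best scores first

# Toy 3-d "embeddings"; real ones come from an embedding model.
docs = np.array([
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
])
hits = top_k(np.array([1.0, 0.0, 0.1]), docs)
print(hits.tolist())  # [0, 1] — the two documents nearest the query
```

In a RAG pipeline the retrieved rows would then be stuffed into the LLM prompt as context.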
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
haryana
On-site
As a Backend Developer, your primary responsibility will be to develop and maintain backend services using Python, with a strong emphasis on FastAPI, NumPy, and Polars for efficient data handling. You will build and manage agentic workflows utilizing LangChain, LangGraph, and MCP Agents to facilitate dynamic, multi-step reasoning, and design and execute RAG pipelines for contextual information retrieval and response generation. Integration and optimization of MongoDB or similar vector databases like FAISS and Pinecone for semantic search and embedding storage will also fall under your purview. Collaboration with cross-functional teams to deploy scalable AI services in production will be a crucial part of your role. Furthermore, you will be responsible for performance tuning, testing, and deployment of AI components, while keeping abreast of the latest developments in GenAI, LLMs, and agentic architectures.

The ideal candidate should have:
- 3-6 years of experience in backend development using Python.
- Hands-on experience with FastAPI for constructing RESTful APIs.
- Proficiency in NumPy and Polars for numerical and tabular data processing.
- A solid understanding of Generative AI concepts, along with practical experience working with LangChain, LangGraph, and MCP Agents.
- Experience in building and deploying agentic RAG systems; familiarity with MongoDB or other vector databases for semantic search and retrieval is advantageous.
- Knowledge of cloud platforms such as Azure and containerization tools like Docker/Kubernetes is a plus.

To qualify for this role, you should hold a Bachelor's or Master's degree in computer science, data science, mathematics, or a related field.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a PL/SQL Developer with 5 to 7 years of experience, you will be based in Pune (hybrid) and must have an immediate to 15-day notice period. You must possess expertise in languages such as SQL, T-SQL, PL/SQL, and Python libraries like PySpark, Pandas, NumPy, Matplotlib, and Seaborn, along with databases like SQL Server and Synapse.

Your key responsibilities will include:
- Designing and maintaining efficient data pipelines and ETL processes using SQL and Python.
- Writing optimized queries for data manipulation.
- Using Python libraries for data processing and visualization.
- Performing end-of-day (EOD) data aggregation and reporting.
- Working on Azure Synapse Analytics for scalable data transformations.
- Monitoring and managing database performance.
- Collaborating with cross-functional teams.
- Ensuring secure data handling and compliance with organizational policies.
- Debugging Unix-based scripts.

To be successful in this role, you should have a Bachelor's/Master's degree in Computer Science, IT, or a related field, along with 5-8 years of hands-on experience in data engineering and analytics. You must have a solid understanding of database architecture, experience in end-of-day reporting setups, and familiarity with cloud-based analytics platforms. This is a full-time, permanent position with a day shift schedule and an in-person work location.
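The end-of-day aggregation and optimized-query work described above can be sketched with the stdlib sqlite3 module standing in for SQL Server/Synapse; the trades table and its columns are hypothetical, invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE trades (trade_date TEXT, symbol TEXT, qty INTEGER, price REAL)"
)
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?, ?)",
    [
        ("2024-01-02", "ABC", 10, 5.0),
        ("2024-01-02", "ABC", -4, 5.5),
        ("2024-01-02", "XYZ", 7, 2.0),
    ],
)

# End-of-day net position and notional per symbol
rows = conn.execute(
    """
    SELECT symbol, SUM(qty) AS position, SUM(qty * price) AS notional
    FROM trades
    WHERE trade_date = '2024-01-02'
    GROUP BY symbol
    ORDER BY symbol
    """
).fetchall()
print(rows)  # [('ABC', 6, 28.0), ('XYZ', 7, 14.0)]
```

On Synapse the same GROUP BY query would run as T-SQL; the aggregation logic is identical.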
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
bhubaneswar
On-site
As an AI/ML Trainer at our institution in Bhubaneswar, you will be responsible for delivering comprehensive lectures on Artificial Intelligence, Machine Learning, Deep Learning, and Data Science to students enrolled in B.Tech, MCA, and M.Sc-IT courses. Your role will involve equipping students with both theoretical knowledge and practical skills essential for industry readiness.

Your key responsibilities will include:
- Delivering engaging lectures on topics such as Machine Learning Algorithms, Supervised/Unsupervised Learning, Neural Networks, Computer Vision, Natural Language Processing (NLP), and AI model deployment.
- Ensuring that students gain conceptual clarity and hands-on experience using tools like Python, TensorFlow, Keras, and Scikit-learn.
- Continuously updating teaching content to align with industry trends and technological advancements.
- Conducting applied research in AI/ML, staying updated with the latest advancements in the field, and supporting students in research and innovation projects.
- Mentoring students working on AI/ML internships, final-year projects, or competitions.
- Collaborating with academic peers and industry experts to enhance the AI/ML curriculum, and establishing connections with AI/ML startups or companies for student internships and collaborative projects.
- Participating in workshops, webinars, and conferences to stay abreast of the evolving AI landscape.

To qualify for this role, you should possess a Master's degree in Artificial Intelligence, Computer Science, Data Science, or a related field, along with 6 months to 1 year of experience. A strong background in Machine Learning and Deep Learning and proficiency in Python-based AI tools are essential.
Prior teaching or corporate training experience would be highly beneficial. The ideal candidate will demonstrate expertise in AI/ML frameworks such as TensorFlow, Keras, PyTorch, and Scikit-learn, along with a strong command of Python, NumPy, Pandas, and Jupyter Notebooks. Excellent communication and presentation skills, analytical thinking, and problem-solving abilities are key competencies required for this role. If you have a passion for teaching and mentoring future AI professionals, we encourage you to apply and be part of our dynamic and innovative educational environment, where you can shape the future of AI/ML professionals while engaging in real-world problem-solving and research-driven teaching. This position offers a competitive salary and benefits package, providing you with the opportunity to make a meaningful impact in the field of AI/ML.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
ahmedabad, gujarat
On-site
We are seeking a Senior LLM Engineer to contribute actively to the development of cutting-edge GenAI applications. Your responsibilities will include building GenAI applications that align with business use cases, designing and optimizing prompts for multi-modal models, testing GenAI solutions to ensure they meet requirements, and implementing guardrails to prevent unintended or harmful content generation. You will also be involved in designing conversational or RAG systems.

To be successful in this role, you should have a Bachelor's degree in Engineering or a related field with at least 3 years of experience in ML/LLMs. You should possess knowledge of prompt engineering techniques such as chain-of-thought and tree-of-thought prompting, guardrails, and zero-shot or few-shot prompting. Familiarity with open-source and proprietary LLMs like GPT, Claude, LLaMA, and Mistral is required. Proficiency in Python programming and experience with libraries like NumPy and pandas are essential. Hands-on experience with frameworks such as LangChain, LlamaIndex, Hugging Face, or similar tools is preferred. Knowledge of RAG and experience with self-hosted or cloud-based vector databases is a plus.

The ideal candidate will have a proven ability to work effectively in a cross-discipline environment within defined timelines. Experience with cloud-based systems, particularly GCP or AWS, is highly desirable. Strong analytical, problem-solving, written, and verbal communication skills are also key requirements. This position is based in Ahmedabad or Pune.
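Few-shot prompting, one of the techniques listed, simply means prepending labelled examples to the model input so the model imitates the pattern. A minimal sketch — the task, examples, and format are invented for illustration and depend on the target model in practice:

```python
def few_shot_prompt(examples, query):
    """Assemble a few-shot classification prompt from (text, label) pairs."""
    parts = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        parts.append(f"Review: {text}\nSentiment: {label}\n")
    # Leave the final label blank for the model to complete.
    parts.append(f"Review: {query}\nSentiment:")
    return "\n".join(parts)

examples = [
    ("Loved it, would buy again.", "positive"),
    ("Broke after two days.", "negative"),
]
prompt = few_shot_prompt(examples, "Works exactly as described.")
print(prompt.endswith("Sentiment:"))  # True — model completes the label
```

Zero-shot prompting is the degenerate case with an empty `examples` list; chain-of-thought adds worked reasoning to each example's answer.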
Posted 1 week ago
5.0 years
0 Lacs
New Delhi, Delhi, India
Remote
About the Role: Baoiam is looking for a passionate and experienced Data Science Trainer to join our team full-time. As a key mentor and subject matter expert, you'll be responsible for delivering engaging, practical, and outcome-driven sessions on Data Analysis and Advanced Data Science to students enrolled in our flagship program. The position is hybrid, requiring a mix of remote delivery and periodic in-person sessions.

Key Responsibilities:
- Deliver live training sessions and workshops (online/offline) for learners across India.
- Design and update curriculum aligned with industry trends and job roles.
- Create high-quality learning content including assignments, case studies, and projects.
- Mentor students and resolve their doubts on a regular basis.
- Track student performance and provide personalized feedback.
- Collaborate with the product and content team to enhance the course experience.
- Participate in webinars, demos, and onboarding sessions as the face of the program.
- Support the career team by conducting mock interviews and project evaluations.

Required Skills & Experience:
- Minimum 5+ years of experience in the Data Science field, with real-world project exposure.
- Proven experience in online teaching/training (edtech background preferred).
- Proficiency in Python, Pandas, NumPy, Scikit-Learn, Seaborn, Matplotlib, and Power BI/Tableau.
- Deep understanding of Data Analysis, Machine Learning, and Business Applications.
- Strong communication and presentation skills.
- Ability to explain complex topics in a beginner-friendly manner.
- Experience with LMS platforms and student engagement tools is a plus.
What We Offer:
- Fixed salary of ₹4.5 LPA
- Flexible working (hybrid mode) with periodic offline events
- Opportunity to build your personal brand and reach thousands of learners
- Be part of an impactful mission in democratizing education

To Apply: Email your resume, portfolio (if any), and a short video/audio introducing yourself to hr@baoiam.com. Subject Line: Application – Data Science Trainer (Hybrid)
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
The Data Scientist is expected to harness large volumes of complex data to generate actionable insights that fuel business strategy and innovation. This role requires expertise in advanced statistical methods, machine learning techniques, and data visualization to discover patterns, forecast outcomes, and support data-driven decision-making across the organization. Transforming complex datasets into solutions is a key aspect of the role, leveraging statistical expertise and machine learning to answer crucial business questions.

Key responsibilities include:
- Collecting, cleaning, and preprocessing structured and unstructured data from various sources such as databases, APIs, logs, and documents.
- Investigating datasets through Exploratory Data Analysis (EDA) to identify patterns, anomalies, trends, and correlations that inform hypotheses and model development.
- Designing, implementing, and fine-tuning statistical and machine learning models for prediction, classification, segmentation, and recommendation tasks.
- Evaluating model performance using appropriate metrics and iterating to improve robustness and generalizability.
- Translating complex model outputs into intuitive visualizations and business-relevant narratives using tools like Power BI, Tableau, or custom dashboards.
- Presenting findings and actionable recommendations to stakeholders in a clear and compelling manner to enable informed decision-making across departments.
- Collaborating closely with business, product, engineering, and operations teams to understand domain-specific challenges and deliver tailored data-driven solutions.
- Staying updated on the latest developments in data science, AI/ML frameworks, and big data technologies to incorporate cutting-edge methods into workflows.
The ideal candidate should possess deep proficiency in Python or R, along with fluency in SQL and familiarity with core data manipulation libraries such as NumPy, Pandas, and Scikit-learn. The ability to translate complex data outputs into clear, concise, and visually compelling insights for technical and non-technical stakeholders is necessary, as is a solid grasp of business fundamentals to connect analytical findings with strategic decision-making, a naturally analytical mindset, and a passion for uncovering insights, solving problems, and discovering meaningful patterns within data.

Educational qualifications for this role include an advanced degree (Master's or PhD) in a quantitative discipline such as Statistics, Mathematics, Computer Science, or a closely related field. The ideal candidate should have 3+ years of demonstrated experience in data science, showcasing a strong command of statistical modeling and machine learning techniques. The Data Scientist role falls under the Software Division and reports to the CTO Org.
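Evaluating models "using appropriate metrics", as the responsibilities require, can be shown with a hand-rolled precision/recall/F1 for binary labels; in practice scikit-learn's `classification_report` computes the same quantities:

```python
def precision_recall_f1(y_true, y_pred):
    """Binary precision, recall, and F1 from parallel 0/1 label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives, how many real
    recall = tp / (tp + fn) if tp + fn else 0.0      # of real positives, how many found
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy labels: 2 true positives, 1 false positive, 1 false negative
p, r, f = precision_recall_f1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(round(p, 3), round(r, 3), round(f, 3))  # 0.667 0.667 0.667
```

Which metric matters depends on the business question: recall for fraud screening where misses are costly, precision where false alarms are.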
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
maharashtra
On-site
You should have experience in BI product development, specifically with Apache Superset; hands-on experience with other BI tools like Power BI or Tableau is acceptable if you have strong project-based BI development exposure. It is essential that you have worked on multiple BI projects end-to-end. Advanced proficiency in Python and SQL is a must: you should have used Python for product development or in complex analytics projects, and you must demonstrate advanced SQL knowledge, such as optimizing queries, working with large datasets, and utilizing stored procedures. Be prepared to provide specific use cases where your Python and SQL expertise was critical. You should have hands-on experience in advanced analytics, including fraud analytics, prediction/forecasting models, and statistical or machine learning techniques; familiarity with Pandas, NumPy, Scikit-learn, or similar Python libraries is a plus. Experience in client management is necessary, as you will be expected to manage BI projects for external or internal clients: you should be capable of understanding client requirements, presenting solutions, and effectively managing delivery timelines and stakeholder expectations.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
noida, uttar pradesh
On-site
You are a skilled and motivated AI Developer with over 3 years of hands-on experience in building, deploying, and optimizing AI/ML models. Strong proficiency in Python, Scikit-learn, and machine learning algorithms is required, and practical experience with Azure AI services, Azure AI Foundry, Copilot Studio, and Dataverse is mandatory. You will be responsible for designing intelligent solutions using modern deep learning and neural network architectures, integrated into scalable cloud-based environments.

Your key responsibilities will include:
- Utilizing Azure AI Foundry and Copilot Studio to build AI-driven solutions that can be embedded within enterprise workflows.
- Designing, developing, and implementing AI/ML models using Python, Scikit-learn, and modern deep learning frameworks.
- Building and optimizing predictive models using structured and unstructured data from data lakes and other enterprise sources.
- Collaborating with data engineers to process and transform data pipelines across Azure-based environments.
- Developing and integrating applications with Microsoft Dataverse for intelligent business process automation.
- Applying best practices in data structures and algorithm design to ensure high performance and scalability of AI applications.
- Training, testing, and deploying machine learning, deep learning, and neural network models in production environments.
- Ensuring model governance, performance monitoring, and continuous learning using Azure MLOps pipelines.
- Collaborating cross-functionally with data scientists, product teams, and cloud architects to drive AI innovation within the organization.

As a qualified candidate, you hold a Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field.
With 3+ years of hands-on experience in AI/ML development, you possess practical experience with Copilot Studio and Microsoft Dataverse integrations. Expertise in Microsoft Azure is essential, particularly with services such as Azure Machine Learning, Azure Data Lake, and Azure AI Foundry. Proficiency in Python and machine learning libraries like Scikit-learn, Pandas, and NumPy is required. A solid understanding of data structures, algorithms, and object-oriented programming is essential, along with experience in data lakes, data pipelines, and large-scale data processing. Your deep understanding of neural networks, deep learning frameworks (e.g., TensorFlow, PyTorch), and model tuning will be valuable in this role. Familiarity with MLOps practices and lifecycle management on cloud platforms is beneficial. Strong problem-solving abilities, communication skills, and team collaboration are important attributes for this position.

Preferred qualifications include Azure AI or Data Engineering certification, experience in deploying AI-powered applications in enterprise or SaaS environments, knowledge of generative AI or large language models (LLMs), and exposure to REST APIs, CI/CD pipelines, and version control systems like Git.
Posted 1 week ago
2.0 - 6.0 years
0 - 0 Lacs
punjab
On-site
We are looking for an experienced Python Developer to join a dynamic development team. With 2 to 5 years of experience in building scalable backend applications and APIs using modern Python frameworks, you will play a crucial role in developing robust and high-performance solutions. Collaboration with design, frontend, and DevOps teams is essential for successful project delivery.

Your key responsibilities will include developing, testing, and maintaining backend applications using Django, Flask, or FastAPI. You will build RESTful APIs, integrate third-party services, and utilize data-handling libraries like Pandas and NumPy for efficient data processing. Writing clean, maintainable, and well-documented code that follows industry best practices is a must. Additionally, participation in code reviews, mentoring junior developers, and troubleshooting production issues are part of your role.

To excel in this position, you must have 2 to 5 years of backend development experience with Python, proficiency in core and advanced Python concepts, strong command of at least one Python framework, experience with data libraries, and familiarity with version control systems. Comfort working in Linux environments, strong debugging skills, and experience with API development and microservices architecture are essential. Desirable skills include experience with Generative AI frameworks, exposure to machine learning libraries, knowledge of containerization tools, familiarity with web servers, understanding of asynchronous programming, and Agile practices. Exposure to CI/CD pipelines and cloud platforms will be beneficial.

The company specializes in delivering cutting-edge solutions in custom software, web, and AI development. The work culture emphasizes a unique blend of in-office and remote collaboration, prioritizing employees above all else.
Continuous learning, leadership opportunities, and mutual respect are encouraged, fostering an environment where individuals are valued and supported in achieving their fullest potential. Benefits and perks include a competitive salary of 6-10 LPA based on skills and experience, generous time off with 18 annual holidays for work-life balance, extensive learning opportunities, and valuable client exposure to enhance your professional growth.
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
noida, uttar pradesh
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars that our team supports (Financial Crime, Financial Risk, and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.

In this role you will:
- Develop, maintain, and optimize backend systems and RESTful APIs using Python and Flask.
- Apply concurrent processing strategies and performance optimization to complex architectures.
- Write clean, maintainable, and well-documented code.
- Develop comprehensive test suites to ensure code quality and reliability.
- Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration.
- Integrate backend services with databases and APIs.
- Collaborate asynchronously with cross-functional team members.
- Participate in occasional team meetings, code reviews, and planning sessions.

Core/Must-Have Skills:
- A minimum of 6+ years of professional Python development experience.
- Strong understanding of computer science fundamentals (data structures, algorithms).
- 6+ years of experience in Flask and RESTful API development.
- Knowledge of container technologies (Docker, Kubernetes).
- Experience in implementing interfaces in Python.
- Ability to use Python generators for efficient memory management.
- Good understanding of the Pandas, NumPy, and Matplotlib libraries for data analytics and reporting.
- Implementation of multi-threading and parallelism in Python.
- Knowledge of SQLAlchemy for interacting with databases.
- Experience in implementing ETL transformations using Python libraries, including effective use of list comprehensions.
- Collaboration with cross-functional teams to ensure successful implementation of solutions.

Good to have:
- Exposure to Data Science libraries or data-centric development.
- Understanding of authentication and authorization (e.g., JWT, OAuth).
- Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required.
- Experience with cloud services (AWS, GCP, or Azure).

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
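Two of the skills in the list above — generators for memory efficiency and list comprehensions — can be combined in one small ETL-style sketch. The `name,amount` record format is hypothetical, chosen only for illustration:

```python
def parse_rows(lines):
    """Lazily parse 'name,amount' lines. Because this is a generator,
    memory stays flat even for files too large to hold in a list."""
    for line in lines:
        name, amount = line.strip().split(",")
        yield name, float(amount)

raw = ["alice,10.5", "bob,3.0", "alice,2.5"]
# List comprehension over the generator: only rows above a threshold survive.
big = [name for name, amount in parse_rows(raw) if amount > 3.0]
print(big)  # ['alice']
```

In a real pipeline `raw` would be an open file handle or a database cursor; the generator pulls one row at a time either way.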
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
sonipat, haryana
On-site
As a Data Engineer and Subject Matter Expert in Data Mining at Newton School of Technology, you will play a crucial role in revolutionizing technology education and empowering students to bridge the employability gap in the tech industry. You will have the opportunity to develop and deliver engaging lectures, mentor students, and contribute to the academic and research environment of the Computer Science Department.

Your key responsibilities will include:
- Developing comprehensive lectures on Data Mining, Big Data, and Data Analytics courses, covering foundational concepts to advanced techniques.
- Guiding students on the complete data lifecycle, including preprocessing, cleaning, transformation, and feature engineering.
- Teaching a wide range of algorithms for classification, association rule mining, clustering, and anomaly detection.
- Designing practical lab sessions, grading assessments, and mentoring students on projects.
- Staying updated with the latest advancements in data engineering and machine learning to ensure the curriculum remains cutting-edge.

To excel in this role, you are required to have a Ph.D., or a Master's degree with significant industry experience, in Computer Science, Data Science, Artificial Intelligence, or a related field. Your expertise in data engineering and machine learning concepts, proficiency in Python and its data science ecosystem, experience in teaching complex topics at the undergraduate level, and excellent communication skills are essential qualifications. Preferred qualifications include a record of academic publications, industry experience as a Data Scientist or in a similar role, familiarity with big data technologies and deep learning frameworks, and experience in mentoring student teams for data science competitions or hackathons.
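Clustering, one of the course topics listed, can be demonstrated to students with a from-scratch 1-D k-means. The data and initial centroids below are invented and fixed so the run is deterministic; libraries like scikit-learn handle the general multi-dimensional case:

```python
def kmeans_1d(points, centroids, iters=10):
    """Lloyd's algorithm on scalars: assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    for _ in range(iters):
        clusters = {c: [] for c in centroids}
        for p in points:
            nearest = min(centroids, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        # Recompute centroids as cluster means; drop empty clusters.
        centroids = [sum(m) / len(m) for m in clusters.values() if m]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 10.0, 10.4, 9.6]
print(kmeans_1d(data, [0.0, 5.0]))  # centroids converge near 1.0 and 10.0
```

The assign/update alternation is the same two-step structure students later meet again in EM-style algorithms, which makes this a useful lecture example.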
By joining Newton School of Technology, you will be offered competitive salary packages, access to advanced labs and facilities, and the opportunity to be part of a forward-thinking academic team shaping the future of tech education. If you are passionate about transforming technology education, empowering students, and staying at the forefront of data engineering and machine learning, we are excited about the possibility of you joining our team at Newton School of Technology. For more information about our university, please visit our website: Newton School of Technology.
Posted 1 week ago
2.0 - 11.0 years
0 Lacs
maharashtra
On-site
As a Data Analytics / BI professional with 2-3 years of experience in areas such as investment, real estate, taxation, finance, and accounts, you have an exciting opportunity to join CMS Computers Limited (INDIA) in Mumbai (Bhandup) on a Work From Office basis. It is important to note that CMS will consider candidates with an immediate to 15-day notice period. If you possess the required experience and skills, please review the job description provided below and feel free to contact us via email or phone to express your interest. For the CA Requirement position, we are looking for candidates with 8+ years of experience to lead the Data Analytics / BI domain in Mumbai (Bhandup) on a Work From Office basis. The ideal candidate should have expertise in Python, particularly in data analytics projects, advanced SQL knowledge, experience working on various projects across different domains, and strong communication skills. This role involves managing a team and driving existing analytics products with a focus on technical aspects as well as management (70:30 ratio). Key Responsibilities: - Write effective, scalable code - Develop back-end components to enhance responsiveness and performance - Integrate user-facing elements into applications - Test and debug programs - Enhance functionality of existing systems - Implement security and data protection solutions - Prioritize feature requests - Collaborate with internal teams to understand user requirements and offer technical solutions Required Skills and Abilities: - 6 to 11 years of hands-on experience with Python - Strong understanding of Python data structures, data transformation, and algorithms - Proficiency in Python libraries like Pandas, NumPy, requests, etc.
- Experience using Python as a scripting language and for framework creation - Familiarity with SCM tools like Git, Bitbucket - Working knowledge of RDBMS and SQL - Experience with unit testing frameworks and performance testing - Good communication and debugging skills Desired Skills: - Experience as a Python Developer - Expertise in a popular Python framework like Django, Flask, or Pyramid - Knowledge of object-relational mapping (ORM) - Familiarity with front-end technologies like React JS, JavaScript, HTML5 - Team player with good problem-solving skills Good to Have: - Cloud experience - Big Data knowledge (PySpark/Spark-Scala) - Knowledge of Kubernetes and Docker - Experience with Terraform If you are interested in this position, please share your updated resume with Chaitali Saha at chaitali.saha@voqeoit.com. Kindly provide details on your Total Experience, Relevant Experience, Current CTC, Expected CTC, Notice Period, Negotiable Notice Period, Last Working Day (if not currently employed), and any existing job offers. We look forward to potentially welcoming you to the team at Voqeoit Technologies. Regards, Chaitali Saha Talent Acquisition Group Voqeoit Technologies Contact no: 8088779710 Website: www.voqeoit.com
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
About This Role 🧭 Role Summary: We’re seeking a dynamic System Engineer to design and deliver intelligent, scalable, and reliable data systems. This hybrid role combines data engineering, AI/ML integration, system reliability, and DevOps to accelerate data collection, enable intelligent workflows, and drive business impact. You’ll collaborate across engineering, data analytics, and business teams to build reusable frameworks, reduce time-to-value, and uphold engineering excellence. 🔧 Key Responsibilities 🚀 Data & AI Workflow Engineering Accelerate data collection at scale from millions of sources using robust, scalable pipelines. Design, build, and deploy workflows that combine AI/ML models with human-in-the-loop systems. Operate as a full-stack data engineer, taking projects from problem formulation to production. Develop APIs and services to expose data and model outputs for downstream consumption. 🛠️ System Engineering, Reliability & DevOps Build and maintain CI/CD pipelines for data and ML services using Azure DevOps or GitHub Actions. Implement observability (metrics, logs, traces) and reliability features (retries, circuit breakers, graceful degradation). Optimize data workflows and infrastructure for performance, scalability, and fault tolerance. 🧱 Platform & Framework Development Elevate development standards through reusable services, frameworks, templates, and documentation. Champion best practices in code quality, security, and automation across the engineering lifecycle. 🤝 Collaboration & Business Impact Collaborate with engineering teams across the business to improve time-to-value and share internal solutions. Present results and recommendations clearly to technical and non-technical audiences using compelling storytelling and visualizations.
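The reliability features named in the responsibilities above (retries, graceful degradation) can be sketched with a minimal retry helper. The `flaky_fetch` function, the attempt count, and the delay values below are invented for illustration; a real pipeline would configure these against its actual dependencies.

```python
import time

def retry(func, attempts=3, base_delay=0.01):
    """Call func, retrying with exponential backoff on failure.

    Returns func's result, or re-raises the last error once all
    attempts are exhausted: a minimal stand-in for the retry
    policies a production service would configure.
    """
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

# Simulated flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "payload"

result = retry(flaky_fetch)
print(result, calls["n"])  # payload 3
```

Circuit breakers extend the same idea: after repeated failures the wrapper stops calling the dependency entirely for a cooldown period instead of retrying.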
🧠 Required Skills And Qualifications 2-4 years of experience in data engineering, machine learning, or system/platform engineering. Strong programming skills in Python, .NET, or Java; proficiency in SQL, DBT, and data orchestration tools (e.g., Airflow). Experience with containerization (Docker) and Kubernetes on Azure and/or AWS. Proficiency in CI/CD, Git, and cloud-native development. Familiarity with observability tools (Azure Monitor, Prometheus, Grafana) and data validation frameworks (e.g., Great Expectations). Familiarity with data science libraries (Pandas, NumPy, scikit-learn) and deploying ML models to production. Strong understanding of distributed systems, microservices, and API design. Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. Our Benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being.
Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
Posted 1 week ago
5.0 - 8.0 years
11 - 15 Lacs
Bengaluru
Work from Office
This job might be for you if: You enjoy solving problems. You love taking on difficult challenges and finding creative solutions. You don't know the answer, but you will dig until you find it. You communicate clearly. You write well. You are motivated and driven. You volunteer for new challenges without waiting to be asked. You will take ownership of the time you spend with us and make a difference. You can impress our customers with your enthusiasm to solve their issues (and solve them!) Job Description As an AI/ML Engineer at Eximietas, you will be responsible for designing, implementing and managing strategic consulting projects that leverage Machine Learning and Deep Learning algorithms, including Generative AI. You will work closely with clients to understand their business processes and requirements, conduct proofs of concept (POCs), and convert POCs into long-term strategic engagements to deliver cutting-edge AI/ML solutions that drive business value. Responsibilities Collaborate with clients to understand their business processes, goals, and requirements. Design and implement AI/ML solutions using state-of-the-art techniques and methodologies. Conduct proofs of concept (POCs) and pilots to validate the feasibility and effectiveness of AI/ML solutions. Work with cross-functional teams to map AI/ML solutions to client business processes. Perform data analysis, data preprocessing, and feature engineering as needed. Conduct hypothesis testing, feature extraction, feature selection, and cross-validation. Develop and deploy machine learning models in a production environment. Apply Natural Language Processing (NLP) techniques such as topic modelling, entity extraction, summarization and sentiment analysis to relevant projects, OR design and implement projects tailored to Computer Vision tasks such as image classification, object detection, and image segmentation. Communicate findings and insights effectively to both technical and non-technical stakeholders.
Ensure project deliverables meet high-quality standards and are completed on time and within budget. Qualifications Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science or a related field. At least 5 to 8 years of hands-on experience in designing and delivering AI/ML solutions. Good understanding of concepts involving Probability, Statistics, Linear Algebra, Programming and the basics of Machine Learning. Strong expertise in NLP tasks, including but not limited to topic modelling, entity extraction, summarization and sentiment analysis, OR proficiency in Computer Vision tasks, including but not limited to object detection, image segmentation, and image classification. Experience working within a cloud solutions environment such as AWS, Azure or Google Cloud. Familiarity with Cloud AI services and tools is a plus. Proficiency in programming languages such as Python or R. Familiarity with popular AI/ML libraries and frameworks (e.g., TensorFlow, PyTorch, scikit-learn). Familiarity with MLOps is a plus. Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment. Strong communication and presentation skills.
Posted 1 week ago
8.0 - 11.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Python Developer – Data Science & GenAI Solutions Experience: 8-11 years Department: AI/ML Engineering Band – C Job Summary We are seeking a highly motivated and skilled Python Developer with a strong foundation in automation, data science, machine learning, and NLP . The ideal candidate will also have hands-on experience or working knowledge of Generative AI (GenAI) techniques, particularly with Retrieval-Augmented Generation (RAG) implementations. This role requires the ability to work collaboratively across teams, write clean and scalable code, and develop intelligent solutions that drive impact. Key Responsibilities Develop and maintain Python-based automation scripts and pipelines for data ingestion, transformation, and model deployment. Build, train, and deploy machine learning models for predictive analytics and classification/regression tasks. Perform text analytics and Natural Language Processing (NLP), including text preprocessing, named entity recognition (NER), sentiment analysis, and topic modeling. Design and implement Generative AI solutions, including RAG pipelines, using tools like LangChain, LlamaIndex, or similar frameworks. Collaborate with data scientists and DevOps engineers to deploy solutions using cloud-native technologies (AWS preferred). Integrate models into production systems and ensure continuous delivery using version control systems like GitHub. Document code, workflows, and modeling decisions for both technical and non-technical stakeholders. Required Skills And Qualifications Strong proficiency in Python and related libraries (e.g., Pandas, NumPy, Scikit-learn, FastAPI/Flask). Experience with automation frameworks and scripting tools for ETL or system processes. Solid background in data science and ML model development, including model evaluation and optimization. Working knowledge of NLP libraries (e.g., SpaCy, NLTK, HuggingFace Transformers). 
Familiarity with GenAI technologies, including prompt engineering, fine-tuning, and RAG architecture. Hands-on experience with Git/GitHub for version control and collaboration. Understanding of cloud-native architecture and ability to work in cloud environments (AWS). Ability to write modular, reusable, and well-tested code. Preferred Qualifications Exposure to MLOps practices including CI/CD for ML, model monitoring, and pipelines. Experience with vector databases (e.g., Pinecone, FAISS, Weaviate) for RAG-based solutions. Knowledge of REST APIs and microservices architecture.
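As a rough illustration of the RAG pattern this role calls for: retrieve relevant context, then augment the prompt before it goes to an LLM. The sketch below deliberately substitutes keyword overlap for the embeddings and vector store a real pipeline (e.g., with LangChain or a FAISS index) would use, and the documents and query are made up.

```python
def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query (toy retriever).

    A production RAG pipeline would embed the query and documents and
    search a vector store; plain set overlap keeps this dependency-free.
    """
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Assemble the augmented prompt the LLM would receive."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The GIL serializes bytecode execution in CPython.",
    "Pandas provides DataFrame operations for tabular data.",
]
prompt = build_prompt("What does the GIL do in CPython?", docs)
print(prompt.splitlines()[1])  # The GIL serializes bytecode execution in CPython.
```

The essential structure (retrieve, then inject context into the prompt) is the same whatever retriever backs it.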
Posted 1 week ago
0.0 - 1.0 years
0 - 3 Lacs
Pune
Work from Office
About the Role: We are looking for a passionate and driven AI/ML Engineer to join our team. This entry-level position is ideal for recent graduates or professionals with up to 1 year of experience in machine learning, data science, or artificial intelligence. You'll get the opportunity to work on real-world AI problems and contribute to projects involving data processing, model development, and deployment. Key Responsibilities: Assist in developing and deploying machine learning models and AI solutions. Preprocess, clean, and analyze large datasets from diverse sources. Support in building predictive models using supervised and unsupervised learning techniques. Work with senior team members on research, model training, and tuning. Document processes, models, and results. Collaborate with software engineers and domain experts for integration of ML models. Required Skills: Basic understanding of Python, NumPy, Pandas, and scikit-learn. Familiarity with machine learning algorithms (Linear Regression, Decision Trees, etc.). Knowledge of data preprocessing techniques and model evaluation. Experience with Jupyter Notebooks. Strong analytical and problem-solving skills. Ability to learn quickly and work in a team. Preferred Skills: Exposure to TensorFlow, Keras, or PyTorch. Basic knowledge of NLP, Computer Vision, or Reinforcement Learning. Understanding of deployment tools (e.g., Flask, FastAPI, Docker). Education: Bachelor’s degree in Computer Science, Data Science, Engineering, or a related field. What We Offer: Exposure to real AI/ML projects. Learning and development opportunities. A collaborative and innovative work environment. Flexible work culture and supportive mentors.
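For the linear-regression fundamentals listed above: the one-feature least-squares fit has a closed form that needs no ML library. The toy data points below are invented, chosen to lie exactly on y = 2x + 1 so the fit recovers the parameters.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept (one feature).

    slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x).
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx

# Points on y = 2x + 1 recover the parameters exactly.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)  # 2.0 1.0
```

Libraries like scikit-learn generalize this to many features, but the single-feature case is worth knowing by hand.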
Posted 1 week ago
2.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are in search of a dedicated Software Developer with at least 2-3 years of experience to join our SDE team, with responsibility for maintaining and building our web applications and data extraction pipelines. The ideal candidate should have a good foundation in Python programming and web technologies and be familiar with database technologies such as MongoDB and PostgreSQL. Core Skills: Python (including Pandas, NumPy), Django/Flask, HTML, CSS, JavaScript, MongoDB, PostgreSQL, and RESTful API design. Responsibilities: ● Maintain and develop web applications. ● Build and maintain data extraction pipelines. ● Write clean, efficient, and well-documented code. ● Collaborate with team members on project development and problem-solving. ● Learn from mistakes and improve on them. ● Participate in the automation of existing processes to streamline operations for both our team and external teams, making workflows smoother and more efficient. ● Contribute to code reviews, shaping our best practices. ● Troubleshoot and debug issues in existing applications. ● Understand and utilize relevant open-source libraries and tools. ● Be flexible to learn new technologies as per requirements and build solutions with them. ● Understanding of JIRA. Required Skills: ● At least 2-3 years of professional experience in software development. ● Experience with web development frameworks such as Django. ● Experience with Django REST Framework to build APIs. ● Proficiency in Python and experience with the Pandas library for data manipulation and transformation is a must. ● Familiarity with web fundamentals: HTML, CSS, and JavaScript. ● Familiarity with MongoDB and PostgreSQL; knowledge of version control systems (e.g., Git). ● An independent thinker, able to drive tasks forward without constant oversight. ● Experience with RESTful API design and implementation.
Posted 1 week ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars that our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask. Be proficient in concurrent processing strategies and performance optimization for complex architectures. Write clean, maintainable and well-documented code. Develop comprehensive test suites to ensure code quality and reliability. Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration. Integrate backend services with databases and APIs. Collaborate asynchronously with cross-functional team members. Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have a minimum of 6+ years of professional Python development experience.
Should have a strong understanding of computer science fundamentals (Data Structures, Algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should possess experience implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and achieve parallelism in Python. Should understand the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy for interacting with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should know techniques for performing list comprehensions in Python. Collaborate with cross-functional teams to ensure the successful implementation of solutions. Good to have: Exposure to Data Science libraries or data-centric development. Understanding of authentication and authorization (e.g. JWT, OAuth). Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure). EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
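To make the GIL point above concrete: in CPython only one thread executes Python bytecode at a time, so threads suit I/O-bound work while CPU-bound work is usually pushed to multiprocessing, where each process has its own interpreter and GIL. A minimal sketch with a synthetic CPU-bound task (the worker count and workloads are arbitrary):

```python
from concurrent.futures import ThreadPoolExecutor

def cpu_work(n):
    """Pure-Python CPU-bound task. Under the GIL, threads running this
    take turns rather than executing in parallel; swapping in
    ProcessPoolExecutor would use separate interpreters instead."""
    return sum(i * i for i in range(n))

# Threads still give correct results for CPU-bound work; they just
# don't speed it up. For I/O-bound calls (network, disk) the GIL is
# released while waiting, so threads do help there.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(cpu_work, [10, 100, 1000]))

print(results)  # [285, 328350, 332833500]
```

The interface is identical for `ProcessPoolExecutor`, which is why `concurrent.futures` is a convenient way to switch between the two models.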
Posted 1 week ago
6.0 - 11.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities across the globe, we support 100+ clients across the banking, financial and energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. Location - Bangalore Skills and Qualifications: At least 6+ years' relevant experience would generally be expected to find the skills required for this role. 6+ years as a practitioner in data engineering or a related field. Strong programming skills in Python, with experience in data manipulation and analysis libraries (e.g., Pandas, NumPy, Dask). Proficiency in SQL and experience with relational databases (e.g., Sybase, DB2, Snowflake, PostgreSQL, SQL Server). Experience with data warehousing concepts and technologies (e.g., dimensional modeling, star schema, data vault modeling, Kimball methodology, Inmon methodology, data lake design).
Familiarity with ETL/ELT processes and tools (e.g., Informatica PowerCenter, IBM DataStage, Ab Initio) and open-source frameworks for data transformation (e.g., Apache Spark, Apache Airflow). Experience with message queues and streaming platforms (e.g., Kafka, RabbitMQ). Experience with version control systems (e.g., Git). Experience using Jupyter notebooks for data exploration, analysis, and visualization. Excellent communication and collaboration skills. Ability to work independently and as part of a geographically distributed team. Nice to have: Understanding of cloud-based application development and DevOps. Understanding of business intelligence tools: Tableau, Power BI. Understanding of the trade lifecycle / financial markets. If you are keen to join us, you will be part of an organization that values your contributions, recognizes your potential, and provides ample opportunities for growth. For more information, visit www.capco.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube.
Posted 1 week ago
3.0 - 5.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Educational Requirements Master of Computer Applications, Master of Science, Master of Technology, Bachelor of Engineering, Bachelor of Technology (Integrated) Service Line Application Development and Maintenance Responsibilities A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to ensure Development, Validation and Support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand the client requirements in a detailed manner and translate them into system requirements. You would be a key contributor to building efficient programs/systems: writing efficient, reusable, testable, and scalable code; integrating user-oriented elements into different applications and data storage solutions; and keeping abreast of the latest technology and trends. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Additional Responsibilities: Knowledge of design principles and fundamentals of architecture. Basic understanding of the project domain. Writing scalable code using the Python programming language. Ability to translate functional/nonfunctional requirements to system requirements. Ability to design and code complex programs. Ability to write test cases and scenarios based on the specifications. Good understanding of agile methodologies. Awareness of the latest technologies and trends. Logical thinking and problem-solving skills along with an ability to collaborate. Technical and Professional Requirements: Primary skills: Python (Django, Flask, Pandas, NumPy, Pyramid). Preferred Skills: Technology-OpenSystem-Python - OpenSystem-Python
Posted 1 week ago
3.0 - 5.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Educational Requirements Master of Computer Applications, Master of Science, Master of Technology, Bachelor of Engineering, Bachelor of Technology (Integrated) Service Line Application Development and Maintenance Responsibilities A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to ensure Development, Validation and Support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand the client requirements in a detailed manner and translate them into system requirements. You would be a key contributor to building efficient programs/systems: writing efficient, reusable, testable, and scalable code; integrating user-oriented elements into different applications and data storage solutions; and keeping abreast of the latest technology and trends. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Additional Responsibilities: Knowledge of design principles and fundamentals of architecture. Basic understanding of the project domain. Writing scalable code using the Python programming language. Ability to translate functional/nonfunctional requirements to system requirements. Ability to design and code complex programs. Ability to write test cases and scenarios based on the specifications. Good understanding of agile methodologies. Awareness of the latest technologies and trends. Logical thinking and problem-solving skills along with an ability to collaborate. Technical and Professional Requirements: Primary skills: Python (Django, Flask, Pandas, NumPy, Pyramid). Preferred Skills: Technology-OpenSystem-Python - OpenSystem-Python
Posted 1 week ago
3.0 - 7.0 years
14 - 18 Lacs
Gurugram
Work from Office
As a Generative AI Solution Architect with IBM Consulting, you are primarily responsible for designing and implementing complex GenAI solutions using IBM WatsonX and Ecosystem Partner Stacks (Microsoft Azure / OpenAI, AWS, Aleph Alpha, as well as Open Source). In addition, you are supporting business development, sales as well as the delivery of consulting and system integration projects in Data & AI for our clients. * You have end-to-end technical responsibility during the acquisition, design and delivery of technically complex GenAI projects at scale. * You are accountable for the development and productive deployment of scalable Generative AI applications and platforms, particularly within (hybrid) cloud architectures. * You provide consultation and support to clients and colleagues in architecting and selecting the right technology stack for flexible, scalable, and economical GenAI solutions. * You guide and support clients and colleagues in the adoption of development and operational processes for AI solutions, such as Agile DevOps, FinOps, Trustworthy AI and MLOps methodologies. * You stay abreast of the latest developments in the Artificial Intelligence market and research environment, actively participating in knowledge transfer within IBM Consulting, especially when it comes to mentoring junior team members and delivery teams. * You also develop the strategy, vision, and roadmap for GenAI architectures within our consulting business, contributing to both our immediate sales objectives and long-term business growth. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Professional with at least 5-10 years of experience, ideally in data and analytics and/or architecture, including 3 years in the design, build, and implementation of AI/Deep Learning & GenAI solutions. * Experienced in architecting AI solutions and managing delivery of highly technical analytics use cases.
* Conversant with technical stacks used to support Generative AI use cases (AWS, Google, Microsoft, WatsonX). * Familiar with relevant concepts (e.g., transformer model architectures, prompt engineering, model fine-tuning, retrieval-augmented generation architectures) and models/technologies (Microsoft Azure / OpenAI, AWS, Aleph Alpha, Hugging Face, etc., as well as Open Source). * Very good stakeholder management and influencing skills; consultancy skills a very strong plus. * Able to convey complex technical concepts to non-technical stakeholders. Preferred technical and professional experience * Fluent in English
Posted 1 week ago