7.0 years
0 Lacs
Noida
On-site
Your Role
The technology that once promised to simplify patient care has brought more issues than anyone ever anticipated. At Innovaccer, we defeat this beast by making full use of all the data healthcare has worked so hard to collect, and replacing long-standing problems with ideal solutions. Data is our bread and butter for innovation. We are looking for a Staff Data Scientist who understands healthcare data and can leverage it to build algorithms that personalize treatments based on the clinical and behavioral history of patients. We are looking for a superstar who will define and build the next generation of predictive analytics tools in healthcare.

Analytics at Innovaccer
Our analytics team is dedicated to weaving analytics and data science magic across our products. They are the owners and custodians of the intelligence behind our products. With their expertise and innovative approach, they play a crucial role in building various analytical models (descriptive, predictive, and prescriptive) to help our end users make smart decisions. Their focus on continuous improvement and cutting-edge methodologies ensures that they are always creating market-leading solutions that propel our products to new heights of success.

A Day in the Life
- Design and lead the development of artificial intelligence initiatives to help improve the health and wellness of patients.
- Work with business leaders and customers to understand their pain points and build large-scale solutions for them.
- Define the technical architecture to productize Innovaccer's machine-learning algorithms and take them to market through partnerships with different organizations.
- Break down complex business problems into machine learning problems and design solution workflows.
- Work with our data platform and applications teams to help them successfully integrate data science capabilities and algorithms into their products and workflows.
- Work with development teams to build tools for repeatable data tasks that will accelerate and automate the development cycle.
- Define and execute on the quarterly roadmap.

What You Need
- Master's in Computer Science, Computer Engineering, or another relevant field (PhD preferred).
- 7+ years of experience in data science (healthcare experience is a plus).
- Strong written and spoken communication skills.
- Strong hands-on experience in Python, building enterprise applications along with optimization techniques.
- Strong experience with deep learning techniques to build NLP/computer vision models as well as state-of-the-art GenAI pipelines; knowledge of implementing agentic workflows is a plus.
- Demonstrable experience deploying deep learning models in production at scale with iterative improvements; requires hands-on expertise with at least one deep learning framework such as PyTorch or TensorFlow.
- Keen interest in research; stays updated on key advancements in AI and ML across the industry.
- Deep understanding of classical ML techniques (Random Forests, SVM, boosting, bagging) and of building training and evaluation pipelines.
- Demonstrated experience with global and local model explainability using LIME, SHAP, and associated techniques.
- Hands-on experience with at least one ML platform among Databricks, Azure ML, and SageMaker.
- Experience in developing and deploying production-ready models.
- Knowledge of implementing an MLOps framework.
- A customer-focused attitude in conversations and documentation.

We offer competitive benefits to set you up for success in and outside of work.

Here's What We Offer
- Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days.
- Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition.
- Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
- Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury.
- Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. *Noida office only
- Creche Facility for Children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. *India offices

Where and how we work
Our Noida office is situated in a posh tech space, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team.

Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our Px department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.

About Innovaccer
Innovaccer Inc. is the data platform that accelerates innovation.
The Innovaccer platform unifies patient data across systems and care settings, and empowers healthcare organizations with scalable, modern applications that improve clinical, financial, operational, and experiential outcomes. Innovaccer’s EPx-agnostic solutions have been deployed across more than 1,600 hospitals and clinics in the US, enabling care delivery transformation for more than 96,000 clinicians, and helping providers work collaboratively with payers and life sciences companies. Innovaccer has helped its customers unify health records for more than 54 million people and generate over $1.5 billion in cumulative cost savings. The Innovaccer platform is the #1 rated Best-in-KLAS data and analytics platform by KLAS, and the #1 rated population health technology platform by Black Book. For more information, please visit innovaccer.com. Check us out on YouTube, Glassdoor, LinkedIn, and innovaccer.com.
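The explainability requirement in this listing names LIME and SHAP. The perturbation idea both tools build on can be sketched as toy permutation importance in a few lines of plain Python (the model and data here are invented for illustration, not Innovaccer code):

```python
# Toy illustration of perturbation-based explainability: permutation
# importance measures how much a model's accuracy drops when one
# feature's values are shuffled. LIME and SHAP refine this idea with
# local surrogate models and Shapley values, respectively.
import random

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature_idx, seed=0):
    rng = random.Random(seed)
    column = [r[feature_idx] for r in rows]
    rng.shuffle(column)
    perturbed = [list(r) for r in rows]
    for row, value in zip(perturbed, column):
        row[feature_idx] = value
    # Accuracy lost when this feature is destroyed = its importance.
    return accuracy(model, rows, labels) - accuracy(model, perturbed, labels)

# A hypothetical "risk model" that only ever looks at feature 0.
model = lambda row: int(row[0] > 0.5)
rows = [(0.9, 0.1), (0.2, 0.8), (0.8, 0.3), (0.1, 0.9)]
labels = [1, 0, 1, 0]

imp0 = permutation_importance(model, rows, labels, 0)
imp1 = permutation_importance(model, rows, labels, 1)
# Feature 1 is ignored by the model, so shuffling it costs nothing.
```

SHAP goes further by attributing a Shapley value to each feature of each individual prediction, which is what the "global and local" phrasing above refers to.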
Posted 2 weeks ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position: Lead Technical Consultant
Location: Mumbai, Pune & Vapi
Type: Full-Time
Contact us on: nishant@fristinetech.com ; hr@fristinetech.com

About Fristine Infotech
Fristine Infotech partners with leading Indian manufacturers, retailers, distributors, and logistics (MRDL) companies to deliver end-to-end digital-transformation and AI-driven solutions. Our consulting group combines deep MRDL domain knowledge, low-code expertise on Zoho, and advanced AI/ML capabilities to create measurable value for clients.

Role Overview
You will lead the strategy, design, and delivery of AI-enabled digital-transformation programmes within the MRDL domain. The remit spans process discovery and mapping, Zoho solution architecture, generative-AI prototyping, governance, and project leadership, acting as the bridge between senior client stakeholders and internal development, data-science, and change-management teams.

Key Responsibilities

Strategy & Advisory
- Define AI and digital-transformation roadmaps tailored to MRDL business objectives and compliance standards.
- Identify and prioritise use cases for predictive analytics, generative-AI assistants, autonomous workflows, process optimization, and efficiency enhancements using Zoho applications.

Process & Solution Design
- Facilitate workshops to document current "as-is" MRDL processes and design optimized "to-be" states using BPMN / Zoho Blueprint.
- Translate domain-specific requirements into data pipelines, ML feature sets, and prompt libraries for effective developer execution.

AI Development & Governance
- Oversee model development, validation, and MLOps implementations for predictive analytics, NLP, and generative-AI use cases specific to MRDL.
- Implement responsible-AI controls covering fairness, privacy, security, and regulatory compliance.
Configuration & Testing
- Configure, customize, and optimize Zoho applications, including Zoho CRM, Zoho Inventory, Zoho Creator, Zoho Projects, and Zoho Analytics, tailored for MRDL requirements.
- Collaborate closely with developers to create MRDL-specific customizations and integrations within the Zoho ecosystem.
- Conduct detailed functional testing, create MRDL-focused test cases, execute tests, and document outcomes.

Project Governance & Stakeholder Leadership
- Mentor Zoho developers and data-science teams, encouraging continuous innovation in MRDL solutions.
- Provide comprehensive training and support to end users, ensuring seamless adoption and effective utilization of Zoho solutions.
- Manage budgets, sprint plans, and resource allocations; regularly report progress to senior MRDL executives.

Security & Risk
- Perform AI threat modeling specific to MRDL operations and recommend secure-by-design model pipelines across cloud platforms.

Core Requirements
- 5+ years of technology consulting experience in the MRDL domain with live AI/ML implementations
- Proven experience with Zoho Creator/CRM implementations and production ML solutions (TensorFlow, PyTorch, LangChain)
- Proficiency in Python, SQL, and cloud AI services (AWS SageMaker, Azure OpenAI, Google Vertex)
- Expertise in BPMN 2.0 or UML modeling; excellent stakeholder-workshop facilitation skills

Preferred Assets
- MBA (Operations/Supply Chain) or M.Tech (AI)
- Experience with RPA or other low-code platforms (OutSystems, Mendix)
- Exposure to Edge-AI / IoT analytics in manufacturing or logistics
- Familiarity with scaled-agile frameworks (SAFe, LeSS)

Soft-Skill Profile
We value analytical rigor, persuasive storytelling, adaptability, stakeholder influence, and an ethical mindset: competencies essential for driving impactful transformations in leading MRDL organizations.
What We Offer
- Signature MRDL + AI transformation projects with marquee clients
- Flat organizational hierarchy, rapid promotion opportunities, and annual certification support
- Competitive compensation, comprehensive healthcare benefits, and a culture fostering inclusive innovation
Posted 2 weeks ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Total years of experience: 7+
Location: Balewadi, Pune
Notice period: Immediate to 30 days only

Responsibilities
- Overall 6+ years of experience, of which 5+ in AI, ML, Gen AI, and related technologies
- Proven track record of leading and scaling AI/ML teams and initiatives
- Strong understanding and hands-on experience in AI, ML, deep learning, and generative AI concepts and applications
- Expertise in ML frameworks such as PyTorch and/or TensorFlow
- Experience with ONNX Runtime, model optimization, and hyperparameter tuning
- Solid experience with DevOps, SDLC, CI/CD, and MLOps practices; DevOps/MLOps tech stack: Docker, Kubernetes, Jenkins, Git, CI/CD, RabbitMQ, Kafka, Spark, Terraform, Ansible, Prometheus, Grafana, ELK stack
- Experience in production-level deployment of AI models at enterprise scale
- Proficiency in data preprocessing, feature engineering, and large-scale data handling
- Expertise in image and video processing, object detection, image segmentation, and related CV tasks
- Proficiency in text analysis, sentiment analysis, language modeling, and other NLP applications
- Experience with speech recognition, audio classification, and general signal processing techniques
- Experience with RAG, VectorDB, GraphDB, and Knowledge Graphs
- Extensive experience with major cloud platforms (AWS, Azure, GCP) for AI/ML deployments
- Proficiency in using and integrating cloud-based AI services and tools (e.g., AWS SageMaker, Azure ML, Google Cloud AI)

Qualifications - [Education details]

Required Skills
- Strong leadership and team management skills
- Excellent verbal and written communication skills
- Strategic thinking and problem-solving abilities
- Adaptability to the rapidly evolving AI/ML landscape
- Strong collaboration and interpersonal skills
- Strong understanding of industry dynamics and the ability to translate market needs into technological solutions
- Demonstrated ability to foster a culture of innovation and creative problem-solving

Preferred Skills
Pay range and compensation package - [Pay range or salary or compensation]
Equal Opportunity Statement - [Include a statement on commitment to diversity and inclusivity.]
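Hyperparameter tuning appears among the requirements above; the simplest form of it, exhaustive grid search, can be sketched in a few lines (the validation score here is a toy stand-in for a real metric such as held-out accuracy):

```python
# Minimal grid-search sketch for hyperparameter tuning: evaluate every
# combination of candidate values and keep the best-scoring one.
from itertools import product

def validation_score(lr, depth):
    # Hypothetical objective that peaks at lr=0.1, depth=6; in practice
    # this would train a model and score it on a validation set.
    return -((lr - 0.1) ** 2) - ((depth - 6) ** 2) * 0.01

grid = {"lr": [0.01, 0.1, 1.0], "depth": [3, 6, 9]}

best = max(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=lambda params: validation_score(params["lr"], params["depth"]),
)
# best is the combination with the highest validation score.
```

Random search and Bayesian optimization replace the exhaustive loop when the grid grows too large, but the evaluate-and-compare structure is the same.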
Posted 2 weeks ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description: Principal AI Architect
Employment Type: Full-Time
Relevant Experience: 10+ years

Key Responsibilities
- AI-First Leadership: Define and drive DB Tech's AI vision, re-architect systems into AI-native services, integrate tools like Cursor/Relevance AI, and mentor teams in prompt engineering, Vibe Coding, and autonomous testing.
- Architect Scalable AI Systems: Design enterprise-scale AI/ML platforms that support real-time analytics, model deployment, and continuous learning in financial products and services.
- Lead Solution Design: Collaborate with data scientists, engineers, and business stakeholders to build and integrate AI models into core platforms (e.g., risk engines, transaction monitoring, robo-advisors).
- Ensure Governance & Compliance: Implement AI systems that meet financial regulations (e.g., GDPR, PCI-DSS, FFIEC, Basel III) and uphold fairness, explainability, and accountability.
- Drive MLOps Strategy: Establish and maintain robust pipelines for data ingestion, feature engineering, model training, testing, deployment, and monitoring.
- Team Leadership: Provide technical leadership to data science and engineering teams. Promote best practices in AI ethics, version control, and reproducibility. Identify areas where AI can deliver business value and lead the development of proofs of concept (PoCs). Evaluate the feasibility, cost, and impact of new AI initiatives. Define best practices/standards for model lifecycle management (training, validation, deployment, monitoring).
- Evaluate Emerging Technologies: Stay ahead of developments in generative AI, LLMs, and FinTech-specific AI tools, and drive their strategic adoption.
Technical Skills And Tools
- ML & AI frameworks: scikit-learn, XGBoost, LightGBM, TensorFlow, PyTorch, Hugging Face Transformers, OpenAI APIs (for generative and NLP use cases)
- MLOps & deployment: MLflow, Kubeflow, Seldon Core, KServe, Weights & Biases, FastAPI, gRPC, Docker, Kubernetes, Airflow
- FinTech-specific applications: credit scoring models, fraud detection algorithms, time series forecasting, NLP for financial documents/chatbots, algorithmic trading models, AML (Anti-Money Laundering) systems
- Cloud & data platforms: AWS SageMaker, Azure ML, Google Vertex AI, Databricks, Snowflake, Kafka, BigQuery, Delta Lake
- Monitoring & explainability: SHAP, LIME, Alibi, Evidently AI, Arize AI, IBM AIX 360, Fiddler AI

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field; PhD is a plus.
- 10+ years of experience in AI/ML, including 3-5 years in architectural roles within FinTech or other highly regulated industries.
- Proven track record of building and deploying AI solutions in areas such as fraud detection, credit risk modeling, or portfolio optimization.
- Strong hands-on expertise in:
  - Machine learning: scikit-learn, XGBoost, LightGBM, TensorFlow, PyTorch
  - Data engineering: Spark, Kafka, Airflow, SQL/NoSQL (MongoDB, Neo4j)
  - Cloud & MLOps: AWS, GCP, or Azure; Docker, Kubernetes, MLflow, SageMaker, Vertex AI
  - Programming: Python (primary); Java or Scala (optional)
- Solid software engineering background with experience integrating ML models into scalable production systems.
- Excellent communication skills with the ability to influence both technical and non-technical stakeholders.
(ref:hirist.tech)
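Credit scoring, the first FinTech application listed above, is classically a logistic scorecard: weighted borrower features are mapped to a default probability, which a policy maps to a decision band. A toy sketch with invented weights and cut-offs (not a real scorecard):

```python
# Scorecard-style credit scoring sketch: logistic link from weighted
# features to a default probability, then a banded decision policy.
import math

WEIGHTS = {"utilization": 2.0, "on_time_ratio": -3.0}  # + raises risk, - lowers it
BIAS = 0.5

def default_probability(features):
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic link: risk score -> probability

def decision(p_default):
    # Illustrative cut-offs; real policies are calibrated and regulated.
    if p_default < 0.2:
        return "approve"
    if p_default < 0.5:
        return "review"
    return "decline"

# A low-utilization, reliable payer vs. a high-utilization, erratic one.
good = decision(default_probability({"utilization": 0.1, "on_time_ratio": 0.95}))
risky = decision(default_probability({"utilization": 0.9, "on_time_ratio": 0.4}))
```

Production systems learn the weights with tools named above (scikit-learn, XGBoost) and explain individual decisions with SHAP or LIME, but the probability-to-band structure is the same.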
Posted 2 weeks ago
5.0 years
0 Lacs
Greater Kolkata Area
Remote
Job Title: MLOps Engineer
Experience: 5 years
Location: Remote (India)

Job Overview
We are seeking a highly skilled MLOps Engineer with over 5 years of experience in software engineering and machine learning operations. The ideal candidate will have hands-on experience with AWS (particularly SageMaker), MLflow, and other MLOps tools, and a strong understanding of building scalable, secure, and production-ready ML systems.

Key Responsibilities
- Design, implement, and maintain scalable MLOps pipelines and infrastructure.
- Work with cross-functional teams to support the end-to-end ML lifecycle, including model development, deployment, monitoring, and governance.
- Leverage AWS services, particularly SageMaker, to manage model training and deployment.
- Apply best practices for CI/CD, model versioning, reproducibility, and operational monitoring.
- Participate in MLOps research and help drive innovation across the team.
- Contribute to the design of secure, reliable, and scalable ML solutions in a production environment.

Required Skills
- 5+ years of experience in software engineering and MLOps.
- Strong experience with AWS, especially SageMaker.
- Experience with MLflow or similar tools for model tracking and lifecycle management.
- Familiarity with AWS DataZone is a strong plus.
- Proficiency in Python; experience with R, Scala, or Apache Spark is a plus.
- Solid understanding of software engineering principles, version control, and testing practices.
- Experience deploying and maintaining models in production environments.

Preferred Attributes
- Strong analytical thinking and problem-solving skills.
- A proactive mindset with the ability to contribute to MLOps research and process improvements.
- Self-motivated and able to work effectively in a remote setting.
(ref:hirist.tech)
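Model tracking and versioning, which this listing expects via MLflow or similar tools, boils down to a registry that assigns each trained model an auditable version with its metrics. A minimal in-memory sketch of that concept (this is not the MLflow API, and the artifact paths are invented):

```python
# Minimal sketch of the model-registry idea behind tools like MLflow:
# each registered model gets an auto-incremented version plus recorded
# metrics, so deployments are reproducible and auditable.
class ModelRegistry:
    def __init__(self):
        self._versions = {}  # model name -> list of version entries

    def register(self, name, artifact, metrics):
        entry = {
            "version": len(self._versions.get(name, [])) + 1,
            "artifact": artifact,   # where the serialized model lives
            "metrics": metrics,     # evaluation results at registration time
        }
        self._versions.setdefault(name, []).append(entry)
        return entry["version"]

    def latest(self, name):
        return self._versions[name][-1]

registry = ModelRegistry()
registry.register("churn", artifact="s3://models/churn/1", metrics={"auc": 0.81})
v2 = registry.register("churn", artifact="s3://models/churn/2", metrics={"auc": 0.84})
# The deployment pipeline would pull registry.latest("churn")["artifact"].
```

MLflow adds stages (Staging/Production), lineage back to the training run, and a persistent backing store, but the version-plus-metrics record is the core of what gets audited.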
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Embark on a transformative journey as an AI Incubator Data Scientist at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences.

We are seeking a talented Data Scientist to join our Data Science team, in particular our AI Incubator, which is dedicated to driving commercial value from data across the Corporate Bank. In this role, you will have the opportunity to shape and lead projects with other members of the team, leveraging machine learning techniques and AI to extract valuable insights and create tangible business impact. We embrace a design-led thinking approach that emphasizes close collaboration with business stakeholders and a fail-fast mindset. This position offers the chance to work with extensive datasets that span billions of rows, applying state-of-the-art AI techniques to unlock millions in business value.

To be a successful AI Incubator Data Scientist, you should have experience with:
- A Bachelor's or Master's degree in Data Science (or similar) or equivalent commercial experience
- Working with senior stakeholders across the data science lifecycle, from the inception of an idea to delivery in a production environment
- The ability to translate technical work into business terms and communicate value to business stakeholders
- Commercial experience in applying machine learning techniques to generate business value (e.g., clustering, classification, regression, NLP techniques)
- The ability to write close-to-production-standard code, with a strong understanding of key coding principles, e.g., separation of concerns and generalisation of code. Strong experience in Python and SQL is a must.
- Familiarity with AWS data science and AI services, including SageMaker and Bedrock
- Understanding of GenAI, LLMs, and the RAG framework
- Proficiency in data visualization software, ideally Tableau, with exposure to QlikView, PowerBI, or similar tools
- Knowledge of code management methodologies and the ability to implement these on projects

Some Other Highly Valued Skills May Include
- Experience in front-end development frameworks such as Angular/React and Flask/Streamlit
- MVC architecture
- Experience with agentic AI
- Experience working in a big data environment using PySpark

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role will be based in Pune.

Purpose of the role
To use innovative data analytics and machine learning techniques to extract valuable insights from the bank's data reserves, leveraging these insights to inform strategic decision-making, improve operational efficiency, and drive innovation across the organisation.

Accountabilities
- Identification, collection, and extraction of data from various sources, including internal and external sources.
- Performing data cleaning, wrangling, and transformation to ensure its quality and suitability for analysis.
- Development and maintenance of efficient data pipelines for automated data acquisition and processing.
- Design and conduct of statistical and machine learning models to analyse patterns, trends, and relationships in the data.
- Development and implementation of predictive models to forecast future outcomes and identify potential risks and opportunities.
- Collaboration with business stakeholders to seek out opportunities to add value from data through data science.

Analyst Expectations
To perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement.
- Requires in-depth technical knowledge and experience in the assigned area of expertise, with a thorough understanding of the underlying principles and concepts within that area.
- Leads and supervises a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR, for an individual contributor, they develop technical expertise in their work area, acting as an advisor where appropriate.
- Will have an impact on the work of related teams within the area. Partner with other functions and business areas.
- Takes responsibility for the end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately. Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision-making within own area of expertise.
- Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation's products, services, and processes within the function.
- Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation's sub-function.
- Make evaluative judgements based on the analysis of factual information, paying attention to detail.
Resolve problems by identifying and selecting solutions through the application of acquired technical experience and will be guided by precedents. Guide and persuade team members and communicate complex / sensitive information. Act as contact point for stakeholders outside of the immediate function, while building a network of contacts outside team and external to the organisation. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
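The GenAI skills this listing asks for include the RAG framework. Its retrieve-then-prompt shape can be sketched in plain Python; here, word-overlap scoring stands in for the embedding search a service like Bedrock would provide, and the corpus is invented:

```python
# Toy sketch of Retrieval-Augmented Generation (RAG): retrieve the most
# relevant documents for a query, then splice them into the prompt that
# would be sent to an LLM so answers are grounded in known text.
def tokens(text):
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(query, corpus, k=2):
    q = tokens(query)
    # Real systems rank by embedding similarity; word overlap is a stand-in.
    return sorted(corpus, key=lambda doc: len(q & tokens(doc)), reverse=True)[:k]

def build_prompt(query, corpus):
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Overdraft fees are waived for premium accounts.",
    "Branch opening hours are 9am to 5pm.",
    "Premium accounts require a minimum balance.",
]
prompt = build_prompt("What fees apply to premium accounts?", corpus)
# The prompt now carries the fee and premium-account documents as context.
```

The value of the pattern is that the LLM is constrained to the retrieved context, which is why RAG is favoured where answers must be traceable to source documents.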
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
The Company
Flentas specializes in assisting enterprises in maximizing the benefits of the cloud through its consulting and implementation practice. With a team of experienced solution architects and technology enthusiasts, Flentas is dedicated to driving significant digital transformation projects and expanding cloud operations for clients worldwide.

As a Generative AI Specialist at Flentas, based in Pune, you will be responsible for implementing and optimizing cutting-edge AI solutions. Your role will involve utilizing LangChain, LangGraph, and agentic frameworks, along with hands-on experience in Python and cloud-based AI services such as AWS Bedrock and SageMaker. Additionally, experience in fine-tuning models will be advantageous as you collaborate with the team to develop, deploy, and enhance AI-driven applications.

Key Responsibilities:
- Designing, developing, and implementing AI solutions using LangChain, LangGraph, and agentic frameworks.
- Creating and maintaining scalable AI models utilizing AWS Bedrock and SageMaker.
- Building and deploying AI-driven agents capable of autonomous decision-making and task execution.
- Optimizing machine learning pipelines and workflows for production environments.
- Collaborating with cross-functional teams to understand project requirements and deliver innovative AI solutions.
- Conducting fine-tuning and customization of generative AI models (preferred skill).
- Monitoring, evaluating, and improving the performance of AI systems.
- Staying updated on the latest advancements in generative AI and related technologies.

Required Skills:
- Proficiency in LangChain, LangGraph, and agentic frameworks.
- Strong programming skills in Python.
- Experience with AWS Bedrock and SageMaker for AI model deployment.
- Knowledge of AI and ML workflows, including model optimization and scalability.
- Understanding of APIs, data integration, and AI-driven task automation.
Preferred Skills (Good to Have):
- Experience with fine-tuning generative AI models.
- Familiarity with cloud-based architecture and services beyond AWS (e.g., GCP, Azure).
- Knowledge of advanced NLP and transformer-based architectures.

Qualifications:
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field.
- 3-5 years of hands-on experience in AI development, focusing on generative models.
- Demonstrated experience in deploying AI solutions in a cloud environment.
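The agentic frameworks this role centres on (LangChain, LangGraph) orchestrate a plan-act-observe loop: a model picks a tool, the runtime executes it, and the observation is fed back until the model answers. A minimal sketch with a hard-coded planner standing in for the LLM (toy tool and question, not framework code):

```python
# Minimal agent loop of the kind LangChain / LangGraph orchestrate:
# planner chooses an action, runtime executes the tool, observation is
# appended, and the loop repeats until the planner finishes.
TOOLS = {
    # eval with no builtins restricts this toy calculator to arithmetic.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def stub_planner(question, observations):
    # A real agent would call an LLM here; this stub plans one tool use.
    if not observations:
        return ("call", "calculator", "19 * 21")
    return ("finish", f"The answer is {observations[-1]}")

def run_agent(question, planner, max_steps=5):
    observations = []
    for _ in range(max_steps):
        action = planner(question, observations)
        if action[0] == "finish":
            return action[1]
        _, tool, arg = action
        observations.append(TOOLS[tool](arg))
    raise RuntimeError("agent did not converge")

answer = run_agent("What is 19 times 21?", stub_planner)
```

Frameworks add the parts this sketch omits: structured tool schemas, memory, retries, and (in LangGraph's case) an explicit state graph over these steps.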
Posted 2 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services, and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.

Job Description
Minimum experience: 5-8 years
Location: PAN India

Engagement & Project Overview
An AI model trainer brings specialised knowledge in developing and fine-tuning machine learning models. They can ensure that your models are accurate, efficient, and tailored to your specific needs. Hiring an AI model trainer and tester can significantly enhance our data management and analytics capabilities.

Responsibilities
- Expertise in model development: Develop and fine-tune machine learning models. Ensure models are accurate, efficient, and tailored to our specific needs.
- Quality assurance: Rigorously evaluate models to identify and rectify errors. Maintain the integrity of our data-driven decisions through high performance and reliability.
- Efficiency and scalability: Streamline processes to reduce time-to-market. Scale AI initiatives and ML engineering skills effectively with dedicated model training and testing.
- Production ML monitoring & MLOps: Implement and maintain model monitoring pipelines to detect data drift, concept drift, and model performance degradation.
- Set up alerting and logging systems using tools such as Evidently AI, WhyLabs, Prometheus + Grafana, or cloud-native solutions (AWS SageMaker Model Monitor, GCP Vertex AI, Azure Monitor).
- Collaborate with teams to integrate monitoring into CI/CD pipelines, using platforms like Kubeflow, MLflow, Airflow, and Neptune.ai.
- Define and manage automated retraining triggers and model versioning strategies.
- Ensure observability and traceability across the ML lifecycle in production environments.

Qualifications
- 5+ years of experience in the respective field.
- Proven experience in developing and fine-tuning machine learning models.
- Strong background in quality assurance and model testing.
- Ability to streamline processes and scale AI initiatives.
- Innovative mindset with a keen understanding of industry trends.

License/Certification/Registration
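The data-drift detection described in this role can be illustrated with the Population Stability Index (PSI), one common drift statistic: it compares a feature's bucketed distribution at training time with live traffic. The 0.1/0.2 thresholds below are rules of thumb, and tools like Evidently AI compute richer versions of this automatically:

```python
# Population Stability Index sketch for data-drift monitoring: compare
# bucket shares between a training baseline and production traffic.
# PSI near 0 means stable; > 0.2 is a common retraining trigger.
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    # eps guards against log(0) when a bucket is empty.
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected_fracs, actual_fracs)
    )

baseline = [0.25, 0.25, 0.25, 0.25]   # bucket shares at training time
stable   = [0.24, 0.26, 0.25, 0.25]   # live traffic, barely moved
shifted  = [0.05, 0.15, 0.30, 0.50]   # live traffic, heavy drift

# psi(baseline, stable) stays near zero; psi(baseline, shifted) is large,
# which is the condition an automated retraining trigger would fire on.
```

In a monitoring pipeline this check would run per feature on a schedule, with the PSI value exported to something like Prometheus and alerted on in Grafana.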
Posted 2 weeks ago
12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a highly motivated Responsible and Secure AI Governance Specialist to join our Cyber team. The ideal candidate will be engaged in the design, implementation, and monitoring of governance frameworks that ensure the ethical, secure, and compliant deployment of AI technologies within our services. This role will collaborate closely with technology, security, compliance, legal, and business units to manage AI risks, uphold data privacy, and align AI systems with evolving regulatory standards.

Responsibilities
- Develop, implement, and maintain AI governance policies, standards, and best practices tailored for ITES environments.
- Conduct AI risk assessments focused on model bias, fairness, security vulnerabilities, and compliance with data privacy laws (GDPR, HIPAA, etc.).
- Collaborate cross-functionally to embed security and ethical considerations into the AI/ML lifecycle, including data acquisition, model development, testing, deployment, and monitoring.
- Design and oversee continuous AI model monitoring processes to detect anomalies, bias, data drift, and security threats.
- Support incident response planning for AI-related security breaches or compliance issues.
- Provide training and awareness sessions on AI governance, ethics, and security best practices for internal teams.
- Stay current with AI governance frameworks, regulations, and emerging risks; advise leadership on necessary policy updates and strategic initiatives.
- Work with technology and cloud teams to ensure AI systems align with organizational cybersecurity and data protection policies.
- Prepare reports and dashboards for leadership to highlight AI governance metrics and compliance status.
- Research AI regulations and ensure program alignment.

Subject Matter Expertise
- Proficiency in data privacy and cybersecurity best practices related to AI systems.
- Experience with cloud AI platforms (AWS SageMaker, Azure AI, Google AI).
- Familiarity with AI ethics frameworks (e.g., NIST AI RMF, OECD AI Principles, EU AI Act)
- Knowledge of programming languages used in AI/ML (Python, R)
- Knowledge of AI governance platforms (e.g., Credo.ai, IBM's AI Fairness 360, Priva Sapien); certifications such as CISSP, CDPSE, or AI Governance-related credentials are a plus
- Strong understanding of AI/ML technologies and the development lifecycle
- Knowledge of regulatory frameworks impacting AI and data (e.g., GDPR, HIPAA, CCPA)
- Hands-on experience with AI monitoring tools or platforms that support model auditing and anomaly detection
- Familiarity with AI fairness, bias mitigation, explainability, and robustness assessment techniques
Thought Leadership
- Provide thought leadership to fellow team members across business and technical project dimensions, solving complex business requirements.
- Demonstrate forward thinking around where the organization is going and how technology can support these efforts.
- Advocate and define the security architecture vision from a strategic perspective, including internal and external platforms, tools, and systems.
Cross-Functional Collaboration
- Drive scope definition, requirements analysis, functional and technical design, product configuration, and production deployment.
- Ensure delivered solutions meet/perform to technical and functional/non-functional requirements.
- Provide technical expertise and ownership in the diagnosis and resolution of an issue, including the determination and provision of a workaround solution or escalation to service owners.
- Ensure delivered solutions are realized in the time frame committed; work in conjunction with project sponsors to size and manage scope and risk.
- Provide support and technical governance expertise related to cloud architectures, deployment, and operations.
Mentoring
- Act as coach and mentor to team members and technical staff on their assigned project tasks.
- Lead the definition and development of cloud reference architecture and management systems.
Conduct project reviews with team members.
Requisites
- Bachelor's degree in computer science, computer engineering, information technology, or a relevant field.
- 12+ years of overall experience, with proven experience (3+ years) in AI governance, AI risk management, or AI security, preferably in ITES or technology-driven environments.
- Positive attitude and a strong commitment to delivering quality work.
- Effective communication skills (written and verbal) to properly articulate complicated cloud architecture and reports to management.
- Excellent analytical, problem-solving, and communication skills.
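The bias and fairness assessments this role oversees often start from simple group-rate comparisons. A minimal illustration, assuming binary predictions and a two-valued protected attribute; the 0.8 cutoff is the informal "80% rule", a convention rather than a legal test, and the data is made up:

```python
def demographic_parity_ratio(preds, groups):
    """Min/max ratio of positive-prediction rates across groups."""
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    return min(rates.values()) / max(rates.values())

preds  = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratio = demographic_parity_ratio(preds, groups)
print(f"parity ratio = {ratio:.2f}, flag for review = {ratio < 0.8}")
```

Governance platforms such as AI Fairness 360 compute this and many richer metrics (equalized odds, disparate impact) out of the box; the point here is only the shape of the check.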
Posted 2 weeks ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Data Scientist
Location: [Insert Location]
Experience: 5–10 years (flexible based on expertise)
Employment Type: Full-Time
Compensation: [Insert Budget / Competitive as per industry standards]
About the Role: We are looking for a highly skilled and innovative Data Scientist with deep expertise in Machine Learning, AI, and Cloud Technologies to join our dynamic analytics team. The ideal candidate will have hands-on experience in NLP, LLMs, Computer Vision, and advanced statistical techniques, along with the ability to lead cross-functional teams and drive data-driven strategies in a fast-paced environment.
Key Responsibilities:
- Develop and deploy end-to-end machine learning pipelines including data preprocessing, modeling, evaluation, and production deployment.
- Work on cutting-edge AI/ML applications such as LLM fine-tuning, NLP, Computer Vision, Hybrid Recommendation Systems, and RAG/CAG techniques.
- Leverage platforms like AWS (SageMaker, EC2) and Databricks for scalable model development and deployment.
- Handle data at scale using Spark, Python, and SQL, and integrate with NoSQL and Vector Databases (Neo4j, Cassandra).
- Design interactive dashboards and visualizations using Tableau for actionable insights.
- Collaborate with cross-functional stakeholders to translate business problems into analytical solutions.
- Guide data curation efforts and ensure high-quality training datasets for supervised and unsupervised learning.
- Lead initiatives around AutoML, XGBoost, Topic Modeling (LDA/LSA), Doc2Vec, and Object Detection & Tracking.
- Drive agile practices including Sprint Planning, Resource Allocation, and Change Management.
- Communicate results and recommendations effectively to executive leadership and business teams.
- Mentor junior team members and foster a culture of continuous learning and innovation.
Technical Skills Required:
- Programming: Python, SQL, Spark
- Machine Learning & AI: NLP, LLMs, Deep Learning, Computer Vision, Hybrid Recommenders
- Techniques: RAG, CAG, LLM Fine-Tuning, Statistical Modeling, AutoML, Doc2Vec
- Data Platforms: AWS (SageMaker, EC2), Databricks
- Databases: SQL, NoSQL, Neo4j, Cassandra, Vector DBs
- Visualization Tools: Tableau
Certifications (Preferred):
- IBM Data Science Specialization
- Deep Learning Nanodegree (Udacity)
- SAFe® DevOps Practitioner
- Certified Agile Scrum Master
Professional Competencies:
- Proven experience in team leadership, stakeholder management, and strategic planning.
- Strong cross-functional collaboration and ability to drive alignment across product, engineering, and analytics teams.
- Excellent problem-solving, communication, and decision-making skills.
- Ability to manage conflict resolution, negotiation, and performance optimization within teams.
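The RAG techniques listed above boil down, at retrieval time, to a nearest-neighbor search over embeddings. A toy sketch: real systems embed text with a model (e.g. a sentence-transformer) and store vectors in FAISS, Pinecone, or a similar store; the hand-made 2-D vectors here are illustrative stand-ins.

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=2):
    """Indices of the k document vectors most cosine-similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(d @ q)[::-1][:k]

query = np.array([1.0, 0.0])
docs = np.array([[1.0, 0.0],   # identical direction to the query
                 [0.0, 1.0],   # orthogonal
                 [0.9, 0.1]])  # close to the query
print(top_k(query, docs))      # most similar documents first
```

The retrieved chunks are then stuffed into the LLM prompt as context; the vector database's job is to make this argsort fast at millions of documents.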
Posted 2 weeks ago
5.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
Cognitive Computing Engineer
Required Skills
- 5+ years of experience in AWS Cognitive Services (relevant)
- Experience in AWS apps development and deployments
- Ability to evaluate cloud application requirements and make architectural recommendations for implementation, deployment, and provisioning applications on AWS
- Familiarity with AWS CLI, AWS APIs, AWS CloudFormation templates, the AWS Billing Console, and the AWS Management Console
- AWS SageMaker: end-to-end ML lifecycle management
- AWS Lambda & Step Functions: for serverless cognitive workflows
- Amazon Comprehend, Rekognition, and Transcribe: for text, image, and speech analysis
- IAM & Security: deep knowledge of AWS Identity and Access Management, encryption (KMS), and compliance
- Strong understanding of Bedrock and integrating with models
- Good experience in React-based front-end development and integrations
- JSX & Components: writing reusable functional and class components
- Hooks: deep understanding of useState, useEffect, useContext, useReducer, and custom hooks
- State Management: using Context API, Redux Toolkit, Zustand, or Recoil
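The serverless cognitive workflows mentioned above typically hang a Lambda handler off API Gateway. A minimal sketch of that handler shape: in a real deployment the sentiment line would call Amazon Comprehend's detect_sentiment via boto3, but a trivial keyword rule stands in here so the sketch runs locally without AWS credentials.

```python
import json

def lambda_handler(event, context):
    text = json.loads(event["body"])["text"]
    # Placeholder for: comprehend.detect_sentiment(Text=text, LanguageCode="en")
    sentiment = "POSITIVE" if "great" in text.lower() else "NEUTRAL"
    return {"statusCode": 200, "body": json.dumps({"sentiment": sentiment})}

# Shape of an API Gateway proxy-integration event (simplified).
event = {"body": json.dumps({"text": "The onboarding flow was great"})}
response = lambda_handler(event, None)
print(response)
```

Step Functions would then chain several such handlers (transcribe, analyze, store) into one auditable workflow.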
Soft Skills
- Excellent communication skills
- Team player
- Self-starter and highly motivated
- Ability to handle high-pressure and fast-paced situations
- Excellent presentation skills
- Ability to work with globally distributed teams
Roles and Responsibilities:
- Understand existing application architecture and solution design
- Design individual components and develop the components
- Work with other architects, leads, and team members in an agile scrum environment
- Hands-on development
- Design and develop applications that can be hosted on Azure cloud
- Design and develop framework and core functionality
- Identify gaps and come up with working solutions
- Understand enterprise application design frameworks and processes
- Lead or mentor junior and/or mid-level developers
- Review code and establish best practices
- Look out for the latest technologies, match them with EY use cases, and solve business problems efficiently
- Ability to look at the big picture
- Proven experience in designing highly secured and scalable web applications on Azure cloud
- Keep management up to date with progress
- Work under an Agile design and development framework
- Good hands-on development experience required
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
Posted 2 weeks ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Company Description
Quantanite is a business process outsourcing (BPO) and customer experience (CX) solutions company that helps fast-growing companies and leading global brands to transform and grow. We do this through a collaborative and consultative approach, rethinking business processes and ensuring our clients employ the optimal mix of automation and human intelligence. We’re an ambitious team of professionals spread across four continents and looking to disrupt our industry by delivering seamless customer experiences for our clients, backed up with exceptional results. We have big dreams and are constantly looking for new colleagues to join us who share our values, passion, and appreciation for diversity.
Job Description
About the Role
We are seeking a highly skilled Senior AI Engineer with deep expertise in agentic frameworks, Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) systems, MLOps/LLMOps, and end-to-end GenAI application development. In this role, you will design, develop, fine-tune, deploy, and optimize state-of-the-art AI solutions across diverse enterprise use cases including AI Copilots, Summarization, Enterprise Search, and Intelligent Tool Orchestration.
Key Responsibilities
- Develop and Fine-Tune LLMs (e.g., GPT-4, Claude, LLaMA, Mistral, Gemini) using instruction tuning, prompt engineering, chain-of-thought prompting, and fine-tuning techniques.
- Build RAG Pipelines: Implement Retrieval-Augmented Generation solutions leveraging embeddings, chunking strategies, and vector databases like FAISS, Pinecone, Weaviate, and Qdrant.
- Implement and Orchestrate Agents: Utilize frameworks like MCP, OpenAI Agent SDK, LangChain, LlamaIndex, Haystack, and DSPy to build dynamic multi-agent systems and serverless GenAI applications.
- Deploy Models at Scale: Manage model deployment using HuggingFace, Azure Web Apps, vLLM, and Ollama, including handling local models with GGUF, LoRA/QLoRA, PEFT, and quantization methods.
- Integrate APIs: Seamlessly integrate with APIs from OpenAI, Anthropic, Cohere, Azure, and other GenAI providers.
- Ensure Security and Compliance: Implement guardrails, perform PII redaction, ensure secure deployments, and monitor model performance using advanced observability tools.
- Optimize and Monitor: Lead LLMOps practices focusing on performance monitoring, cost optimization, and model evaluation.
- Work with AWS Services: Hands-on usage of AWS Bedrock, SageMaker, S3, Lambda, API Gateway, IAM, CloudWatch, and serverless computing to deploy and manage scalable AI solutions.
- Contribute to Use Cases: Develop AI-driven solutions like AI copilots, enterprise search engines, summarizers, and intelligent function-calling systems.
- Cross-functional Collaboration: Work closely with product, data, and DevOps teams to deliver scalable and secure AI products.
Qualifications
- 5+ years of experience in AI/ML roles, focusing on LLM agent development, data science workflows, and system deployment.
- A Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Demonstrated experience in designing domain-specific AI systems and integrating structured/unstructured data into AI models.
- Proficiency in designing scalable solutions using LangChain and vector databases.
- Deep knowledge of LLMs and foundational models (GPT-4, Claude, Mistral, LLaMA, Gemini).
- Strong expertise in prompt engineering, chain-of-thought reasoning, and fine-tuning methods.
- Proven experience building RAG pipelines and working with modern vector stores (FAISS, Pinecone, Weaviate, Qdrant).
- Hands-on proficiency in LangChain, LlamaIndex, Haystack, and DSPy frameworks.
- Model deployment skills using HuggingFace, vLLM, Ollama, and handling LoRA/QLoRA, PEFT, GGUF models.
- Practical experience with AWS serverless services: Lambda, S3, API Gateway, IAM, CloudWatch.
- Strong coding ability in Python or similar programming languages.
- Experience with MLOps/LLMOps for monitoring, evaluation, and cost management.
Familiarity with security standards: guardrails, PII protection, secure API interactions. Use Case Delivery Experience: Proven record of delivering AI Copilots, Summarization engines, or Enterprise GenAI applications. Additional Information Benefits At Quantanite, we ask a lot of our associates, which is why we give so much in return. In addition to your compensation, our perks include: Dress: Wear anything you like to the office. We want you to feel as comfortable as when working from home. Employee Engagement: Experience our family community and embrace our culture where we bring people together to laugh and celebrate our achievements. Professional development: We love giving back and ensure you have opportunities to grow with us and even travel on occasion. Events: Regular team and organisation-wide get-togethers and events. Value orientation: Everything we do at Quantanite is informed by our Purpose and Values. We Build Better. Together. Future development: At Quantanite, you’ll have a personal development plan to help you improve in the areas you’re looking to develop over the coming years. Your manager will dedicate time and resources to supporting you in getting you to the next level. You’ll also have the opportunity to progress internally. As a fast-growing organization, our teams are growing, and you’ll have the chance to take on more responsibility over time. So, if you’re looking for a career full of purpose and potential, we’d love to hear from you!
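The chunking strategies called out in the RAG responsibilities above are often just fixed-size windows with overlap. A minimal sketch, assuming character-based sizes; production systems commonly chunk by tokens or sentence boundaries instead.

```python
def chunk_text(text, size=200, overlap=50):
    """Split text into overlapping fixed-size chunks for embedding."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

chunks = chunk_text("x" * 500, size=200, overlap=50)
print([len(c) for c in chunks])  # last chunk holds the remainder
```

The overlap keeps a sentence that straddles a boundary fully inside at least one chunk, at the cost of storing some text twice in the vector store.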
Posted 2 weeks ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Who We Are:
Netcore Cloud is a MarTech platform helping businesses design, execute, and optimize campaigns across multiple channels. With a strong focus on leveraging data, machine learning, and AI, we empower our clients to make smarter marketing decisions and deliver exceptional customer experiences. Our team is passionate about innovation and collaboration, and we are looking for a talented Senior Data Scientist to join our team and work on impactful, high-scale projects.
Role Summary:
As a Senior Data Scientist, you’ll work on developing, deploying, and optimizing advanced machine learning solutions across a range of problem statements such as personalization, customer segmentation, predictive modeling, and NLP. This is a high-impact role that will allow you to collaborate across cross-functional teams and contribute directly to our product innovation roadmap.
What You'll Do:
Model Development and Innovation
- Design and implement ML models for problems including recommendation engines, user segmentation, conversion prediction, and NLP-based automation.
- Stay on top of the latest research in AI/ML and evaluate new approaches (e.g., LLMs, generative AI) for practical application.
- Optimise model performance, scalability, and interpretability for production systems.
MLOps & Deployment
- Contribute to the deployment of models into production environments using MLOps best practices.
- Leverage platforms such as AWS SageMaker or Google Vertex AI for training, tuning, and scaling models.
Data & Experimentation
- Work with large-scale, real-time datasets to derive actionable insights and build data pipelines for ML training and evaluation.
- Design and run experiments (A/B testing, uplift modeling, etc.) to validate hypotheses and improve product KPIs.
Technology and Tools
- Work with large-scale datasets, ensuring data quality and scalability of solutions.
- Leverage cloud platforms like AWS, GCP for model training and deployment.
Utilize tools and libraries such as Python, TensorFlow, PyTorch, Scikit-learn, and Spark for development. With so much innovation happening around Gen AI and LLMs, we prefer candidates who have already gained exposure to this area via AWS Bedrock or Google Vertex AI.
Cross-functional Collaboration
Partner with product, engineering, and marketing teams to understand business requirements and translate them into data science solutions.
Who You Are:
Education: Bachelor's from a Tier 1 premier institute in a relevant field, or a Master's or PhD in Computer Science, Data Science, Mathematics, or a related field.
Experience:
- 4–6 years of hands-on experience in machine learning or data science roles.
- Proven expertise in machine learning, deep learning, NLP, and recommendation systems.
- Hands-on experience deploying ML models in production at scale.
- Experience in a product-focused or customer-facing domain such as Martech, Adtech, or B2B SaaS is a plus.
Technical Skills:
- Proficiency in Python, SQL, and ML frameworks like TensorFlow or PyTorch.
- Strong understanding of statistical methods, predictive modeling, and algorithm design.
- Familiarity with cloud-based solutions (AWS SageMaker, GCP AI Platform, or similar).
Soft Skills:
- Strong analytical and problem-solving mindset.
- Excellent communication skills to articulate data-driven insights.
- A passion for innovation and staying up-to-date with the latest trends in AI/ML.
Why Join Us:
- Opportunity to work on cutting-edge AI/ML projects impacting millions of users.
- Be part of a collaborative, innovation-driven team in a fast-growing Martech company.
- Competitive compensation and growth opportunities in a fast-paced environment.
We’d love to hear how your background can contribute to building the next generation of intelligent marketing solutions. Apply now or reach out for a conversation. :)
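The A/B testing work mentioned under Data & Experimentation usually comes down to a two-proportion z-test on conversion rates. A sketch with made-up conversion counts; large samples are assumed, and 1.96 is the two-sided 5% critical value.

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference in conversion rate between A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = ab_z_score(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"z = {z:.2f}, significant at 5% = {abs(z) > 1.96}")
```

Uplift modeling goes a step further, estimating per-user treatment effects rather than one aggregate lift, but this aggregate test is the usual first gate.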
Posted 2 weeks ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Role: We are seeking a highly skilled and experienced Machine Learning Engineer to join our dynamic team. As a Machine Learning Engineer, you will be responsible for the design, development, deployment, and maintenance of machine learning models and systems that drive our [mention specific business area or product, e.g., recommendation engine, fraud detection system, autonomous vehicles]. You will work closely with data scientists, software engineers, and product managers to translate business needs into scalable and reliable machine learning solutions. This is a key role in shaping the future of CBRE and requires a strong technical foundation combined with a passion for innovation and problem-solving.
Responsibilities:
Model Development & Deployment:
- Design, develop, and deploy machine learning models using various algorithms (e.g., regression, classification, clustering, deep learning) to solve complex business problems.
- Select appropriate datasets and features for model training, ensuring data quality and integrity.
- Implement and optimize model training pipelines, including data preprocessing, feature engineering, model selection, and hyperparameter tuning.
- Deploy models to production environments using containerization technologies (e.g., Docker, Kubernetes) and cloud platforms (e.g., AWS, GCP, Azure).
- Monitor model performance in production, identify and troubleshoot issues, and implement model retraining and updates as needed.
Infrastructure & Engineering:
- Develop and maintain APIs for model serving and integration with other systems.
- Write clean, well-documented, and testable code.
- Collaborate with software engineers to integrate models into existing products and services.
Research & Innovation:
- Stay up to date with the latest advancements in machine learning and related technologies.
- Research and evaluate new algorithms, tools, and techniques to improve model performance and efficiency.
- Contribute to the development of new machine learning solutions and features.
- Proactively identify opportunities to leverage machine learning to solve business challenges.
Collaboration & Communication:
- Collaborate effectively with data scientists, software engineers, product managers, and other stakeholders.
- Communicate technical concepts and findings clearly and concisely to both technical and non-technical audiences.
- Participate in code reviews and contribute to the team's knowledge sharing.
Qualifications:
- Experience: 7+ years of experience in machine learning engineering or a related field.
Technical Skills:
- Programming Languages: Proficient in Python; experience with other languages (e.g., Java, Scala, R) is a plus.
- Machine Learning Libraries: Strong experience with machine learning libraries and frameworks such as scikit-learn, TensorFlow, PyTorch, Keras, etc.
- Data Processing: Experience with data manipulation and processing using libraries like Pandas, NumPy, and Spark.
- Model Deployment: Experience with model deployment frameworks and platforms (e.g., TensorFlow Serving, TorchServe, Seldon, AWS SageMaker, Google AI Platform, Azure Machine Learning).
- Databases: Experience with relational and NoSQL databases (e.g., SQL, MongoDB, Cassandra).
- Version Control: Experience with Git and other version control systems.
- DevOps: Familiarity with DevOps practices and tools.
- Strong understanding of machine learning concepts and algorithms: regression, classification, clustering, deep learning, etc.
Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
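The hyperparameter tuning step in the training pipelines above is, at its simplest, an exhaustive grid search over a validation score. In this sketch `evaluate` is a made-up stand-in for a real train-and-validate step (e.g. cross-validation with scikit-learn); its score surface is rigged to peak at lr=0.1, depth=5 purely for illustration.

```python
import itertools

def evaluate(lr, depth):
    """Hypothetical validation score; a real pipeline would train a model here."""
    return 0.9 - abs(lr - 0.1) - 0.01 * abs(depth - 5)

grid = {"lr": [0.01, 0.1, 0.5], "depth": [3, 5, 8]}
best = max(
    (dict(zip(grid, values)) for values in itertools.product(*grid.values())),
    key=lambda params: evaluate(**params),
)
print(best)
```

Grid search is transparent but scales exponentially in the number of parameters; random search or Bayesian optimization is the usual next step when the grid gets large.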
Posted 2 weeks ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Your Role
The technology that once promised to simplify patient care has brought more issues than anyone ever anticipated. At Innovaccer, we defeat this beast by making full use of all the data healthcare has worked so hard to collect, and replacing long-standing problems with ideal solutions. Data is our bread and butter for innovation. We are looking for a Staff Data Scientist who understands healthcare data and can leverage it to build algorithms that personalize treatments based on the clinical and behavioral history of patients. We are looking for a superstar who will define and build the next generation of predictive analytics tools in healthcare.
Analytics at Innovaccer
Our analytics team is dedicated to weaving analytics and data science magic across our products. They are the owners and custodians of the intelligence behind our products. With their expertise and innovative approach, they play a crucial role in building various analytical models (including descriptive, predictive, and prescriptive) to help our end-users make smart decisions.
Their focus on continuous improvement and cutting-edge methodologies ensures that they're always creating market-leading solutions that propel our products to new heights of success.
A Day in the Life
- Design and lead the development of various artificial intelligence initiatives to help improve the health and wellness of patients
- Work with business leaders and customers to understand their pain points and build large-scale solutions for them
- Define technical architecture to productize Innovaccer's machine-learning algorithms and take them to market with partnerships with different organizations
- Break down complex business problems into machine learning problems and design solution workflows
- Work with our data platform and applications team to help them successfully integrate the data science capability or algorithms in their product/workflows
- Work with development teams to build tools for repeatable data tasks that will accelerate and automate the development cycle
- Define and execute on the quarterly roadmap
What You Need
- Masters in Computer Science, Computer Engineering or other relevant fields (PhD preferred)
- 7+ years of experience in Data Science (healthcare experience is a plus)
- Strong written and spoken communication skills
- Strong hands-on experience in Python - building enterprise applications along with optimization techniques
- Strong experience with deep learning techniques to build NLP/computer vision models as well as state-of-the-art GenAI pipelines - knowledge of implementing agentic workflows is a plus
- Demonstrable experience deploying deep learning models in production at scale with iterative improvements - requires hands-on expertise with at least one deep learning framework such as PyTorch or TensorFlow
- Keen interest in research; stays updated with key advancements in AI and ML in the industry
- Deep understanding of classical ML techniques - Random Forests, SVM, Boosting, Bagging - and building training and
evaluation pipelines
- Demonstrated experience with global and local model explainability using LIME, SHAP, and associated techniques
- Hands-on experience with at least one ML platform among Databricks, Azure ML, and SageMaker
- Experience in developing and deploying production-ready models
- Knowledge of implementing an MLOps framework
- A customer-focused attitude through conversations and documentation
We offer competitive benefits to set you up for success in and outside of work.
Here's What We Offer
- Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days.
- Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition.
- Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
- Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury.
- Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. *Noida office only
- Creche Facility for children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. *India offices
Where And How We Work
Our Noida office is situated in a posh techspace, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team. Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.
Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our Px department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.
About Innovaccer
Innovaccer Inc. is the data platform that accelerates innovation. The Innovaccer platform unifies patient data across systems and care settings, and empowers healthcare organizations with scalable, modern applications that improve clinical, financial, operational, and experiential outcomes. Innovaccer's EHR-agnostic solutions have been deployed across more than 1,600 hospitals and clinics in the US, enabling care delivery transformation for more than 96,000 clinicians, and helping providers work collaboratively with payers and life sciences companies. Innovaccer has helped its customers unify health records for more than 54 million people and generate over $1.5 billion in cumulative cost savings. The Innovaccer platform is the #1 rated Best-in-KLAS data and analytics platform by KLAS, and the #1 rated population health technology platform by Black Book. For more information, please visit innovaccer.com. Check us out on YouTube, Glassdoor, LinkedIn, and innovaccer.com.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
PwC AC is hiring for Data Scientist. Apply and get a chance to work with one of the Big4 companies #PwC AC.
Job Title: Data Scientist
Years of Experience: 3-7 years
Shift Timings: 11AM-8PM
Qualification: Graduate and above (Full time)
About PwC CTIO – AI Engineering
PwC’s Commercial Technology and Innovation Office (CTIO) is at the forefront of emerging technology, focused on building transformative AI-powered products and driving enterprise innovation. The AI Engineering team within CTIO is dedicated to researching, developing, and operationalizing cutting-edge technologies such as Generative AI, Large Language Models (LLMs), AI Agents, and more. Our mission is to continuously explore what's next, enabling business transformation through scalable AI/ML solutions while remaining grounded in research, experimentation, and engineering excellence.
Role Overview
We are seeking a Senior Associate – Data Science/ML/DL/GenAI to join our high-impact, entrepreneurial team. This individual will play a key role in designing and delivering scalable AI applications, conducting applied research in GenAI and deep learning, and contributing to the team’s innovation agenda. This is a hands-on, technical role ideal for professionals passionate about AI-driven transformation.
Key Responsibilities
- Design, develop, and deploy machine learning, deep learning, and Generative AI solutions tailored to business use cases.
- Build scalable pipelines using Python (and frameworks such as Flask/FastAPI) to operationalize data science models in production environments.
- Prototype and implement solutions using state-of-the-art LLM frameworks such as LangChain, LlamaIndex, LangGraph, or similar; also develop applications in Streamlit/Chainlit for demo purposes.
- Design advanced prompts and develop agentic LLM applications that autonomously interact with tools and APIs.
- Fine-tune and pre-train LLMs (HuggingFace and similar libraries) to align with business objectives.
- Collaborate in a cross-functional setup with ML engineers, architects, and product teams to co-develop AI solutions.
- Conduct R&D in NLP, CV, and multi-modal tasks, and evaluate model performance with production-grade metrics.
- Stay current with AI research and industry trends; continuously upskill to integrate the latest tools and methods into the team’s work.
Required Skills & Experience
- 3 to 7 years of experience in Data Science/ML/AI roles.
- Bachelor’s degree in Computer Science, Engineering, or equivalent technical discipline (BE/BTech/MCA).
- Proficiency in Python and related data science libraries: Pandas, NumPy, SciPy, Scikit-learn, TensorFlow, PyTorch, Keras, etc.
- Hands-on experience with Generative AI, including prompt engineering, LLM fine-tuning, and deployment.
- Experience with agentic LLMs and task orchestration using tools like LangGraph or AutoGPT-like flows.
- Strong knowledge of NLP techniques, transformer architectures, and text analysis.
- Proven experience working with cloud platforms (preferably Azure; AWS/GCP also considered).
- Understanding of production-level AI systems including CI/CD, model monitoring, and cloud-native architecture. (Need not develop from scratch.)
- Familiarity with ML algorithms: XGBoost, GBM, k-NN, SVM, Decision Forests, Naive Bayes, Neural Networks, etc.
- Exposure to deploying AI models via APIs and integration into larger data ecosystems.
- Strong understanding of model operationalization and lifecycle management.
Good to Have
- Experience with Docker, Kubernetes, and containerized deployments for ML workloads.
- Use of MLOps tooling and pipelines (e.g., MLflow, Azure ML, SageMaker, etc.).
- Experience in full-stack AI applications, including visualization (e.g., PowerBI, D3.js).
- Demonstrated track record of delivering AI-driven solutions as part of large-scale systems.
Soft Skills & Team Expectations
- Strong written and verbal communication; able to explain complex models to business stakeholders.
- Ability to independently document work, manage requirements, and self-drive technical discovery.
- Desire to innovate, improve, and automate existing processes and solutions.
- Active contributor to team knowledge sharing, technical forums, and innovation drives.
- Strong interpersonal skills to build relationships across cross-functional teams.
- A mindset of continuous learning and technical curiosity.

Preferred Certifications (at least two are preferred)
- Certifications in Machine Learning, Deep Learning, or Natural Language Processing.
- Python programming certifications (e.g., PCEP/PCAP).
- Cloud certifications (Azure/AWS/GCP) such as Azure AI Engineer, AWS ML Specialty, etc.

Why Join PwC CTIO?
- Be part of a mission-driven AI innovation team tackling industry-wide transformation challenges.
- Gain exposure to bleeding-edge GenAI research, rapid prototyping, and product development.
- Contribute to a diverse portfolio of AI solutions spanning pharma, finance, and core business domains.
- Operate in a startup-like environment within the safety and structure of a global enterprise.
- Accelerate your career as a deep tech leader in an AI-first future.
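The "agentic LLM applications that autonomously interact with tools" responsibility above can be sketched as a minimal tool-calling loop. This is an illustrative sketch only: `fake_llm` is a hypothetical stand-in for a hosted model (in a LangChain/LangGraph app the framework manages this loop), so the control flow runs as-is.

```python
def calculator(expression: str) -> str:
    """A tool the agent can call; evaluates simple arithmetic."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model: decides whether to call a tool."""
    if "42 * 17" in prompt and "Observation" not in prompt:
        return "Action: calculator[42 * 17]"
    return "Final Answer: 714"

def run_agent(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        reply = fake_llm(prompt)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        # Parse "Action: tool[input]" and execute the named tool.
        tool_name, _, arg = reply.removeprefix("Action: ").partition("[")
        observation = TOOLS[tool_name](arg.rstrip("]"))
        prompt += f"\n{reply}\nObservation: {observation}"
    raise RuntimeError("agent did not finish")

print(run_agent("What is 42 * 17?"))  # prints 714
```

The loop alternates model replies with tool observations until the model emits a final answer, which is the core pattern behind agent frameworks.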
Posted 2 weeks ago
7.0 - 12.0 years
22 - 25 Lacs
India
On-site
TECHNICAL ARCHITECT

Key Responsibilities
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. Also communicate the value of a solution to stakeholders and clients.
3. Managing stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem solving: Identify and troubleshoot technical problems in existing or new systems, and assist with solving them when they arise.
6. Ensuring quality: Ensure systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals.
7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & framework expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks, and DevOps practices.
9. Continuous improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge or experience working with self-hosted or managed LLMs.
3. Knowledge or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers) and familiarity with computer vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge of back-end frameworks (e.g., Django, Spring Boot, Node.js, Express) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform).
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge or experience in CI/CD, IaC, and cloud-native toolchains.
14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication.
15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.
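Of the design patterns named above (Factory, Singleton, Observer), the Observer pattern is the one most tied to event-driven architectures. A minimal sketch, with illustrative names not taken from any specific framework:

```python
class EventBus:
    """Subject: keeps a list of subscribers and notifies them of events."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, event):
        # Notify every registered observer, in subscription order.
        for callback in self._subscribers:
            callback(event)

received = []
bus = EventBus()
bus.subscribe(received.append)              # observer 1: record events
bus.subscribe(lambda e: print("got", e))    # observer 2: log events

bus.publish("model_deployed")
print(received)  # prints ['model_deployed']
```

Decoupling publishers from subscribers this way is what lets new consumers be added without touching the producing code, the reusability the skills list is pointing at.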
Experience Required: Technical Architect with 7-12 years of experience
Salary: 22-25 LPA
Job Types: Full-time, Permanent
Pay: ₹2,200,000.00 - ₹2,500,000.00 per year
Work Location: In person
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Hi... We are looking for a Data Scientist with Artificial Intelligence & Machine Learning experience.

Work Location: Hyderabad only
Experience Range: 4 to 8 years

Responsibilities:
- Design and build intelligent agents (RAG, task agents, decision bots) for use in credit, customer service, or analytics workflows (Finance domain).
- Deploy and manage AI models in production using AWS AI/ML services (SageMaker, Lambda, Bedrock, etc.).
- Work with Python and SQL to preprocess, transform, and analyze large volumes of structured and semi-structured data.
- Collaborate with data scientists, data engineers, and business stakeholders to convert ML prototypes into scalable services.
- Automate the lifecycle of AI/ML solutions using MLOps practices (model versioning, CI/CD, model monitoring).
- Leverage vector databases (like Pinecone or OpenSearch) and foundation models to build conversational or retrieval-based solutions.
- Ensure proper governance, logging, and testing of AI solutions in line with RBI and internal guidelines.
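The retrieval step of the RAG solutions described above can be sketched without a real vector database. This is a toy: the three-dimensional embeddings and document texts are illustrative, and a production system would delegate the similarity search to Pinecone, OpenSearch, or similar.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical document store: (text, embedding) pairs.
DOCS = [
    ("Credit limit policy for new customers", [0.9, 0.1, 0.0]),
    ("Steps to reset an online banking password", [0.1, 0.9, 0.2]),
    ("Quarterly anomaly detection report", [0.2, 0.1, 0.9]),
]

def retrieve(query_embedding, k=1):
    """Return the k document texts most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_embedding, d[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# A query embedded near the "credit" axis ranks the credit-policy doc first.
print(retrieve([1.0, 0.0, 0.1]))  # prints ['Credit limit policy for new customers']
```

The retrieved texts would then be placed into the LLM prompt as grounding context, which is the "augmented generation" half of RAG.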
Posted 2 weeks ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description

Role Proficiency: Leverage expertise in a technology area (e.g. Informatica transformations, Teradata data warehouse, Hadoop, Analytics). Responsible for architecture for small/mid-size projects.

Outcomes
- Implement data extraction and transformation, a data warehouse (ETL, data extracts, data load logic, mapping, workflows, stored procedures), a data analysis solution, data reporting solutions, or cloud data tools in any one of the cloud providers (AWS/Azure/GCP).
- Understand business workflows and related data flows. Develop designs for data acquisition and data transformation or data modelling; apply business intelligence on data or design data fetching and dashboards.
- Design information structure, work- and dataflow navigation. Define backup, recovery, and security specifications.
- Enforce and maintain naming standards and a data dictionary for data models.
- Provide or guide the team to perform estimates.
- Help the team develop proofs of concept (POC) and solutions relevant to customer problems.
- Able to troubleshoot problems while developing POCs.
- Architect/Big Data speciality certification (AWS/Azure/GCP/general, for example Coursera or a similar learning platform/any ML).

Measures of Outcomes
- Percentage of billable time spent in a year developing and implementing data transformation or data storage.
- Number of best practices documented for any new tool or technology emerging in the market.
- Number of associates trained on the data service practice.

Outputs Expected

Strategy & Planning
- Create or contribute short-term tactical solutions to achieve long-term objectives and an overall data management roadmap.
- Implement methods and procedures for tracking data quality, completeness, redundancy, and improvement.
- Ensure that data strategies and architectures meet regulatory compliance requirements.
- Begin engaging external stakeholders, including standards organizations, regulatory bodies, operators, and scientific research communities, or attend conferences with respect to data in the cloud.

Operational Management
- Help architects establish governance, stewardship, and frameworks for managing data across the organization.
- Provide support in implementing the appropriate tools, software, applications, and systems to support data technology goals.
- Collaborate with project managers and business teams for all projects involving enterprise data.
- Analyse data-related issues with systems integration, compatibility, and multi-platform integration.

Project Control and Review
- Provide advice to teams facing complex technical issues in the course of project delivery.
- Define and measure project- and program-specific architectural and technology quality metrics.

Knowledge Management & Capability Development
- Publish and maintain a repository of solutions, best practices, standards, and other knowledge articles for data management.
- Conduct and facilitate knowledge sharing and learning sessions across the team.
- Gain industry-standard certifications on technology or area of expertise.
- Support technical skill building (including hiring and training) for the team based on inputs from the project manager/RTEs.
- Mentor new members of the team in technical areas.
- Gain and cultivate domain expertise to provide the best and optimized solution to the customer (delivery).

Requirement Gathering and Analysis
- Work with customer business owners and other teams to collect, analyze, and understand requirements, including NFRs/defining NFRs.
- Analyze gaps/trade-offs based on the current system context and industry practices; clarify requirements by working with the customer.
- Define the systems and sub-systems that make up the programs.

People Management
- Set goals and manage the performance of team engineers.
- Provide career guidance to technical specialists and mentor them.

Alliance Management
- Identify alliance partners based on an understanding of service offerings and client requirements.
- In collaboration with the architect, create a compelling business case around the offerings.
- Conduct beta testing of the offerings and relevance to the program.

Technology Consulting
- In collaboration with Architects II and III, analyze the application and technology landscape, processes, and tools to arrive at the architecture options best fit for the client program.
- Analyze cost vs. benefits of solution options.
- Support Architects II and III in creating a technology/architecture roadmap for the client.
- Define the architecture strategy for the program.

Innovation and Thought Leadership
- Participate in internal and external forums (seminars, paper presentations, etc.).
- Understand the client's existing business at the program level and explore new avenues to save cost and bring process efficiency.
- Identify business opportunities to create reusable components/accelerators, and reuse existing components and best practices.

Project Management Support
- Assist the PM/Scrum Master/Program Manager to identify technical risks and come up with mitigation strategies.

Stakeholder Management
- Monitor the concerns of internal stakeholders like Product Managers and RTEs, and external stakeholders like client architects, on architecture aspects. Follow through on commitments to achieve timely resolution of issues.
- Conduct initiatives to meet client expectations.
- Work to expand the professional network in the client organization at team and program levels.

New Service Design
- Identify potential opportunities for new service offerings based on customer voice/partner inputs.
- Conduct beta testing/POCs as applicable.
- Develop collateral and guides for GTM.

Skill Examples
- Use data services knowledge to create POCs that meet business requirements; contextualize the solution to the industry under the guidance of architects.
- Use technology knowledge to create proofs of concept (POC)/(reusable) assets under the guidance of the specialist. Apply best practices in own area of work, helping with performance troubleshooting and other complex troubleshooting.
- Define, decide, and defend the technology choices made; review solutions under guidance.
- Use knowledge of technology trends to provide inputs on potential areas of opportunity for UST.
- Use independent knowledge of design patterns, tools, and principles to create high-level designs for the given requirements. Evaluate multiple design options and choose the appropriate options for the best possible trade-offs. Conduct knowledge sessions to enhance the team's design capabilities. Review the low- and high-level designs created by specialists for efficiency (consumption of hardware, memory, memory leaks, etc.).
- Use knowledge of software development processes, tools, and techniques to identify and assess incremental improvements for the software development process, methodology, and tools. Take technical responsibility for all stages of the software development process. Conduct optimal coding with a clear understanding of memory leakage and related impacts.
- Implement global standards and guidelines relevant to programming and development; come up with points of view and new technological ideas.
- Use knowledge of project management and agile tools and techniques to support, plan, and manage medium-size projects/programs as defined within UST, identifying risks and mitigation strategies.
- Use knowledge of project metrics to understand their relevance to the project. Collect and collate project metrics and share them with the relevant stakeholders.
- Use knowledge of estimation and resource planning to create estimates and plan resources for specific modules or small projects with detailed requirements or user stories in place.
- Strong proficiency in understanding data workflows and dataflow.
- Attention to detail.
- High analytical capability.

Knowledge Examples
- Data visualization
- Data migration
- RDBMSs (relational database management systems), SQL
- Hadoop technologies like MapReduce, Hive, and Pig
- Programming languages, especially Python and Java
- Operating systems like UNIX and MS Windows
- Backup/archival software

Additional Comments

AI Architect

Role Summary: Hands-on AI Architect with strong expertise in Deep Learning, Generative AI, and real-world AI/ML systems. The role involves leading the architecture, development, and deployment of AI agent-based solutions, supporting initiatives such as intelligent automation, anomaly detection, and GenAI-powered assistants across enterprise operations and engineering. This is a hands-on role ideal for someone who thrives in fast-paced environments, is passionate about AI innovations, and can adapt across multiple opportunities based on business priorities.

Key Responsibilities:
- Design and architect AI-based solutions, including multi-agent GenAI systems using LLMs and RAG pipelines.
- Build POCs, prototypes, and production-grade AI components for operations, support automation, and intelligent assistants.
- Lead end-to-end development of AI agents for use cases such as triage, RCA automation, and predictive analytics.
- Leverage GenAI (LLMs) and time series models to drive intelligent observability and performance management.
- Work closely with product, engineering, and operations teams to align solutions with domain and customer needs.
- Own the model lifecycle from experimentation to deployment using modern MLOps and LLMOps practices.
- Ensure scalable, secure, and cost-efficient implementation across AWS and Azure cloud environments.

Key Skills & Technology Areas:
- AI/ML expertise: 8+ years in AI/ML, with hands-on experience in deep learning, model deployment, and GenAI.
- LLMs & frameworks: GPT-3+, Claude, Llama 3, LangChain, LangGraph, Transformers (BERT, T5), RAG pipelines, LLMOps.
- Programming: Python (advanced), Keras, PyTorch, Pandas, FastAPI, Celery (for agent orchestration), Redis.
- Modeling & analytics: time series forecasting, predictive modeling, synthetic data generation.
- Data & storage: ChromaDB, Pinecone, FAISS, DynamoDB, PostgreSQL, Azure Synapse, Azure Data Factory.
- Cloud & tools: AWS (Bedrock, SageMaker, Lambda); Azure (Azure ML, Azure Databricks, Synapse); GCP (Vertex AI, optional).
- Observability integration: Splunk, ELK Stack, Prometheus.
- DevOps/MLOps: Docker, GitHub Actions, Kubernetes, CI/CD pipelines, model monitoring and versioning.
- Architectural patterns: microservices, event-driven architecture, multi-agent systems, API-first design.

Other Requirements:
- Proven ability to work independently and collaboratively in agile, innovation-driven teams.
- Strong problem-solving mindset and product-oriented thinking.
- Excellent communication and technical storytelling skills.
- Flexibility to work across multiple opportunities based on business priorities.
- Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus.
Skills: Python, Pandas, AI/ML, GenAI
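The anomaly detection named in the AI Architect summary can be sketched as a simple z-score check over an observability metric. This is a standard-library sketch under illustrative data and threshold; production observability stacks would use proper time series models rather than a global mean.

```python
import statistics

def detect_anomalies(series, threshold=3.0):
    """Return indices whose value deviates from the series mean by more
    than `threshold` population standard deviations."""
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    return [i for i, x in enumerate(series)
            if stdev > 0 and abs(x - mean) / stdev > threshold]

latencies = [12, 11, 13, 12, 11, 12, 95, 13, 12]  # one obvious spike
print(detect_anomalies(latencies, threshold=2.0))  # prints [6]
```

In a triage or RCA-automation flow, the flagged indices would feed downstream agents that correlate the spike with logs or deploy events.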
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position: Data Scientist
Location: Chennai, India (work from office)
Experience: 2-5 years

About the Opportunity:
Omnihire is seeking a Data Scientist to join a leading AI-driven data-solutions company. As part of the Data Consulting team, you'll collaborate with scientists, IT, and engineering to solve high-impact problems and deliver actionable insights.

Key Responsibilities:
- Analyze large structured and unstructured datasets (SQL, Hadoop/Spark) to extract business-critical insights.
- Build and validate statistical models (regression, classification, time series, segmentation) and machine learning algorithms (Random Forest, boosting, SVM, KNN).
- Develop deep learning solutions (CNN, RNN, LSTM, transfer learning) and apply NLP techniques (tokenization, stemming/lemmatization, NER, LSA).
- Write production-quality code in Python and/or R using libraries such as scikit-learn, TensorFlow/PyTorch, pandas, NumPy, and NLTK/spaCy.
- Collaborate with cross-functional teams to scope requirements, propose analytics solutions, and present findings via clear visualizations (Power BI, Matplotlib).
- Own end-to-end ML pipelines: data ingestion → preprocessing → feature engineering → model training → evaluation → deployment.
- Contribute to solution proposals and maintain documentation for data schemas, model architectures, and experiment tracking (Git, MLflow).

Required Qualifications:
- Bachelor's or Master's in Computer Science, Statistics, Mathematics, Data Science, or a related field.
- 2-5 years of hands-on experience as a Data Scientist (or similar) in a data-driven environment.
- Proficiency in Python and/or R for statistical modeling and ML.
- Strong SQL skills and familiarity with Big Data platforms (e.g., Hadoop, Apache Spark).
- Demonstrated experience building, validating, and deploying ML/DL models in production or staging.
- Excellent problem-solving skills, attention to detail, and the ability to communicate technical concepts clearly.
- Self-starter who thrives in a collaborative, Agile environment.

Nice-to-Have:
- Active GitHub/Kaggle portfolio showcasing personal projects or contributions.
- Exposure to cloud-based ML services (Azure ML Studio, AWS SageMaker) and containerization (Docker).
- Familiarity with advanced NLP frameworks (e.g., Hugging Face Transformers) or production monitoring tools (Azure Monitor, Prometheus).

Why Join?
- Work on high-impact AI/ML projects that drive real business value.
- Rapid skill development with exposure to cutting-edge technologies.
- Collaborative, Agile culture with mentorship from senior data scientists.
- Competitive compensation package and comprehensive benefits.
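The pipeline stages listed in the responsibilities (ingestion → preprocessing → training → evaluation) can be sketched end to end with a closed-form simple linear regression, so it runs with the standard library alone. The data values are illustrative; a real pipeline would use scikit-learn or similar.

```python
import statistics

# Ingestion: toy dataset of (feature, target) pairs.
rows = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.1), (4.0, 8.0)]

# Preprocessing: split columns.
xs = [x for x, _ in rows]
ys = [y for _, y in rows]

# Training: closed-form least squares for y = a*x + b.
mean_x, mean_y = statistics.fmean(xs), statistics.fmean(ys)
a = sum((x - mean_x) * (y - mean_y) for x, y in rows) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# Evaluation: mean absolute error on the training data.
mae = statistics.fmean(abs(a * x + b - y) for x, y in rows)
print(round(a, 2), round(b, 2), round(mae, 2))  # prints 1.99 0.05 0.07
```

Each stage here maps one-to-one onto a step a deployment-ready pipeline (e.g. tracked in MLflow) would run as a separate, versioned task.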
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
About Netskope
Today, there are more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network, and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive, and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization, and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing multi-agent and agentic RAG workflows in production. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems.
Our customers depend on us to provide accurate, real-time, and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help build an embedded AI CoPilot across the different products at Netskope.

What's In It For You
- You will be part of a growing team of renowned industry experts in the exciting space of data and cloud analytics.
- Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
- You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills.

What You Will Be Doing
- Drive the end-to-end development and deployment of CoPilot, an embedded assistant powered by cutting-edge multi-agent workflows. This will involve designing and implementing complex interactions between various AI agents and tools to deliver seamless, context-aware assistance across our product suite.
- Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments.
- Apply MLOps and LLMOps best practices to deploy and monitor machine learning models and agentic workflows in production. Implement comprehensive evaluation and observability strategies for the CoPilot.
- Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
- Collaborate with cloud architects and security analysts to develop cloud-native security solutions across platforms like AWS, Azure, or GCP.
- Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications.
- Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
- Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
- Drive innovation by integrating the latest AI/ML techniques into security products and services.
- Mentor junior engineers and provide technical leadership across projects.

Required Skills And Experience

AI/ML Expertise
- Has built and deployed a multi-agent or agentic RAG workflow in production.
- Expertise in prompt engineering patterns such as chain of thought, ReAct, and zero/few-shot prompting.
- Experience with LangGraph, AutoGen, AWS Bedrock, Pydantic AI, or CrewAI.
- Strong understanding of MLOps practices and tools (e.g., SageMaker, MLflow, Kubeflow, Airflow, Dagster).
- Experience with evaluation and observability tools like Langfuse, Arize Phoenix, or LangSmith.

Data Engineering
- Proficiency in working with vector databases such as pgvector, Pinecone, and Weaviate.
- Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).

Software Engineering
- Expertise in Python, with experience in one other language (C++/Java/Go), for data and ML solution development.
- Expertise in scalable system design and performance optimization for high-throughput applications.
- Experience building and consuming MCP clients and servers.
- Experience with asynchronous programming, including WebSockets, FastAPI, and Sanic.

Good-to-Have Skills And Experience

AI/ML Expertise
- Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection.
- Experience with AI frameworks like PyTorch, TensorFlow, and scikit-learn.

Data Engineering
- Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing.
- Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery.
- Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.
- Graph database knowledge is a plus.

Cloud and Security Knowledge
- Strong understanding of cloud platforms (AWS, Azure, GCP) and their services.
- Experience with network security concepts, extended detection and response, and threat modeling.
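The asynchronous programming expected above is, at its core, concurrent fan-out over I/O-bound calls. A minimal asyncio sketch; `fetch` is a hypothetical stand-in for a network call (to a tool, shard, or telemetry source), so total time is roughly the slowest call rather than the sum of delays.

```python
import asyncio

async def fetch(source: str, delay: float) -> str:
    await asyncio.sleep(delay)  # simulate network latency
    return f"{source}:ok"

async def main() -> list[str]:
    # Launch all calls concurrently; gather preserves argument order.
    return await asyncio.gather(
        fetch("logs", 0.02), fetch("traffic", 0.01), fetch("telemetry", 0.03)
    )

results = asyncio.run(main())
print(results)  # prints ['logs:ok', 'traffic:ok', 'telemetry:ok']
```

FastAPI and Sanic build on this same event-loop model, which is why the listing groups them with asynchronous programming.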
Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation, and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.
Posted 2 weeks ago
4.0 - 6.0 years
15 - 20 Lacs
Bengaluru
Work from Office
At MAXIMUS, we are growing our Digital Solutions team to better serve our organization and our clients in the government, health, and human services space. We believe that great outcomes define our success. We like to turn bold ideas into delightful solutions. We use methodology grounded in design thinking, lean, and agile to help solve complicated problems in a cost-effective, rapid, and precise manner. As part of our India-based AI organization, this position is responsible for the design, development, implementation, and maintenance of AWS Textract-based solutions.

The candidate will have the following experience:
- 4+ years of experience in Python development.
- Proven experience with Amazon Textract, including hands-on experience in integrating and optimizing Textract-based solutions.
- Strong knowledge of AWS services, including S3, Lambda, IAM, and DynamoDB.
- Experience with RESTful APIs and microservices architecture.
- Familiarity with document processing and OCR technologies.
- Solid understanding of software development best practices, including version control, testing, and code review processes.
- Strong problem-solving skills and attention to detail.
- Experience with other AWS AI/ML services like SageMaker, Rekognition, Comprehend, and Bedrock.
- Familiarity with containerization and orchestration tools such as Docker and Kubernetes.
- Understanding of data privacy and security best practices.
- Experience with Agile/Scrum methodologies.
- Excellent communication and collaboration skills.

The ideal candidate will demonstrate:
- A good mix of hands-on experience in the above areas.
- Strong analytical skills and experience working with business stakeholders to analyze business requirements.
- The ability to quickly pick up new technology and present its value proposition.
- Excellent communication and technical presentation skills.

Essential Duties and Responsibilities:
- Develop and maintain Python-based applications and scripts that utilize Amazon Textract for document data extraction.
- Integrate Textract with other AWS services such as S3, Lambda, and DynamoDB.
- Optimize and scale Textract processing workflows to handle large volumes of documents efficiently.
- Collaborate with data engineers and product managers to refine document processing solutions.
- Implement error handling and monitoring to ensure the reliability of data extraction processes.
- Write and maintain comprehensive documentation for the developed solutions.
- Stay updated on the latest features and best practices related to Amazon Textract and other AWS technologies.
- Collaborate with business analysts, stakeholders, and other team members to gather requirements, define project scope, and ensure alignment with business objectives.
- Work closely with QA teams to ensure thorough testing and validation of Appian applications.
- Ensure production issues are resolved in a timely manner.
- Work closely with Maximus onsite architects and delivery leads on solutioning for new projects and creating work estimates.
- Stay online and frequently collaborate with onsite stakeholders using Maximus MS Teams during the first half of the day in EST.
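The error-handling duty above typically means retrying throttled or failed extraction calls with backoff. A minimal sketch: `flaky_extract` is a hypothetical stand-in for a Textract API call (e.g., made via boto3), stubbed here so the control flow runs as-is; the response shape loosely mirrors Textract's block-based output but is illustrative.

```python
import time

def with_retries(call, attempts=3, base_delay=0.01):
    """Invoke `call`, retrying on failure with exponential backoff."""
    for attempt in range(attempts):
        try:
            return call()
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error for monitoring
            time.sleep(base_delay * 2 ** attempt)

failures = {"count": 2}  # stub fails twice, then succeeds

def flaky_extract():
    if failures["count"] > 0:
        failures["count"] -= 1
        raise RuntimeError("throttled")
    return {"Blocks": [{"BlockType": "LINE", "Text": "Invoice #1234"}]}

result = with_retries(flaky_extract)
print(result["Blocks"][0]["Text"])  # prints Invoice #1234
```

In an S3/Lambda workflow, documents that exhaust their retries would land in a dead-letter queue rather than be silently dropped.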
Posted 2 weeks ago
4.0 - 8.0 years
4 - 8 Lacs
Hyderabad, Telangana, India
On-site
Job Description

Roles & Responsibilities:
- Hands-on experience in the implementation and performance tuning of Kinesis, Kafka, Spark, or similar technologies.
- Hands-on experience with the AWS technology stack and the AWS AI stack, including AWS SageMaker and MLOps.
- Experience in Python and Python frameworks (Django, Flask, Bottle) via various IDEs such as Jupyter; experience with PyTorch, Java/.NET, and other open-source libraries; building and designing REST APIs, etc.
- DevOps/deployment automation using Terraform and Jenkins.
- Knowledge of software design patterns/architectures such as microservices, layered pattern, etc.
- Passionate teammate who understands and respects personal and cultural differences.
- Ability to work under pressure and be highly adaptable.
- Strong written and communication skills for collaboration with various teams and upper management.
- Solid analytical skills, especially in translating business requirements into technical designs, with a continuous focus on aligning the technical roadmap with the immediate and long-term business strategy.
- Able to adapt to and embrace change, and support business strategy and vision.

Requirements / Qualifications:
- Bachelor's/Master's or PhD in Computer Science, Physics, Engineering, or Math.
- 4-8 years of total software development and data platform implementation experience.
- Hands-on experience working on large-scale data science/data analytics projects.
- Experience implementing AWS services in a variety of distributed computing enterprise environments.
- Experience with at least one of the modern distributed machine learning and deep learning frameworks such as TensorFlow, PyTorch, MXNet, Caffe, or Keras.
- Experience building large-scale machine learning infrastructure that has been successfully delivered to customers.
- 3+ years of experience developing cloud software services, and an understanding of design for scalability, performance, and reliability.
- Ability to prototype and evaluate applications and interaction methodologies.
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Purpose
Manage electro-mechanical and fire & safety operations to ensure quality and deliverables in a timely and cost-effective manner at all Bangalore office locations. The role is also responsible for building, deploying, and scaling machine learning and AI solutions across GMR's verticals. It will build and manage advanced analytics initiatives, predictive engines, and GenAI applications, with a focus on business outcomes, model performance, and intelligent automation. Reporting to the Head of Automation & AI, you will operate in a high-velocity, product-oriented environment with direct visibility of impact across airports, energy, infrastructure, and enterprise functions.

ORGANISATION CHART

Key Accountabilities (with Key Performance Indicators)

AI & ML Development
- Build and deploy models using supervised, unsupervised, and reinforcement learning techniques for use cases such as forecasting, predictive scenarios, dynamic pricing and recommendation engines, and anomaly detection, with exposure to broad enterprise functions and businesses.
- Lead development of models, NLP classifiers, and GenAI-enhanced prediction engines.
- Design and integrate LLM-based features such as prompt pipelines, fine-tuned models, and inference architecture using Gemini, Azure OpenAI, Llama, etc.
- KPI: Program plan vs. actuals.

End-to-End Solutioning
- Translate business problems into robust data science pipelines with an emphasis on accuracy, explainability, and scalability.
- Own the full ML lifecycle, from data ingestion and feature engineering to model training, evaluation, deployment, retraining, and drift management.
- KPI: Program plan vs. actuals.

Cloud, ML & Data Engineering
- Deploy production-grade models using AWS, GCP, or Azure AI platforms and orchestrate workflows using tools like Step Functions, SageMaker, Lambda, and API Gateway.
- Build and optimise ETL/ELT pipelines, ensuring smooth integration with BI tools (Power BI, Qlik Sense, or similar) and business systems.
Data compression and familiarity with cloud finops will be an advantage, have used some tools like kafka, apache airflow or similar 100% compliance to processes KEY ACCOUNTABILITIES - Additional Details EXTERNAL INTERACTIONS Consulting and Management Services provider IT Service Providers / Analyst Firms Vendors INTERNAL INTERACTIONS GCFO and Finance Council, Procurement council, IT council, HR Council (GHROC) GCMO/ BCMO FINANCIAL DIMENSIONS Other Dimensions EDUCATION QUALIFICATIONS Engineering Relevant Experience 5 - 8years of hands-on experience in machine learning, AI engineering, or data science, including deploying models at scale. Strong programming and modelling skills in some like Python, SQL, and ML frameworks like scikit-learn, TensorFlow, XGBoost, PyTorch. Demonstrated ability to build models using supervised, unsupervised, and reinforcement learning techniques to solve complex business problems. Technical & Platform Skills Proven experience with cloud-native ML tools: AWS SageMaker, Azure ML Studio, Google AI Platform. Familiarity with DevOps and orchestration tools: Docker, Git, Step Functions, Lambda,Google AI or similar Comfort working with BI/reporting layers, testing, and model performance dashboards. Mathematics and Statistics Linear algebra, Bayesian method, information theory, statistical inference, clustering, regression etc Collaborate with Generative AI and RPA teams to develop intelligent workflows Participate in rapid prototyping, technical reviews, and internal capability building NLP and Computer Vision Knowledge of Hugging Face Transformers, Spacy or similar NLP tools YoLO, Open CV or similar for Computer vision. COMPETENCIES Personal Effectiveness Social Awareness Entrepreneurship Problem Solving & Analytical Thinking Planning & Decision Making Capability Building Strategic Orientation Stakeholder Focus Networking Execution & Results Teamwork & Interpersonal influence
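As a concrete illustration of the anomaly-detection use case named above, here is a minimal sketch (not GMR's actual stack; the data and parameters are invented for illustration) using scikit-learn's IsolationForest, trained on "normal" readings and then used to flag points far outside that distribution:

```python
# Minimal anomaly-detection sketch with scikit-learn's IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))    # routine readings
outliers = rng.uniform(low=6.0, high=8.0, size=(10, 2))   # far outside the cluster

# contamination is the expected fraction of anomalies in incoming data
model = IsolationForest(contamination=0.02, random_state=0).fit(normal)

flags = model.predict(outliers)  # -1 = anomaly, +1 = normal
```

In production such a model would be retrained on a schedule and monitored for drift, as the accountabilities above describe.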
Posted 2 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Python Developer – AI Engineer
Location: Hyderabad, India
Experience: 3–4 Years
Employment Type: Full-Time

Job Summary:
We are seeking a dynamic and results-driven Python Developer with hands-on experience in AI/ML, DevOps practices, and AWS cloud services. You will be responsible for developing scalable backend services, building and deploying ML models, and supporting CI/CD pipelines in cloud-based environments. This role offers the opportunity to work on high-impact, intelligent systems within a fast-paced and innovative team in Hyderabad.

Key Responsibilities:
- Design, develop, and maintain Python-based backend applications.
- Implement and manage DevOps pipelines for model and application deployment using CI/CD tools.
- Utilize AWS services such as EC2, Lambda, S3, SageMaker, and CloudFormation for deployment and monitoring.
- Collaborate with data scientists, DevOps engineers, and cloud architects to integrate models into production.
- Write clean, reusable, and well-documented code.
- Ensure performance, security, and scalability of applications in cloud environments.
- Monitor production systems and troubleshoot issues when necessary.

Required Skills:
- 3–4 years of experience in Python development.
- Strong understanding of AI/ML concepts, model development, and deployment.
- Hands-on experience with machine learning libraries like Pandas, NumPy, scikit-learn, TensorFlow, or PyTorch.
- Experience with DevOps tools such as Jenkins, Git, Docker, Kubernetes, or Terraform.
- Proficiency in AWS cloud services (e.g., EC2, Lambda, S3, SageMaker).
- Knowledge of RESTful APIs and microservices architecture.
- Familiarity with CI/CD and agile development processes.
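One small step in the "build and deploy ML models" workflow this role describes is persisting a trained model as an artifact and reloading it for serving. A minimal sketch, assuming scikit-learn and joblib are available (the dataset and filenames here are illustrative, not part of the posting):

```python
# Train a model, write it to a build directory, and reload it as a serving
# process would before pushing to S3/SageMaker.
import tempfile
from pathlib import Path

import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Persist the artifact, then load it back as the inference side would.
artifact = Path(tempfile.mkdtemp()) / "model.joblib"
joblib.dump(model, artifact)
restored = joblib.load(artifact)
```

A CI/CD pipeline would typically run a round-trip check like this before promoting the artifact to a deployment stage.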
Posted 2 weeks ago