5.0 years
0 Lacs
Noida
Remote
Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.

Job Description
Minimum Experience: 5-8 years
Location: Remote
Shift timings: 6 pm to 3 am (Night Shift)
Experience with data drift is important.

Engagement & Project Overview
An AI model trainer brings specialised knowledge in developing and fine-tuning machine learning models and can ensure that models are accurate, efficient, and tailored to specific needs. Hiring an AI model trainer and tester can significantly enhance our data management and analytics capabilities.

Responsibilities
1. Expertise in Model Development: Develop and fine-tune machine learning models. Ensure models are accurate, efficient, and tailored to our specific needs.
2. Quality Assurance: Rigorously evaluate models to identify and rectify errors. Maintain the integrity of our data-driven decisions through high performance and reliability.
3. Efficiency and Scalability: Streamline processes to reduce time-to-market. Scale AI initiatives and ML engineering skills effectively with dedicated model training and testing.
4. Production ML Monitoring & MLOps: Implement and maintain model monitoring pipelines to detect data drift, concept drift, and model performance degradation. Set up alerting and logging systems using tools such as Evidently AI, WhyLabs, Prometheus + Grafana, or cloud-native solutions (AWS SageMaker Model Monitor, GCP Vertex AI, Azure Monitor). Collaborate with teams to integrate monitoring into CI/CD pipelines, using platforms like Kubeflow, MLflow, Airflow, and Neptune.ai. Define and manage automated retraining triggers and model versioning strategies. Ensure observability and traceability across the ML lifecycle in production environments.

Qualifications
5+ years of experience in the respective field. Proven experience in developing and fine-tuning machine learning models. Strong background in quality assurance and model testing. Ability to streamline processes and scale AI initiatives. Innovative mindset with a keen understanding of industry trends. License/Certification/Registration
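The data-drift detection described above is commonly implemented with a metric like the Population Stability Index (PSI). A minimal, tool-free sketch for intuition only: the role's actual stack would be Evidently AI, WhyLabs, or SageMaker Model Monitor, and the 0.1/0.25 thresholds are a common industry rule of thumb, not taken from this posting.

```python
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, n_bins: int = 10) -> float:
    """Population Stability Index between a reference and a current sample."""
    # Interior bin edges from reference quantiles, so each bin holds ~1/n_bins
    # of the reference data; searchsorted then assigns every value to a bin.
    edges = np.quantile(reference, np.linspace(0, 1, n_bins + 1))[1:-1]
    ref_frac = np.bincount(np.searchsorted(edges, reference), minlength=n_bins) / len(reference)
    cur_frac = np.bincount(np.searchsorted(edges, current), minlength=n_bins) / len(current)
    eps = 1e-6  # avoid log(0) for empty bins
    ref_frac, cur_frac = ref_frac + eps, cur_frac + eps
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 50_000)  # feature distribution at training time
shifted = rng.normal(0.8, 1.0, 50_000)   # production distribution after a mean shift

print(psi(baseline, baseline) < 0.1)   # True: identical samples are stable
print(psi(baseline, shifted) > 0.25)   # True: a 0.8-sigma shift reads as significant drift
```

A monitoring pipeline would compute this per feature on a schedule and fire an alert (or a retraining trigger) when the index crosses the chosen threshold.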
Posted 3 weeks ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Technical Skills and Competencies

Must-have technical skills:
• Overall 8+ years of experience, of which 4+ years are in AI, ML, Gen AI and related technologies
• Proven track record of leading and scaling AI/ML teams and initiatives
• Strong understanding and hands-on experience in AI, ML, Deep Learning, and Generative AI concepts and applications
• Expertise in ML frameworks such as PyTorch and/or TensorFlow
• Experience with ONNX Runtime, model optimization and hyperparameter tuning
• Solid experience with DevOps, SDLC, CI/CD, and MLOps practices. DevOps/MLOps tech stack: Docker, Kubernetes, Jenkins, Git, CI/CD, RabbitMQ, Kafka, Spark, Terraform, Ansible, Prometheus, Grafana, ELK stack
• Experience in production-level deployment of AI models at enterprise scale
• Proficiency in data preprocessing, feature engineering, and large-scale data handling
• Expertise in image and video processing, object detection, image segmentation, and related CV tasks
• Proficiency in text analysis, sentiment analysis, language modeling, and other NLP applications
• Experience with speech recognition, audio classification, and general signal processing techniques
• Experience with RAG, VectorDB, GraphDB and Knowledge Graphs
• Extensive experience with major cloud platforms (AWS, Azure, GCP) for AI/ML deployments; proficiency in using and integrating cloud-based AI services and tools (e.g., AWS SageMaker, Azure ML, Google Cloud AI)

Soft skills:
• Strong leadership and team management skills
• Excellent verbal and written communication skills
• Strategic thinking and problem-solving abilities
• Adaptability to the rapidly evolving AI/ML landscape
• Strong collaboration and interpersonal skills
• Strong understanding of industry dynamics and ability to translate market needs into technological solutions
• Demonstrated ability to foster a culture of innovation and creative problem-solving

Candidate Profile and Competencies
• Lead and manage the AI/ML Center of Excellence (CoE), setting strategic direction and goals
• 100% hands-on experience is a must in the development and deployment of production-level AI models at enterprise scale (build-vs-buy decision maker)
• Drive innovation in AI/ML applications across various business domains and modalities (vision, language, audio)
• Hire, train, and manage a team of AI experts to run the CoE effectively
• Collaborate with sales teams to identify opportunities and drive revenue growth through AI/ML solutions
• Develop and deliver training programs to upskill teams across the organization in AI/ML technologies
• Oversee the implementation and maintenance of best practices in MLOps, DevOps, and CI/CD for AI/ML projects
• Continuously identify and evaluate emerging tools and technologies in the AI/ML space for potential adoption
• Provide thought leadership through whitepapers, conference presentations, and industry engagements to position the organization as an AI/ML leader

Experience: 8+ years
Posted 3 weeks ago
15.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
HCLTech is hiring Data and AI Principal/Senior Manager (Generative AI) for Noida. HCLTech is a global technology company, home to more than 218,000 people across 59 countries, delivering industry-leading capabilities centered around digital, engineering, cloud and AI, powered by a broad portfolio of technology services and products. We work with clients across all major verticals, providing industry solutions for Financial Services, Manufacturing, Life Sciences and Healthcare, Technology and Services, Telecom and Media, Retail and CPG, and Public Services. Consolidated revenues as of 12 months ending September 2024 totaled $13.7 billion. To learn how we can supercharge progress for you, visit hcltech.com.

Key Responsibilities:

Hands-on Technical Leadership & Oversight:
Architecting Scalable Systems: Lead the design of AI and GenAI solutions, machine learning pipelines, and data architectures that ensure performance, scalability, and resilience.
Hands-on Development: Actively contribute to coding, code reviews, solution design, and hands-on troubleshooting for critical components of GenAI, ML, and data pipelines.
Cross-Functional Collaboration: Work with account teams, client partners and domain SMEs to ensure alignment between technical solutions and business needs.
Team Leadership: Mentor and guide engineers across various functions including AI, GenAI, Full Stack, Data Pipelines, DevOps, and Machine Learning, fostering a collaborative and high-performance team environment.

Solution Design & Architecture:
System & API Architecture: Design and implement microservices architectures, RESTful APIs, cloud-based services, and machine learning models that integrate seamlessly into GenAI and data platforms.
AI, GenAI, Agentic AI Integration: Lead the integration of AI, GenAI, and Agentic applications, NLP models, and large language models (e.g., GPT, BERT) into scalable production systems.
Data Pipelines: Architect ETL pipelines, data lakes, and data warehouses using industry-leading tools like Apache Spark, Airflow, and Google BigQuery.
Cloud Infrastructure: Drive the deployment and scaling of solutions using cloud platforms like AWS, Azure, GCP, and other relevant cloud-native technologies.

Machine Learning & AI Solutions:
ML Integration: Lead the design and deployment of machine learning models using frameworks like PyTorch, TensorFlow, scikit-learn, and spaCy into end-to-end production workflows, including the building of SLMs.
Prompt Engineering: Develop and optimize prompt engineering techniques for GenAI models to ensure accurate, relevant, and reliable output.
Model Monitoring: Implement best practices for ML model performance monitoring, continuous training, and model versioning in production environments.

DevOps & Cloud Infrastructure:
CI/CD Pipeline Leadership: Good working knowledge of CI/CD pipelines, leveraging tools like Jenkins, GitLab CI, Terraform, and Ansible for automating the build, test, and deployment processes.
Infrastructure Automation: Lead efforts in Infrastructure-as-Code and ensure automated provisioning of infrastructure through tools like Terraform, CloudFormation, Docker, and Kubernetes.
Cloud Management: Ensure robust integration with cloud platforms such as AWS, Azure, GCP, with experience in specific services such as AWS Lambda, Azure ML, Google BigQuery, and others.

Cross-Team Collaboration:
Stakeholder Communication: Act as the key technical liaison between engineering teams and non-technical stakeholders, ensuring technical solutions meet business and user requirements.
Agile Development: Promote Agile methodologies and conduct solution and code design reviews to deliver milestones efficiently while ensuring high-quality code.

Performance Optimization & Scalability:
Optimization: Lead performance tuning and optimization for high-traffic applications, especially around machine learning models, data storage, ETL processes, and API latency.
Scaling: Ensure solutions scale seamlessly with growth, leveraging cloud-native tools and load-balancing strategies such as AWS Auto Scaling, Azure Load Balancer, and the Kubernetes Horizontal Pod Autoscaler.

Required Qualifications:
15+ years of hands-on technical experience in software engineering, with at least 5+ years in a leadership role managing cross-functional teams, including AI, GenAI, machine learning, data engineering, and cloud infrastructure.
Hands-on experience in designing and developing large-scale systems, including AI, GenAI, Agentic AI, API architectures, data systems, ML pipelines, and cloud-native applications.
Strong experience with cloud platforms such as AWS, GCP, Azure, with a focus on cloud services related to ML, AI, and data engineering.
Programming Languages: Proficiency in Python and Flask/Django/FastAPI. Experience with API development (RESTful APIs, GraphQL).
Machine Learning & AI: Extensive experience in building and deploying ML models using TensorFlow, PyTorch, scikit-learn, and spaCy, with hands-on experience in integrating them into AI, GenAI and Agentic frameworks like LangChain and MCP.
Data Engineering: Familiarity with ETL pipelines, data lakes, data warehouses (e.g., AWS Redshift, Google BigQuery, PostgreSQL), and data processing tools like Apache Spark, Airflow, and Kafka.
DevOps & Automation: Strong expertise in CI/CD pipelines, containerization (Docker, Kubernetes), Infrastructure-as-Code (Terraform, CloudFormation, Ansible). Experience with API security, OAuth, and rate limiting for high-traffic, secure systems.

Tools & Technologies:
Cloud Platforms: AWS, GCP, Azure, Google Cloud AI, AWS SageMaker, Azure Machine Learning.
Data Engineering: Apache Kafka, Apache Spark, Airflow, Presto, Hadoop, Google BigQuery, AWS Redshift.
Machine Learning: TensorFlow, PyTorch, scikit-learn, spaCy, Hugging Face, OpenAI GPT.
CI/CD & DevOps: GitLab CI, Jenkins, Docker, Kubernetes, Terraform, Ansible, Helm.
API Frameworks: FastAPI, Flask, GraphQL, RESTful APIs.
Version Control: Git, GitHub, GitLab.

Interested candidates, kindly share your profile at paridhnya_dhawankar@hcltech.com with the below details:
Overall Experience:
Skills:
Current and Preferred Location:
Current and Expected CTC:
Notice Period:
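As an illustration of the retrieval step behind the RAG architectures this role works with, here is a minimal, framework-free sketch: rank documents by cosine similarity between embedding vectors. A production system would use a vector database and a real embedding model; the 3-dimensional vectors below are made up for demonstration.

```python
import numpy as np

def retrieve(query_vec: np.ndarray, doc_vecs: np.ndarray, top_k: int = 2) -> list[int]:
    """Return indices of the top_k documents by cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q  # cosine similarity of each (normalized) document to the query
    return [int(i) for i in np.argsort(scores)[::-1][:top_k]]

# Toy 3-dimensional "embeddings"; in practice these come from an embedding model.
docs = np.array([
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
])
query = np.array([1.0, 0.05, 0.0])
print(retrieve(query, docs))  # [0, 1]: documents 0 and 1 are closest to the query
```

The retrieved documents are then stuffed into the LLM prompt as context, which is where the prompt-engineering responsibilities above come in.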
Posted 3 weeks ago
3.0 years
0 Lacs
Jaipur, Rajasthan, India
Remote
Tiger Analytics is a global AI and analytics consulting firm. With data and technology at the core of our solutions, our 4000+ tribe is solving problems that eventually impact the lives of millions globally. Our culture is modeled around expertise and respect, with a team-first mindset. Headquartered in Silicon Valley, you'll find our delivery centers across the globe and offices in multiple cities across India, the US, UK, Canada, and Singapore, including a substantial remote global workforce. We're Great Place to Work-Certified™. Working at Tiger Analytics, you'll be at the heart of an AI revolution. You'll work with teams that push the boundaries of what is possible and build solutions that energize and inspire.

Work Location: The base location is Delhi/NCR; however, you will be required to work regularly in Jaipur during the initial period.

About the role: This pivotal role focuses on the end-to-end development, implementation, and ongoing monitoring of both application and behavioral scorecards within our dynamic retail banking division. While application scorecard development will be the primary area of focus and expertise required, you will also have scope to contribute to behavioral scorecard initiatives. The primary emphasis will be on our unsecured lending portfolio, including personal loans, overdrafts, and particularly credit cards. You will be instrumental in enhancing credit risk management capabilities, optimizing lending decisions, and driving profitable growth by leveraging advanced analytical techniques and robust statistical models. This role requires a deep understanding of the credit lifecycle, regulatory requirements, and the ability to translate complex data insights into actionable business strategies within the Indian banking context.
Key Responsibilities:

End-to-End Scorecard Development (Application & Behavioral): Lead the design, development, and validation of new application scorecards and behavioral scorecards from scratch, specifically tailored for the Indian retail banking landscape and unsecured portfolios (personal loans, credit cards) across ETB and NTB segments; prior experience in this area is required. Utilize advanced statistical methodologies and machine learning techniques, leveraging Python for data manipulation, model building, and validation. Ensure robust model validation, back-testing, stress testing, and scenario analysis to ascertain model robustness, stability, and predictive power, adhering to RBI guidelines and internal governance.

Cloud-Native Model Deployment & MLOps: Drive the deployment of developed scorecards into production environments on AWS, collaborating with engineering teams to integrate models into credit origination and decisioning systems. Implement and manage MLOps practices for continuous model monitoring, re-training, and version control within the AWS ecosystem.

Data Strategy & Feature Engineering: Proactively identify, source, and analyze diverse datasets (e.g., internal bank data, credit bureau data from CIBIL, Experian, Equifax) to derive highly predictive features for scorecard development; prior experience in this area is required. Address data quality challenges, ensuring data integrity and suitability for model inputs in an Indian banking context.

Performance Monitoring & Optimization: Establish and maintain comprehensive model performance monitoring frameworks, including monthly/quarterly tracking of key performance indicators (KPIs) such as the Gini coefficient, KS statistic, and portfolio vintage analysis. Identify triggers for model recalibration or redevelopment based on performance degradation, regulatory changes, or evolving market dynamics.
Required Qualifications, Capabilities and Skills:

Experience: 3-10 years of hands-on experience in credit risk model development, with a strong focus on application scorecard development and significant exposure to behavioral scorecards, preferably within the Indian banking sector, applying concepts including roll-rate analysis, swap-set analysis, and reject inferencing. Demonstrated prior experience in model development and deployment in AWS environments, with an understanding of cloud-native MLOps principles. Proven track record in building and validating statistical models (e.g., logistic regression, GBDT, random forests) for credit risk.

Education: Bachelor's or Master's degree in a quantitative discipline such as Mathematics, Statistics, Physics, Computer Science, Financial Engineering, or a related field.

Technical Skills: Exceptional hands-on expertise in Python (Pandas, NumPy, scikit-learn, SciPy) for data manipulation, statistical modeling, and machine learning. Proficiency in SQL for data extraction and manipulation. Familiarity with AWS services relevant to data science and machine learning (e.g., S3, EC2, SageMaker, Lambda). Knowledge of SAS is a plus, but Python is the primary requirement.

Analytical & Soft Skills: Deep understanding of the end-to-end lifecycle of application and behavioral scorecard development, from data sourcing to deployment and monitoring. Strong understanding of credit risk principles, the credit lifecycle, and regulatory frameworks pertinent to Indian banking (e.g., RBI guidelines on credit risk management and model risk management). Excellent analytical, problem-solving, and critical thinking skills. Ability to communicate complex technical concepts effectively to both technical and non-technical stakeholders.
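The Gini coefficient and KS statistic mentioned in the monitoring responsibilities above can be computed directly from scores and outcomes. A minimal NumPy sketch for intuition (a real monitoring framework would run this per segment and per vintage): the Gini here is the common credit-scoring form 2·AUC − 1, and KS is the maximum gap between the cumulative score distributions of good and bad accounts.

```python
import numpy as np

def auc(scores: np.ndarray, bad: np.ndarray) -> float:
    """Rank-based AUC: P(a random bad account scores higher than a random good one)."""
    order = np.argsort(scores)
    ranks = np.empty_like(scores, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_bad = bad.sum()
    n_good = len(bad) - n_bad
    return float((ranks[bad == 1].sum() - n_bad * (n_bad + 1) / 2) / (n_bad * n_good))

def gini(scores: np.ndarray, bad: np.ndarray) -> float:
    return 2 * auc(scores, bad) - 1

def ks_statistic(scores: np.ndarray, bad: np.ndarray) -> float:
    """Max gap between cumulative score distributions of bad vs good accounts."""
    thresholds = np.sort(np.unique(scores))  # a loop over thresholds is fine for illustration
    cdf_bad = np.array([(scores[bad == 1] <= t).mean() for t in thresholds])
    cdf_good = np.array([(scores[bad == 0] <= t).mean() for t in thresholds])
    return float(np.max(np.abs(cdf_bad - cdf_good)))

# Toy data: higher score = riskier; here the bads separate perfectly from the goods.
scores = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9])
bad = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(gini(scores, bad))          # 1.0: perfect separation
print(ks_statistic(scores, bad))  # 1.0: perfect separation
```

In monthly tracking, a sustained drop in either metric against the development-sample baseline is a typical trigger for recalibration.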
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
AceVector Group Overview: AceVector Group focuses on tech-enabled retail, bringing together distribution channels, SaaS platforms, and consumer brands.
Snapdeal (marketplace): Leading value e-commerce marketplace focused on fashion, home, beauty and personal care products
Unicommerce (SaaS): Integrated SaaS platform for post-purchase experience management
Stellaro Brands (House of Brands): Leading value brands crafted for the needs of modern Indian shoppers
Shipway (logistics aggregator): All-in-one shipping aggregator platform for e-commerce businesses
Job Overview: We are looking for a Senior Data Scientist — someone who's ready to put ML models into action, not just experiments.
✨ What you'll work on:
✔ Build & deploy ML models across pricing, RTO prediction, recommendations, and more
✔ Work on real-time production systems serving millions of users
✔ Collaborate closely with business, product, and engineering teams
✨ What we're looking for:
🔸 3-5 years of end-to-end ML model development & deployment experience
🔸 Strong in Python, SQL, XGBoost, LightGBM, Random Forest, Neural Networks, and Deep Learning frameworks
🔸 Hands-on experience with real-time deployments on cloud ML platforms (AWS SageMaker, Databricks)
🔸 Exposure to LLMs and PySpark is a plus!
Ready to make an impact? Let's talk.
Posted 3 weeks ago
15.0 years
0 Lacs
India
On-site
About Netskope
Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About the team: We are a distributed team of passionate engineers dedicated to continuous learning and building impactful products. Our core product is built from the ground up in Scala, leveraging the Lightbend stack (Play/Akka).

About Data Security Posture Management (DSPM): DSPM is designed to provide comprehensive data visibility and contextual security for the modern AI-driven enterprise. Our platform automatically discovers, classifies, and secures sensitive data across diverse environments including AWS, Azure, Google Cloud, Oracle Cloud, and on-premise infrastructure. DSPM is critical in empowering CISOs and security teams to enforce secure AI practices, reduce compliance risk, and maintain continuous data protection at scale. As a key member of the DSPM team, you will contribute to developing innovative and scalable systems designed to protect the exponentially increasing volume of enterprise data.
Our platform continuously maps sensitive data across all major cloud providers and on-prem environments, automatically detecting and classifying sensitive and regulated data types such as PII, PHI, financial, and healthcare information. It flags data at risk of exposure, exfiltration, or misuse and helps prioritize issues based on sensitivity, exposure, and business impact.

What you'll do:
• Drive the enhancement of Data Security Posture Management (DSPM) capabilities by enabling the detection of sensitive or risky data utilized in (but not limited to) training private LLMs or accessed by public LLMs.
• Improve the DSPM platform to extend support of the product to all major cloud infrastructures, on-prem deployments, and any new upcoming technologies.
• Provide technical leadership in all phases of a project, from discovery and planning through implementation and delivery.
• Contribute to product development: understand customer requirements and work with the product team to design, plan, and implement features.
• Support customers by investigating and fixing production issues.
• Help us improve our processes and make the right tradeoffs between agility, cost, and reliability.
• Collaborate with teams across geographies.

What you bring to the table:
• 15+ years of software development experience with enterprise-grade software.
• Must have experience in building scalable, high-performance cloud services.
• Expert coding skills in Scala or Java.
• Development on cloud platforms including AWS.
• Deep knowledge of databases and data warehouses (OLTP, OLAP).
• Analytical and troubleshooting skills.
• Experience working with Docker and Kubernetes.
• Ability to multitask and wear many hats in a fast-paced environment: this week you might lead the design of a new feature, next week you are fixing a critical bug or improving our CI infrastructure.
• Cybersecurity experience and adversarial thinking is a plus.
• Expertise in building REST APIs.
• Experience leveraging cloud-based AI services (e.g., AWS Bedrock, SageMaker), vector databases, and Retrieval-Augmented Generation (RAG) architectures is a plus.

Education: Bachelor's degree in Computer Science; Master's degree strongly preferred.

Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.
Posted 3 weeks ago
3.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: AI Engineer
Experience: 3-8 years
Notice Period: Immediate to 15 days
Location: Hyderabad

We are seeking a highly skilled AI/ML Engineer to join our dynamic team.

Job Description
Skills, Experience & Qualifications Required:
• Experience in building and deploying applications using AWS cloud technologies as an AI/ML engineer, or in at least one of the following areas: data engineering, cloud engineering, backend or full-stack engineering, MLOps, DevOps
• Data science and AI/ML pipelines on Amazon SageMaker, Bedrock or other cloud environments
• Experience designing APIs and microservices, and integrating AWS databases and backend services
• Skills in automation and virtualisation related to model deployment and scaling, and Infrastructure as Code with Terraform
• Familiarity with different microservice deployment methodologies, including A/B testing, blue-green and canary deployments
• Proficiency in Python or other programming languages used for developing data science and AI/ML models
• A commitment to ongoing learning & development

Desirable:
• Experience building AI/ML-driven customer-facing products in a fast-moving B2C e-commerce or marketplace environment
• Experience working with real-time data streams and data lakes
• Relevant knowledge in one or a few areas of Generative AI, LLMs, machine learning, data science, algorithms, statistics, optimisation, hypothesis testing

Regards,
ValueLabs
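The canary deployment methodology listed above can be illustrated with a small routing sketch (an assumption-laden example, not any specific team's implementation): send a small, sticky fraction of traffic to the new model version by hashing a stable request key, so a given user consistently hits the same version while the canary is evaluated.

```python
import hashlib

def route_model(user_id: str, canary_fraction: float = 0.05) -> str:
    """Deterministically route a user to 'canary' or 'stable' via a hash bucket."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "canary" if bucket < canary_fraction else "stable"

# Routing is sticky (same user, same version) and roughly 5% of users hit the canary.
versions = [route_model(f"user-{i}") for i in range(10_000)]
share = versions.count("canary") / len(versions)
print(route_model("user-42") == route_model("user-42"))  # True: deterministic routing
print(f"canary share: {share:.3f}")
```

Blue-green deployment differs in that 100% of traffic flips between two identical environments at once; A/B testing uses the same bucketing idea but compares business metrics rather than just stability.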
Posted 3 weeks ago
2.0 - 4.0 years
18 - 30 Lacs
Bengaluru
Work from Office
Roles and Responsibilities
• Design, develop, and deploy data science models using Python, AWS, and LLM (Large Language Model) technologies.
• Collaborate with cross-functional teams to identify business problems and design solutions that leverage Gen AI capabilities.
• Develop scalable data pipelines using the Bedrock framework to integrate various data sources into SageMaker model development.
• Conduct exploratory data analysis, feature engineering, and model evaluation to ensure high accuracy and reliability of results.
• Provide technical guidance on best practices for implementing machine learning algorithms in production environments.
• Analyze large, complex healthcare datasets, including electronic health records (EHR) and claims data.
• Develop statistical models for patient risk stratification, treatment optimization, population health management, and revenue cycle optimization.
• Build models for clinical decision support, patient outcome prediction, and care quality improvement.
• Create and maintain automated data pipelines for real-time analytics and reporting.
• Work with healthcare data standards (HL7 FHIR, ICD-10, CPT, SNOMED CT) and ensure regulatory compliance.
• Develop and deploy models in cloud environments while creating visualizations for stakeholders.
• Present findings and recommendations to cross-functional teams including clinicians, product managers, and executives.
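Working with the healthcare coding standards listed above often begins with simple data-quality checks on incoming claims. A hedged sketch: this validates only the general shape of an ICD-10-CM code (a letter, two alphanumerics, then an optional dot and up to four more alphanumerics), not membership in the official code set, and the pattern is a deliberate simplification.

```python
import re

# Simplified shape check for ICD-10-CM codes, e.g. "E11.9" (type 2 diabetes).
# Format-only validation: it does NOT confirm the code exists in the official set.
ICD10_PATTERN = re.compile(r"^[A-Z]\d[0-9A-Z](\.[0-9A-Z]{1,4})?$")

def looks_like_icd10(code: str) -> bool:
    return bool(ICD10_PATTERN.match(code.strip().upper()))

print(looks_like_icd10("E11.9"))    # True
print(looks_like_icd10("J45.909"))  # True
print(looks_like_icd10("hello"))    # False
```

In a claims pipeline, a filter like this flags malformed codes for review before rows reach feature engineering; full validation would look codes up against the published code tables.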
Posted 3 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About KnowBe4 KnowBe4, the provider of the world's largest security awareness training and simulated phishing platform, is used by tens of thousands of organizations around the globe. KnowBe4 enables organizations to manage the ongoing problem of social engineering by helping them train employees to make smarter security decisions, every day. Fortune has ranked us as a best place to work for women, for millennials, and in technology for four years in a row! We have been certified as a "Great Place To Work" in 8 countries, plus we've earned numerous other prestigious awards, including Glassdoor's Best Places To Work. Our team values radical transparency, extreme ownership, and continuous professional development in a welcoming workplace that encourages all employees to be themselves. Whether working remotely or in-person, we strive to make every day fun and engaging; from team lunches to trivia competitions to local outings, there is always something exciting happening at KnowBe4. Please submit your resume in English. Are you a forward-thinking data scientist, poised to lead with innovation? At KnowBe4, you'll be able to shape a career as distinctive as your expertise, supported by our global reach, inclusive ethos, and cutting-edge technology. As a Data Scientist, you'll be at the forefront of crafting impactful, data-driven solutions, collaborating with talented teams in a dynamic, fast-paced environment. Join us in creating an extraordinary path for your professional growth and making a meaningful impact in the working world. Data scientists design data modeling processes, create algorithms and predictive models to be used by software engineers for developing new and exciting products for KnowBe4’s customers, alongside other engineers in a fast-paced, agile development environment. 
Responsibilities Research, design, and implement machine learning and deep learning algorithms to solve complex problems. Communicate complex concepts and statistical models to non-technical audiences through data visualizations. Perform statistical analysis and use the results to improve models. Identify opportunities and formulate data science / machine learning projects to optimize business impact. Serve as a subject matter expert in data science and analytics research, and adopt new tooling and methodologies at KnowBe4. Manage the release, maintenance, and enhancement of machine learning solutions in a production environment via multiple deployment options such as APIs, embedded software, or stand-alone applications. Advise various teams on machine learning practices and ensure the highest quality and compliance standards for ML deployments. Design and develop cyber security awareness products and features using generative AI, machine learning, deep learning, and other data ecosystem technologies. Collaborate with cross-functional teams to identify data-related requirements, design appropriate NLP experiments, and conduct in-depth analyses to derive actionable insights from unstructured data sources. Stay updated with the latest advancements in machine learning, deep learning, and generative AI through self-learning and professional development. 
Requirements BS or equivalent plus 10 years' experience; MS or equivalent plus 5 years' experience; or Ph.D. or equivalent plus 4 years' experience. Expert working experience with programming languages like Python, R, and SQL. Solid understanding of statistics, probability, and machine learning. 10+ years of relevant experience in designing ML/DL/GenAI systems. Expertise in rolling out generative AI SaaS products and features. Expertise in the AWS ecosystem. Proficiency in machine learning algorithms and techniques, including supervised and unsupervised learning, classification, regression, clustering, and dimensionality reduction. Strong understanding and practical experience with deep learning frameworks such as TensorFlow or PyTorch. Ability to design, train, and optimize deep neural networks for various tasks like image recognition, natural language processing, and recommendation systems. Knowledge and experience in generative models like Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). Ability to create and use generative models for tasks such as image generation, text generation, and data synthesis. 
Exposure to LLMs, Transformers, and related technologies such as LangChain, LlamaIndex, Pinecone, SageMaker JumpStart, ChatGPT, AWS Bedrock, and Vertex AI. Strong data manipulation skills, including data cleaning, preprocessing, and feature engineering. Experience with data manipulation libraries like Pandas. Ability to create compelling data visualizations using tools like Matplotlib or Seaborn to communicate insights effectively. Proficiency in NLP techniques for text analysis, sentiment analysis, entity recognition, and topic modeling. Strong understanding of data classification, sensitivity, PII, and personal data modeling techniques. Experience in model evaluation and validation techniques, including cross-validation, hyperparameter tuning, and performance metrics selection. Proficiency in version control systems like Git for tracking and managing code changes. Strong communication skills to convey complex findings and insights to both technical and non-technical stakeholders. Ability to work collaboratively in cross-functional teams. Excellent problem-solving skills to identify business challenges and devise data-driven solutions. Nice To Have Experience in designing data pipelines and products for real-world applications Experience with modern/emerging scalable computing platforms and languages (e.g. Spark) Familiarity with big data technologies like Hadoop, Spark, and distributed computing frameworks for handling large datasets. Our Fantastic Benefits We offer company-wide bonuses based on monthly sales targets, employee referral bonuses, adoption assistance, tuition reimbursement, certification reimbursement, certification completion bonuses, and a relaxed dress code - all in a modern, high-tech, and fun work environment. For more details about our benefits in each office location, please visit www.knowbe4.com/careers/benefits. Note: An applicant assessment and background check may be part of your hiring procedure. 
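The model-evaluation skills listed above (cross-validation, hyperparameter tuning, performance-metric selection) can be sketched with scikit-learn; the dataset and parameter grid below are illustrative stand-ins, not anything specific to the role:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Toy stand-in for a real feature matrix.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# Grid search combines k-fold cross-validation with hyperparameter tuning:
# every candidate value of C is scored on 5 held-out folds.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # inverse regularization strength
    cv=5,
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Swapping `scoring` (e.g., to `"f1"` or `"roc_auc"`) is how the metric-selection skill above shows up in practice, since accuracy can mislead on imbalanced classes.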
Individuals seeking employment at KnowBe4 are considered without prejudice to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, sexual orientation or any other characteristic protected under applicable federal, state, or local law. If you require reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please visit www.knowbe4.com/careers/request-accommodation. No recruitment agencies, please.
Posted 3 weeks ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description: Data Scientist Job Summary We are seeking an innovative Data Scientist with 5-8 years of professional experience to join our SmartFM product team. This role will be pivotal in extracting actionable insights from complex operational data, leveraging advanced machine learning, deep learning, agentic workflows, and Large Language Models (LLMs) to optimize building operations. The ideal candidate will transform raw alarms and notifications into predictive models and intelligent recommendations that enhance facility efficiency and decision-making. Roles And Responsibilities Analyze large, complex datasets from various building devices (alarms, notifications, sensor data) to identify patterns, anomalies, and opportunities for operational optimization. Design, develop, and deploy machine learning and deep learning models to predict equipment failures, optimize energy consumption, and identify unusual operational behavior. Develop and implement agentic workflows to automate decision-making processes and trigger intelligent actions based on real-time data insights. Explore and integrate Large Language Models (LLMs) to interpret unstructured data (e.g., maintenance logs, technician notes) and generate natural language insights or automate reporting. Collaborate with Data Engineers to define data requirements, ensure data quality, and optimize data pipelines for machine learning applications. Work closely with Software Engineers to integrate developed models and intelligent agents into the React frontend and Node.js backend of the SmartFM platform. Evaluate and monitor the performance of deployed models, implementing strategies for continuous improvement and retraining. Communicate complex analytical findings and model insights clearly and concisely to technical and non-technical stakeholders. 
Stay abreast of the latest advancements in AI, ML, Deep Learning, Agentic AI, and LLMs, assessing their applicability to facility management challenges and advocating for their adoption. Contribute to the strategic roadmap for AI/ML capabilities within the SmartFM product. Required Technical Skills And Experience 5-8 years of professional experience in Data Science, Machine Learning Engineering, or a related analytical role. Strong proficiency in Python and its data science ecosystem (Pandas, NumPy, Scikit-learn, TensorFlow, Keras, PyTorch). Proven experience in developing and deploying Machine Learning models for predictive analytics, anomaly detection, and classification problems. Hands-on experience with Deep Learning frameworks and architectures for time series analysis, pattern recognition, or natural language processing. Demonstrated experience in designing and implementing Agentic Workflows or intelligent automation solutions. Practical experience working with Large Language Models (LLMs), including fine-tuning, prompt engineering, or integrating LLM APIs for specific use cases. Solid understanding of statistical modeling, experimental design, and A/B testing. Experience with querying and analyzing data from MongoDB and working with streaming data from Kafka. Familiarity with data ingestion processes, ideally involving IBM StreamSets. Experience with cloud-based ML platforms and services (e.g., AWS SageMaker, Azure ML, Google AI Platform). Additional Qualifications Proven expertise in written and verbal communication, adept at simplifying complex technical concepts for both technical and non-technical audiences. Strong problem-solving and analytical skills with a passion for extracting insights from data. Experienced in collaborating and communicating seamlessly with diverse technology roles, including data engineering, software development, and product management. 
Highly motivated to acquire new skills, explore emerging technologies, and stay updated on the latest trends in AI/ML and business needs. Domain knowledge in facility management, IoT, or building automation is a significant advantage. Education Requirements / Experience Master’s or Ph.D. degree in Computer Science, Data Science, Artificial Intelligence, Statistics, Mathematics, or a related quantitative field.
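As a hedged illustration of the "unusual operational behavior" detection the SmartFM role describes, a trailing z-score check over a sensor feed is one simple baseline; the window size, threshold, and simulated temperature feed are assumptions for illustration, not values from the posting:

```python
import numpy as np

def zscore_anomalies(readings, window=24, threshold=4.0):
    """Flag readings that deviate strongly from a trailing window of history."""
    readings = np.asarray(readings, dtype=float)
    flags = np.zeros(len(readings), dtype=bool)
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mu, sigma = hist.mean(), hist.std()
        # A point more than `threshold` standard deviations from the recent
        # mean is treated as anomalous.
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flags[i] = True
    return flags

rng = np.random.default_rng(1)
temps = rng.normal(21.0, 0.3, 200)  # simulated HVAC temperature stream
temps[150] = 35.0                   # injected fault / alarm condition
flags = zscore_anomalies(temps)
print(int(flags.sum()))
```

Production systems would typically replace this with a learned model (e.g., an isolation forest or a sequence model) plus the drift monitoring the posting mentions, but the trailing-baseline idea is the same.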
Posted 3 weeks ago
0.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions, from large-scale models onward, tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Inviting applications for the role of Principal Consultant - AWS AI/ML Engineer. Responsibilities Design, develop, and deploy scalable AI/ML solutions using AWS services such as Amazon Bedrock, SageMaker, Amazon Q, Amazon Lex, Amazon Connect, and Lambda. Implement and optimize large language model (LLM) applications using Amazon Bedrock, including prompt engineering, fine-tuning, and orchestration for specific business use cases. Build and maintain end-to-end machine learning pipelines using SageMaker for model training, tuning, deployment, and monitoring. Integrate conversational AI and virtual assistants using Amazon Lex and Amazon Connect, with seamless user experiences and real-time inference. Leverage AWS Lambda for event-driven execution of model inference, data preprocessing, and microservices. 
Design and maintain scalable and secure data pipelines and AI workflows, ensuring efficient data flow to and from Redshift and other AWS data stores. Implement data ingestion, transformation, and model inference for structured and unstructured data using Python and AWS SDKs. Collaborate with data engineers and scientists to support development and deployment of ML models on AWS. Monitor AI/ML applications in production, ensuring optimal performance, low latency, and cost efficiency across all AI/ML services. Ensure implementation of AWS security best practices, including IAM policies, data encryption, and compliance with industry standards. Drive the integration of Amazon Q for enterprise AI-based assistance and automation across internal processes and systems. Participate in architecture reviews and recommend best-fit AWS AI/ML services for evolving business needs. Stay up to date with the latest advancements in AWS AI services, LLMs, and industry trends to inform technology strategy and innovation. Prepare documentation for ML pipelines, model performance reports, and system architecture. Qualifications we seek in you: Minimum Qualifications Proven hands-on experience with Amazon Bedrock, SageMaker, Lex, Connect, Lambda, and Redshift. Strong knowledge and application experience with Large Language Models (LLMs) and prompt engineering techniques. Experience building production-grade AI applications using AWS AI or other generative AI services. Solid programming experience in Python for ML development, data processing, and automation. Proficiency in designing and deploying conversational AI/chatbot solutions using Lex and Connect. Experience with Redshift for data warehousing and analytics integration with ML solutions. Good understanding of AWS architecture, scalability, availability, and security best practices. Familiarity with AWS development, deployment, and monitoring tools (CloudWatch, CodePipeline, etc.). 
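The event-driven inference responsibility above can be sketched as a minimal Lambda-style handler in Python; the payload shape and the stand-in scoring function are assumptions for illustration, and a real handler would load and call an actual model artifact (for example, pulled from S3 at cold start):

```python
import json

def _score(features):
    # Stand-in for a loaded model; a real handler would call model.predict().
    return sum(features) / len(features)

def handler(event, context=None):
    """Minimal Lambda-style entry point: parse the payload, preprocess, infer."""
    # API Gateway delivers the payload as a JSON string under "body";
    # direct invocations may pass the dict itself.
    body = json.loads(event["body"]) if isinstance(event.get("body"), str) else event
    features = [float(x) for x in body["features"]]  # basic validation
    return {
        "statusCode": 200,
        "body": json.dumps({"score": _score(features)}),
    }

resp = handler({"body": json.dumps({"features": [1, 2, 3]})})
print(resp["statusCode"], resp["body"])
```

Keeping the handler thin like this, with preprocessing and scoring in testable helpers, is what makes the event-driven pattern easy to monitor and redeploy.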
Strong understanding of MLOps practices including model versioning, CI/CD pipelines, and model monitoring. Strong communication and interpersonal skills to collaborate with cross-functional teams and stakeholders. Ability to troubleshoot performance bottlenecks and optimize cloud resources for cost-effectiveness. Preferred Qualifications: AWS Certification in Machine Learning, Solutions Architect, or AI Services. Experience with other AI tools (e.g., Anthropic Claude, OpenAI APIs, or Hugging Face). Knowledge of streaming architectures and services like Kafka or Kinesis. Familiarity with Databricks and its integration with AWS services. Why join Genpact Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation Make an impact - Drive change for global enterprises and solve business challenges that matter Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. 
Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 3 weeks ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions, from large-scale models onward, tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Inviting applications for the role of Senior Principal Consultant - AWS AI/ML Engineer. Responsibilities Design, develop, and deploy scalable AI/ML solutions using AWS services such as Amazon Bedrock, SageMaker, Amazon Q, Amazon Lex, Amazon Connect, and Lambda. Implement and optimize large language model (LLM) applications using Amazon Bedrock, including prompt engineering, fine-tuning, and orchestration for specific business use cases. Build and maintain end-to-end machine learning pipelines using SageMaker for model training, tuning, deployment, and monitoring. Integrate conversational AI and virtual assistants using Amazon Lex and Amazon Connect, with seamless user experiences and real-time inference. Leverage AWS Lambda for event-driven execution of model inference, data preprocessing, and microservices. 
Design and maintain scalable and secure data pipelines and AI workflows, ensuring efficient data flow to and from Redshift and other AWS data stores. Implement data ingestion, transformation, and model inference for structured and unstructured data using Python and AWS SDKs. Collaborate with data engineers and scientists to support development and deployment of ML models on AWS. Monitor AI/ML applications in production, ensuring optimal performance, low latency, and cost efficiency across all AI/ML services. Ensure implementation of AWS security best practices, including IAM policies, data encryption, and compliance with industry standards. Drive the integration of Amazon Q for enterprise AI-based assistance and automation across internal processes and systems. Participate in architecture reviews and recommend best-fit AWS AI/ML services for evolving business needs. Stay up to date with the latest advancements in AWS AI services, LLMs, and industry trends to inform technology strategy and innovation. Prepare documentation for ML pipelines, model performance reports, and system architecture. Qualifications we seek in you: Minimum Qualifications Proven hands-on experience with Amazon Bedrock, SageMaker, Lex, Connect, Lambda, and Redshift. Strong knowledge and application experience with Large Language Models (LLMs) and prompt engineering techniques. Experience building production-grade AI applications using AWS AI or other generative AI services. Solid programming experience in Python for ML development, data processing, and automation. Proficiency in designing and deploying conversational AI/chatbot solutions using Lex and Connect. Experience with Redshift for data warehousing and analytics integration with ML solutions. Good understanding of AWS architecture, scalability, availability, and security best practices. Familiarity with AWS development, deployment, and monitoring tools (CloudWatch, CodePipeline, etc.). 
Strong understanding of MLOps practices including model versioning, CI/CD pipelines, and model monitoring. Strong communication and interpersonal skills to collaborate with cross-functional teams and stakeholders. Ability to troubleshoot performance bottlenecks and optimize cloud resources for cost-effectiveness. Preferred Qualifications: AWS Certification in Machine Learning, Solutions Architect, or AI Services. Experience with other AI tools (e.g., Anthropic Claude, OpenAI APIs, or Hugging Face). Knowledge of streaming architectures and services like Kafka or Kinesis. Familiarity with Databricks and its integration with AWS services. Why join Genpact Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation Make an impact - Drive change for global enterprises and solve business challenges that matter Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. 
Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 3 weeks ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions, from large-scale models onward, tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Inviting applications for the role of Lead Consultant, Senior Data Scientist! In this role, you will need a strong background in Gen AI implementations, data engineering, developing ETL processes, and utilizing machine learning tools to extract insights and drive business decisions. The Data Scientist will be responsible for analysing large datasets, developing predictive models, and communicating findings to various stakeholders. Responsibilities Develop and maintain machine learning models to identify patterns and trends in large datasets. Utilize Gen AI and various LLMs to design and develop production-ready use cases. Collaborate with cross-functional teams to identify business problems and develop data-driven solutions. Communicate complex data findings and insights to non-technical stakeholders in a clear and concise manner. 
Continuously monitor and improve the performance of existing models and processes. Stay up to date with industry trends and advancements in data science and machine learning. Design and implement data models and ETL processes to extract, transform, and load data from various sources. Good hands-on experience with AWS Bedrock models, SageMaker, Lambda, etc. Data Exploration & Preparation - Conduct exploratory data analysis and clean large datasets for modeling. Business Strategy & Decision Making - Translate data insights into actionable business strategies. Mentor Junior Data Scientists - Provide guidance and expertise to junior team members. Collaborate with Cross-Functional Teams - Work with engineers, product managers, and stakeholders to align data solutions with business goals. Qualifications we seek in you! Minimum Qualifications Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field. Relevant years of experience in a data science or analytics role. Strong proficiency in SQL and experience with data warehousing and ETL processes. Experience with programming languages such as Python or R is a must (at least one). Familiarity with machine learning tools and libraries such as Pandas, scikit-learn, and AI libraries. Excellent knowledge of Gen AI, RAG, and LLM models, and a strong understanding of prompt engineering. Proficiency in Azure OpenAI and AWS SageMaker implementation. Good understanding of statistical techniques and advanced machine learning. Experience with data warehousing and ETL processes. Proficiency in SQL and database management. Familiarity with cloud-based data platforms such as AWS, Azure, or Google Cloud. Experience with Azure ML Studio is desirable. Knowledge of different machine learning algorithms and their applications. Familiarity with data preprocessing and feature engineering techniques. Preferred Qualifications/Skills Experience with model evaluation and performance metrics. 
Understanding of deep learning and neural networks is a plus. AWS Machine Learning or AWS Infrastructure Engineer certification is a plus. Why join Genpact Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation Make an impact - Drive change for global enterprises and solve business challenges that matter Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
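Since this role calls out RAG and LLM models, the retrieval step can be sketched with cosine similarity over toy vectors; a real system would obtain embeddings from an embedding model and serve them from a vector store, so everything below is an illustrative stand-in:

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=2):
    """Rank documents by cosine similarity to the query vector."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q  # cosine similarity of each (normalized) doc to the query
    return np.argsort(sims)[::-1][:k]

# Toy 4-dimensional "embeddings"; in practice these come from a model.
docs = np.array([
    [0.9, 0.1, 0.0, 0.0],  # doc 0: about invoices
    [0.0, 0.8, 0.2, 0.0],  # doc 1: about onboarding
    [0.8, 0.2, 0.1, 0.0],  # doc 2: also about invoices
])
query = np.array([1.0, 0.0, 0.0, 0.0])  # query embedding, invoice-like
ranked = top_k(query, docs)
print(ranked.tolist())
```

In a full RAG flow, the top-ranked documents would be inserted into the LLM prompt as context before generation, which is where the prompt-engineering skill in the qualifications comes in.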
Posted 3 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Enqurious Enqurious (formerly Mentorskool) is a fast-growing EdTech company based in Bengaluru that specializes in upskilling industry-ready Data + AI teams. We combine skill-driven precision learning with data-driven skill intelligence to help organizations deploy talent to projects 70% faster. Our mission is to bridge the skills gap in the data + AI space through problem-first, action-oriented, and mentorship-driven learning experiences. Our clientele includes large Enterprise Data and AI consulting firms like Fractal Analytics, Tredence, Tiger Analytics, and MathCo. We help our customers achieve upskilling in many different flavors including Graduate Data Engineer Upskilling, Continuous Learning, Certifications, Hackathons, and Project Accelerators. Enqurious is our home-grown Learning Engagement Platform where we put well-researched learning experiences integrated with labs to help learners experience real-world problems and challenges, leading to upskilling that delivers business outcomes. https://www.enqurious.com/ Position Overview Location: Bengaluru, Karnataka (Hybrid) Experience: 2+ years Employment Type: Full-time Department: AI & ML We are seeking a passionate Data Scientist with 2+ years of experience to join our core team. This role offers a unique opportunity to work on cutting-edge data science projects and contribute to shaping the next generation of data professionals through developing domain-first use cases delivered via mentoring. Must-Have Requirements Technical Skills SQL Mastery: Advanced proficiency in SQL with experience in complex query optimization, performance tuning, and working with large datasets and relational data modelling. Statistical & ML Foundations: Strong grasp of probability, statistics, hypothesis testing, and ML algorithms (supervised, unsupervised, and time-series). Python Programming: Proficiency with pandas, NumPy, scikit-learn, TensorFlow/PyTorch, and data-validation/automation scripting. 
Data Visualisation: Ability to craft compelling stories with Matplotlib, Seaborn, and Plotly. Cloud Fundamentals: Solid understanding of cloud computing principles; familiarity with ML services on AWS (SageMaker, Athena) or Azure (ML Studio, Databricks). Experience & Mindset - Must have Minimum 2 years of hands-on data science experience with a proven track record of building models and MLOps pipelines Model Development & Deployment: Experience building robust, scalable MLOps on cloud platforms, preferably AWS (S3, SageMaker) or Azure (ADLS, Databricks, MLflow, Azure ML Studio) Continuous Learning: Demonstrated ability to quickly adapt to new technologies, frameworks, and methodologies in the rapidly evolving data landscape Teaching/Mentoring Aptitude: Genuine interest and willingness to occasionally conduct corporate training sessions and workshops Ability to work in uncertain and ambiguous environments. We expect you to be a self-starter, and you will be required to demonstrate this skill in the interview Good to Have Startup Mindset: Openness to work closely with core and founding team members in creating RFPs, proposals, and strategic technical documents Client Interaction: Experience interfacing with clients to understand requirements and translate business needs into technical solutions Key Responsibilities Core Engineering (for experience building and creating simulated projects inspired by the real world) Design, build, and maintain reproducible ML pipelines (feature engineering, training, evaluation, and CI/CD deployment). Develop predictive, prescriptive, and generative models that are performant, explainable, and cost-efficient. Implement data-quality checks, bias/variance monitoring, and automated drift detection. 
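The reproducible-pipeline responsibility above can be sketched with scikit-learn's Pipeline; the dataset and steps are illustrative, but the pattern of bundling preprocessing with the estimator is what keeps training and inference consistent:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for real project data.
X, y = make_regression(n_samples=300, n_features=8, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feature engineering and the estimator travel together, so the exact same
# transform fitted on training data is applied at evaluation/serving time.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("model", Ridge(alpha=1.0)),
])
pipe.fit(X_train, y_train)
print(round(pipe.score(X_test, y_test), 3))
```

Serializing `pipe` as one artifact (with a pinned random seed and versioned data) is the usual first step toward the CI/CD deployment and drift monitoring the bullet describes.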
Conduct mentoring sessions on data science best practices, tools, and technologies.
Develop hands-on labs and real-world scenarios for upskilling programs.
Contribute to Enqurious's knowledge base and learning content library.
Strategic Contributions
Participate in technical discussions with the founding team and contribute to content roadmap decisions.
Assist in creating technical proposals, RFPs, and solution architectures for enterprise clients' upskilling needs.
Stay updated on industry trends and emerging technologies to enhance our training offerings.
Represent Enqurious at technical conferences, meetups, and community events.
What We Offer
Professional Growth
Accelerated Learning Environment: Work alongside industry experts and gain exposure to diverse data science challenges across multiple domains (Retail, CPG, E-Commerce, FinTech, Insurance).
Mentoring Opportunities: Develop your communication and leadership skills through corporate training and mentoring.
Industry Recognition: Opportunity to build your brand in the data science community.
Certification Support: Access to premium training resources and certification programs (Databricks, AWS, Azure).
Work Environment
Flexible Work Arrangements: Hybrid working model with a collaborative office environment in Bengaluru.
Innovation Culture: Be part of a team that's disrupting traditional corporate learning methodologies.
Direct Impact: Your work will directly influence how thousands of data scientists are trained globally.
Startup Agility: Fast-paced environment with opportunities to wear multiple hats and drive initiatives.
Compensation & Benefits
Competitive salary commensurate with experience and skills
Performance-based bonuses and equity opportunities
Health insurance and wellness programs
Professional development budget for conferences, courses, and certifications
Ideal Candidate Profile
You're the perfect fit if you:
Love building impactful models and sharing knowledge
Thrive in ambiguous, fast-paced environments where you can make a significant impact
Enjoy collaborating with diverse teams, including educators, business stakeholders, and technical experts
Are excited about the intersection of technology and education
Have a growth mindset and are energized by continuous learning and teaching
Can balance hands-on technical work with strategic thinking and planning
Application Process
1-2 interview rounds. Please note that we can only consider candidates with a notice period of 30 days or less.
How to Apply?
If you're excited about the opportunity to build cutting-edge data solutions while shaping the future of Data Science + AI education, we'd love to hear from you. Apply with your resume, a brief cover letter explaining your interest in this unique role, and any relevant project portfolios or certifications. Send your updated resume to learn@enqurious.com with the subject line "Application for Data Scientist -
Posted 3 weeks ago
9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Experience: 9+ Years
Role: AWS Solution Architect
Location: India
Job Summary: We are seeking a highly skilled AWS Solution Architect to design and implement cloud-based solutions that are scalable, secure, and cost-effective. The ideal candidate will have a deep understanding of AWS services and a strong background in cloud architecture, infrastructure as code, DevOps practices, and security best practices.
Key Responsibilities:
Design, build, and maintain scalable and secure AWS cloud architectures tailored to client or project needs.
Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
Should have experience in Agentic AI and Python.
Provide technical leadership in cloud strategy, solution development, and deployment.
Design for high availability, disaster recovery, and cost optimization.
Define and implement governance policies and best practices across AWS environments.
Work closely with development, operations, and security teams to ensure seamless integration and performance.
Evaluate and recommend tools, technologies, and processes to ensure the highest-quality product platform.
Lead proof-of-concepts and technical validation for new projects and services.
Required Skills & Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field (Master's preferred).
9+ years of experience in IT, with 4+ years in cloud architecture (specifically AWS).
AWS Certified Solutions Architect – Associate or Professional (preferred).
Strong expertise in core AWS services: EC2, S3, VPC, IAM, RDS, Lambda, API Gateway, CloudFormation, etc.
Proficiency in infrastructure-as-code tools such as Terraform, AWS CDK, or CloudFormation.
Experience with CI/CD tools such as Jenkins, GitLab CI, and AWS CodePipeline.
Knowledge of security best practices (encryption, IAM policies, network security).
Familiarity with containers (Docker, ECS, EKS) and serverless architecture.
Excellent communication, presentation, and client-facing skills.
Preferred Qualifications (Nice to Have):
Hands-on experience with AI/ML workloads on AWS (e.g., SageMaker, Bedrock).
Experience with multi-cloud or hybrid-cloud strategies.
Familiarity with agent-based AI frameworks, event-driven architecture, or GraphQL.
Experience integrating monitoring tools such as CloudWatch, Datadog, or New Relic.
Background in microservices architecture and API development.
Posted 3 weeks ago
4.0 - 7.0 years
6 - 9 Lacs
Gurgaon
On-site
About the Opportunity
Job Type: Permanent
Application Deadline: 31 July 2025
Title: Senior Analyst - Data Science
Department: Enterprise Data & Analytics
Location: Gurgaon
Reports To: Gaurav Shekhar
Level: Data Scientist 4
We’re proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our team and feel like you’re part of something bigger.
About your team
Join the Enterprise Data & Analytics team — collaborating across Fidelity’s global functions to empower the business with data-driven insights that unlock business opportunities, enhance client experiences, and drive strategic decision-making.
About your role
As a key contributor within the Enterprise Data & Analytics team, you will lead the development of machine learning and data science solutions for Fidelity Canada. This role is designed to turn advanced analytics into real-world impact—driving growth, enhancing client experiences, and informing high-stakes decisions. You’ll design, build, and deploy ML models on cloud and on-prem platforms, leveraging tools such as AWS SageMaker, Snowflake, Adobe, and Salesforce. Collaborating closely with business stakeholders, data engineers, and technology teams, you’ll translate complex challenges into scalable AI solutions. You’ll also champion the adoption of cloud-based analytics, contribute to MLOps best practices, and support the team through mentorship and knowledge sharing. This is a high-impact role for a hands-on problem solver who thrives on ownership, innovation, and seeing their work directly influence strategic outcomes.
About you
You have 4–7 years of experience in the data science domain, with a strong track record of delivering advanced machine learning solutions for business.
You’re skilled in developing models for classification, forecasting, and recommender systems, and hands-on with frameworks like Scikit-learn, TensorFlow, or PyTorch. You bring deep expertise in developing and deploying models on AWS SageMaker, strong business problem-solving abilities, and familiarity with emerging GenAI trends. A background in engineering, mathematics, or economics from a Tier 1 institution will be preferred.
Feel rewarded
For starters, we’ll offer you a comprehensive benefits package. We’ll value your wellbeing and support your development. And we’ll be as flexible as we can about where and when you work – finding a balance that works for all of us. It’s all part of our commitment to making you feel motivated by the work you do and happy to be part of our team. For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.
Posted 3 weeks ago
6.0 years
7 - 10 Lacs
Bengaluru
On-site
Requisition ID: 7579
Bangalore, India
Enphase Energy is a global energy technology company and a leading provider of solar, battery, and electric vehicle charging products. Founded in 2006, Enphase transformed the solar industry with our revolutionary microinverter technology, which turns sunlight into a safe, reliable, resilient, and scalable source of energy to power our lives. Today, the Enphase Energy System helps people make, use, save, and sell their own power. Enphase is also one of the fastest-growing and most innovative clean energy companies in the world, with approximately 68 million products installed across more than 145 countries. We are building teams that are designing, developing, and manufacturing next-generation energy technologies, and our work environment is fast-paced, fun, and full of exciting new projects. If you are passionate about advancing a more sustainable future, this is the perfect time to join Enphase!
The Sr. Data Scientist will be responsible for analyzing product performance in the fleet, providing support for the data management activities of the Quality/Customer Service organization, and collaborating with Engineering, Quality, and Customer Service teams as well as Information Technology.
What You Will Do
Strong understanding of industrial processes, sensor data, and IoT platforms, essential for building effective predictive maintenance models.
Experience translating theoretical concepts into engineered features, with a demonstrated ability to create features capturing important events or transitions within the data.
Expertise in crafting custom features that highlight unique patterns specific to the dataset or problem, enhancing model predictive power.
Ability to combine and synthesize information from multiple data sources to develop more informative features.
Advanced knowledge of Apache Spark (PySpark, SparkSQL, SparkR) and distributed computing, demonstrated through efficient processing and analysis of large-scale datasets.
Proficiency in Python, R, and SQL, with a proven track record of writing optimized and efficient Spark code for data processing and model training.
Hands-on experience with cloud-based machine learning platforms such as AWS SageMaker and Databricks, showcasing scalable model development and deployment.
Demonstrated capability to develop and implement custom statistical algorithms tailored to specific anomaly detection tasks.
Proficiency in statistical methods for identifying patterns and trends in large datasets, essential for predictive maintenance.
Demonstrated expertise in engineering features to highlight deviations or faults for early detection.
Proven leadership in managing predictive maintenance projects from conception to deployment, with a successful track record of cross-functional team collaboration.
Experience extracting temporal features, such as trends, seasonality, and lagged values, to improve model accuracy.
Skills in filtering, smoothing, and transforming data for noise reduction and effective feature extraction.
Experience optimizing code for performance in high-throughput, low-latency environments.
Experience deploying models into production, with expertise in monitoring their performance and integrating them with CI/CD pipelines using AWS, Docker, or Kubernetes.
Familiarity with end-to-end analytical architectures, including data lakes, data warehouses, and real-time processing systems.
Experience creating insightful dashboards and reports using tools such as Power BI, Tableau, or custom visualization frameworks to effectively communicate model results to stakeholders.
6+ years of experience in data science with a significant focus on predictive maintenance and anomaly detection.
Who You Are and What You Bring
Bachelor’s or Master’s degree/Diploma in Engineering, Statistics, Mathematics, or Computer Science
6+ years of experience as a Data Scientist
Strong problem-solving skills
Proven ability to work independently and accurately
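The temporal feature engineering called out above (lagged values, smoothing, and residuals for early fault detection) can be sketched in pandas. The `temperature` column, its values, and the window sizes are invented for illustration.

```python
import pandas as pd

# Hypothetical sensor readings from a single device, ordered in time.
df = pd.DataFrame({"temperature": [70, 71, 75, 90, 92, 74, 73, 72]})

# Lagged values expose recent history to the model.
df["temp_lag_1"] = df["temperature"].shift(1)
# Rolling mean smooths sensor noise; rolling std highlights instability.
df["temp_roll_mean_3"] = df["temperature"].rolling(window=3).mean()
df["temp_roll_std_3"] = df["temperature"].rolling(window=3).std()
# Deviation from the smoothed trend helps flag anomalous spikes.
df["temp_residual"] = df["temperature"] - df["temp_roll_mean_3"]
```

Trend and seasonality features follow the same pattern, typically with longer windows or calendar-based groupings.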
Posted 3 weeks ago
6.0 years
10 - 15 Lacs
Bengaluru
On-site
Key Responsibilities
Strong Python, Flask, REST API, and NoSQL skills. Familiarity with Docker is a plus. AWS Developer Associate certification is required; AWS Professional certification is preferred.
Architect, build, and maintain secure, scalable backend services on AWS platforms.
Utilize core AWS services like Lambda, DynamoDB, API Gateway, and serverless technologies.
Design and deliver RESTful APIs using the Python Flask framework.
Leverage NoSQL databases and design efficient data models for large user bases.
Integrate with web service APIs and external systems.
Apply AWS SageMaker for machine learning and analytics (optional but preferred).
Collaborate effectively with diverse teams (business analysts, data scientists, etc.).
Troubleshoot and resolve technical issues within distributed systems.
Employ Agile methodologies (JIRA, Git) and adhere to best practices.
Continuously learn and adapt to new technologies and industry standards.
Qualifications
A bachelor’s degree in computer science, information technology, or a relevant discipline is required; a master’s degree is preferred.
At least 6 years of development experience, with 5+ years of experience in AWS.
Demonstrated skills in planning, designing, developing, architecting, and implementing applications.
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,500,000.00 per year
Benefits: Paid time off; Provident Fund
Location Type: In-person
Schedule: Day shift, Monday to Friday, weekend availability
Application Question(s): Can you join within 10-15 days?
Education: Bachelor's (Preferred)
Experience: total work: 5 years (Required); Programming: 5 years (Required)
Work Location: In person
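As a quick illustration of the Flask RESTful API skills listed above, here is a minimal sketch of a resource endpoint. The `/items` routes and the in-memory dict (standing in for DynamoDB) are illustrative assumptions, not part of the posting.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
_items = {}  # in-memory store; a real service would use DynamoDB or similar

@app.route("/items", methods=["POST"])
def create_item():
    """Create an item from the JSON body and return it with a generated id."""
    payload = request.get_json(force=True)
    item_id = str(len(_items) + 1)
    _items[item_id] = {"id": item_id, **payload}
    return jsonify(_items[item_id]), 201

@app.route("/items/<item_id>", methods=["GET"])
def get_item(item_id):
    """Fetch one item, or 404 if the id is unknown."""
    item = _items.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item), 200
```

Swapping the dict for a DynamoDB table would keep the route handlers unchanged apart from the storage calls.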
Posted 3 weeks ago
3.0 years
4 Lacs
Indore
On-site
About the Role:
We are looking for a highly skilled and forward-thinking AI/ML Engineer with 3–4 years of practical experience in building and deploying AI-powered solutions for industrial automation, computer vision, and LLM-based applications. The ideal candidate should have experience with the latest AI tools and frameworks, including LangChain, LangGraph, Vision Transformers, and MLOps on AWS (SageMaker), as well as expertise in building multi-agent chat applications with React agents and vector-based RAG (Retrieval-Augmented Generation) architectures.
Responsibilities:
· Design, train, and deploy AI/ML models for industrial automation, including computer vision systems using OpenCV and deep learning frameworks.
· Develop multi-agent chat applications integrating LLMs, React-based agents, and contextual memory.
· Implement Vision Transformers (ViTs) for advanced visual understanding tasks.
· Utilize LangChain, LangGraph, and RAG techniques to create intelligent conversational systems with vector embeddings and document retrieval.
· Fine-tune pre-trained LLMs for custom enterprise use cases.
· Collaborate with frontend teams to build responsive, intelligent UIs using React + AI backends.
· Deploy AI solutions on AWS Cloud, leveraging SageMaker, Lambda, S3, and related MLOps tools for model lifecycle management.
· Ensure high performance, reliability, and scalability of deployed AI systems.
Required Skills
· 3–4 years of hands-on experience in AI/ML engineering, preferably with industrial or automation-focused projects.
· Proficiency in Python and frameworks like PyTorch, TensorFlow, and Scikit-learn.
· Strong understanding of LLMs (GPT, Claude, LLaMA, etc.), prompt engineering, and fine-tuning techniques.
· Experience with LangChain, LangGraph, and RAG-based architectures using vector databases like FAISS, Pinecone, or Weaviate.
· Expertise in Vision Transformers, YOLO, Detectron2, and computer vision techniques.
· Familiarity with multi-agent architectures, React agents, and building intelligent UIs with frontend-backend synergy.
· Working knowledge of AWS services (SageMaker, Lambda, EC2, S3) and MLOps workflows (CI/CD for ML).
· Experience deploying and maintaining models in production environments.
Qualifications:
· Experience with edge AI, NVIDIA Jetson, or industrial IoT integration.
· Prior involvement in developing AI-powered chatbots or assistants with memory and tool integration.
· Exposure to containerization (Docker) and model versioning tools like MLflow or DVC.
· Contributions to open-source AI projects or published research in AI/ML.
Job Type: Full-time
Pay: From ₹412,334.30 per year
Benefits: Health insurance; Paid sick time; Provident Fund
Schedule: Day shift
Supplemental Pay: Performance bonus
Ability to commute/relocate: Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Required)
Work Location: In person
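The RAG retrieval step mentioned in this posting can be sketched without any external vector database: embed the documents and the query, then rank by cosine similarity. The bag-of-words `embed` below is a toy stand-in for a real sentence-embedding model plus a vector store such as FAISS or Pinecone, and the document strings are invented for illustration.

```python
import numpy as np

def embed(text: str, vocab: list[str]) -> np.ndarray:
    """Toy bag-of-words 'embedding', L2-normalised so dot product = cosine."""
    tokens = text.lower().split()
    vec = np.array([float(tokens.count(w)) for w in vocab])
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Retrieval step of RAG: return the top_k documents most similar to the query."""
    vocab = sorted({w for d in documents + [query] for w in d.lower().split()})
    q = embed(query, vocab)
    scores = [float(embed(d, vocab) @ q) for d in documents]
    order = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in order]

docs = [
    "PLC alarm codes for conveyor motor faults",
    "Quarterly sales report for the retail division",
    "Maintenance manual for conveyor belt motors",
]
context = retrieve("conveyor motor fault troubleshooting", docs)
```

The retrieved `context` would then be packed into the LLM prompt; the generation step is omitted here.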
Posted 3 weeks ago
3.0 - 5.0 years
8 - 9 Lacs
Calcutta
On-site
3 - 5 Years
4 Openings
Kolkata, Pune
Role description
Role Proficiency: Independently interprets data and analyses results using statistical techniques.
Outcomes:
Independently mine and acquire data from primary and secondary sources and reorganize it into a format that can be easily read by either a machine or a person, generating insights and helping clients make better decisions.
Develop reports and analyses that effectively communicate trends, patterns, and predictions using relevant data.
Utilize historical data sets and planned changes to business models to forecast business trends.
Work alongside teams within the business or the management team to establish business needs.
Create visualizations, including dashboards, flowcharts, and graphs, to relay business concepts through visuals to colleagues and other relevant stakeholders.
Set FAST goals.
Measures of Outcomes:
Schedule adherence to tasks
Quality – errors in data interpretation and modelling
Number of business processes changed due to vital analysis
Number of insights generated for business decisions
Number of stakeholder appreciations/escalations
Number of customer appreciations
Number of mandatory trainings completed
Outputs Expected:
Data Mining: Acquire data from various sources.
Reorganizing/Filtering Data: Consider only relevant data from the mined data and convert it into a format which is consistent and analysable.
Analysis: Use statistical methods to analyse data and generate useful results.
Create Data Models: Use data to create models that depict trends in the customer base and the consumer population as a whole.
Create Reports: Create reports depicting the trends and behaviours from the analysed data.
Document: Create documentation for your own work and perform peer review of others' documentation.
Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities.
Status Reporting: Report the status of assigned tasks and comply with project-related reporting standards and processes.
Code: Create efficient and reusable code; follow coding best practices.
Code Versioning: Organize and manage changes and revisions to code using a version control tool such as git or bitbucket.
Quality: Provide quality assurance of imported data, working with a quality assurance analyst if necessary.
Performance Management: Set FAST goals and seek feedback from your supervisor.
Skill Examples:
Analytical skills: ability to work with large amounts of data (facts, figures, and number crunching).
Communication skills: ability to present findings or translate the data into an understandable document.
Critical thinking: ability to look at the numbers, trends, and data and come up with new conclusions based on the findings.
Attention to detail: being vigilant in the analysis to come to accurate conclusions.
Quantitative skills: knowledge of statistical methods and data analysis software.
Presentation skills: reports and oral presentations to senior colleagues.
Mathematical skills to estimate numerical data.
Ability to work in a team environment; proactively ask for and offer help.
Knowledge Examples:
Proficiency in mathematics and calculations
Spreadsheet tools such as Microsoft Excel or Google Sheets
Advanced knowledge of Tableau or Power BI
SQL, Python
DBMS, operating systems, and software platforms
Knowledge of the customer domain and the sub-domain where the problem is solved
Code version control, e.g. git, bitbucket, etc.
Additional Comments: Statistical Concepts, SQL, Machine Learning (Regression and Classification), Deep Learning (ANN, RNN, CNN), Advanced NLP, Computer Vision, Gen AI/LLM (Prompt Engineering, RAG, Fine-Tuning), AWS SageMaker/Azure ML/Google Vertex AI, basic implementation experience of Docker, Kubernetes, Kubeflow, MLOps, Python (numpy, pandas, sklearn, streamlit, matplotlib, seaborn)
Skills: Data Science, Python, Deep Learning
About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
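As a minimal illustration of the "Machine Learning (Regression)" skill named in the additional comments above, the sketch below fits and scores a linear model with scikit-learn; the data-generating process and the score threshold are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic data: y is a noisy linear function of a single feature.
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.5, 200)

# Hold out a test split, fit, and evaluate out-of-sample fit quality.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
score = r2_score(y_test, model.predict(X_test))
```

Classification follows the same fit/predict/score pattern with an estimator such as `LogisticRegression` and a metric such as accuracy.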
Posted 3 weeks ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Role Description
Role Proficiency: Independently develop data-driven solutions to difficult business challenges by utilizing analytical, statistical, and programming skills to collect, analyze, and interpret large data sets under supervision.
Outcomes
Work with stakeholders throughout the organization to identify opportunities for leveraging customer data to build models that generate business insights.
Create new experimental frameworks or build automated tools to collect data.
Correlate similar data sets to find actionable results.
Build predictive models and machine learning algorithms to analyse large amounts of information and discover trends and patterns.
Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques, business strategies, etc.
Develop processes and tools to monitor and analyse model performance and data accuracy.
Develop data visualizations and illustrations for given business problems.
Use predictive modelling to increase and optimize customer experiences and other business outcomes.
Coordinate with different functional teams to implement models and monitor outcomes.
Set FAST goals and provide feedback on the FAST goals of reportees.
Measures of Outcomes
Number of business processes changed due to vital analysis
Number of business intelligence dashboards developed
Number of productivity standards defined for the project
Number of prediction and modelling models used
Number of new approaches applied to understand business trends
Quality of data visualization done to help non-technical stakeholders comprehend easily
Number of mandatory trainings completed
Outputs Expected
Statistical Techniques: Apply statistical techniques such as regression, properties of distributions, and statistical tests to analyse data.
Machine Learning Techniques: Apply machine learning techniques such as clustering, decision tree learning, and artificial neural networks to streamline data analysis.
Creating Advanced Algorithms: Create advanced algorithms and statistics using regression, simulation, scenario analysis, modelling, etc.
Data Visualization: Visualize and present data for stakeholders using Periscope, Business Objects, D3, ggplot, etc.
Management and Strategy: Oversee the activities of analyst personnel and ensure the efficient execution of their duties.
Critical Business Insights: Mine the business’s database in search of critical business insights and communicate findings to the relevant departments.
Code: Create efficient and reusable code meant for the improvement, manipulation, and analysis of data.
Version Control: Manage the project codebase through version control tools, e.g. git, bitbucket, etc.
Predictive Analytics: Seek to determine likely outcomes by detecting tendencies in descriptive and diagnostic analysis.
Prescriptive Analytics: Attempt to identify what business action to take.
Create Reports: Create reports depicting the trends and behaviours from the analysed data; train end users on new reports and dashboards.
Document: Create documentation for your own work and perform peer review of others' documentation.
Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities.
Status Reporting: Report the status of assigned tasks and comply with project-related reporting standards and processes.
Skill Examples
Excellent pattern recognition and predictive modelling skills.
Extensive background in data mining and statistical analysis.
Expertise in machine learning techniques and creating algorithms.
Analytical skills: ability to work with large amounts of data (facts, figures, and number crunching).
Communication skills: communicate effectively with a diverse population at various organization levels with the right level of detail.
Critical thinking: data analysts must look at numbers, trends, and data and come to new conclusions based on the findings.
Strong meeting facilitation and presentation skills.
Attention to detail: being vigilant in the analysis to come to correct conclusions.
Mathematical skills to estimate numerical data.
Ability to work in a team environment, with strong interpersonal skills for a collaborative environment; proactively ask for and offer help.
Knowledge Examples
Programming languages – Java/Python/R
Web services - Redshift, S3, Spark, DigitalOcean, etc.
Statistical and data mining techniques: GLM/regression, random forest, boosting trees, text mining, social network analysis, etc.
Google Analytics, Site Catalyst, Coremetrics, AdWords, Crimson Hexagon, Facebook Insights, etc.
Computing tools - Map/Reduce, Hadoop, Hive, Spark, Gurobi, MySQL, etc.
Database languages such as SQL and NoSQL
Analytical tools and languages such as SAS and Mahout
Practical experience with ETL, data processing, etc.
Proficiency in MATLAB
Data visualization software such as Tableau or Qlik
Proficiency in mathematics and calculations
Spreadsheet tools such as Microsoft Excel or Google Sheets
DBMS, operating systems, and software platforms
Knowledge of the customer domain and the sub-domain where the problem is solved
Proficiency in at least one version control tool, such as git or bitbucket
Experience working with a project management tool such as Jira
Additional Comments
Must have: Statistical Concepts, SQL, Machine Learning (Regression and Classification), Deep Learning (ANN, RNN, CNN), Advanced NLP, Computer Vision, Gen AI/LLM (Prompt Engineering, RAG, Fine-Tuning), AWS SageMaker/Azure ML/Google Vertex AI, basic implementation experience of Docker, Kubernetes, Kubeflow, MLOps, Python (numpy, pandas, sklearn, streamlit, matplotlib, seaborn)
Skills: Data Management, Data Science, Python
Posted 3 weeks ago
5.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
AWS Data Engineer - Senior
We are seeking a highly skilled and motivated hands-on AWS Data Engineer with 5-10 years of experience in AWS Glue, PySpark, AWS Redshift, S3, and Python to join our dynamic team. As a Data Engineer, you will be responsible for designing, developing, and optimizing data pipelines and solutions that support business intelligence, analytics, and large-scale data processing. You will work closely with data scientists, analysts, and other engineering teams to ensure seamless data flow across our systems.
Technical Skills (Must Have):
Strong experience in AWS data services such as Glue, Lambda, EventBridge, Kinesis, S3/EMR, Redshift, RDS, and Step Functions, plus Airflow and PySpark
Strong exposure to IAM, CloudTrail, cluster optimization, Python, and SQL
Expertise in data design, STTM, understanding of data models, data component design, automated testing, code coverage, UAT support, deployment, and go-live
Experience with version control systems such as SVN and Git
Create and manage AWS Glue crawlers and jobs to automate data cataloging and ingestion processes across various structured and unstructured data sources
Strong experience with AWS Glue: building ETL pipelines, managing crawlers, and working with the Glue Data Catalog
Proficiency in AWS Redshift: designing and managing Redshift clusters, writing complex SQL queries, and optimizing query performance
Enable data consumption from reporting and analytics business applications using AWS services (e.g., QuickSight, SageMaker, JDBC/ODBC connectivity)
Behavioural Skills:
Willing to work 5 days a week from the ODC / client location (based on the project, this can be hybrid with 3 days a week)
Ability to lead developers and engage with client stakeholders to drive technical decisions
Ability to do technical design and POCs; help build and analyse logical data models, required entities, relationships, data constraints, and dependencies focused on enabling reporting and analytics business use cases
Should be able to work in an Agile environment
Should have strong communication skills
Good to Have:
Exposure to Financial Services, Wealth and Asset Management
Exposure to data science and full-stack technologies; GenAI will be an added advantage
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for a Staff Engineer to lead the design, development, and optimization of AI-powered platforms with a strong focus on Python, API development, and AWS AI services. You will be instrumental in shaping system architecture, mentoring engineers, and driving end-to-end solutions that leverage NLP, cloud services, and modern frontend frameworks. As a Staff Engineer, you’ll be a key technical leader partnering closely with product, design, and engineering teams to build scalable and intelligent systems.
Key Responsibilities:
Architect and build scalable, high-performance backend systems using Python
Design robust RESTful APIs and guide the engineering team on best practices for performance and security
Leverage AWS AI/ML services (e.g., Comprehend, Lex, SageMaker) to build intelligent features and capabilities
Provide technical leadership on NLP solutions using libraries such as spaCy, transformers, or NLTK
Ensure comprehensive unit testing across APIs and databases; advocate for clean, testable code
Guide the development of full-stack features involving JavaScript, React, and Next.js
Own and evolve system architecture, ensuring modularity, scalability, and resilience
Promote strong engineering practices with Git, Bitbucket, and CI/CD tooling
Collaborate cross-functionally to drive technical decisions aligned with product goals
Mentor engineers across levels and foster a culture of technical excellence
Technical Requirements:
8+ years of hands-on software development experience, primarily in Python
Proven expertise in API development, system design, and performance tuning
Strong background in AWS, particularly AI/ML and NLP services
Experience building intelligent features using NLP frameworks
Proficiency in front-end technologies: JavaScript, React, Next.js (preferred)
Solid understanding of RDBMS (PostgreSQL, MySQL, or similar)
Expertise in version control systems and collaborative workflows
Track record of technical leadership, mentoring, and architectural ownership
Preferred Qualifications:
Experience with microservices, event-driven architectures, or serverless systems
Familiarity with Docker, Kubernetes, and infrastructure-as-code tools
Prior experience in leading cross-functional engineering initiatives
Posted 3 weeks ago
3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must have skills: Python (Programming Language)
Good to have skills: NA
Minimum 3 Year(s) Of Experience Is Required
Educational Qualification: Bachelor of Engineering in Electronics or any related stream
Summary: As an IoT Engineer with Python expertise, you will develop data-driven applications on AWS IoT for the client. You will be responsible for creating scalable data pipelines and algorithms to process and deliver actionable vehicle data insights.
Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Collaborate with data scientists and analysts to implement data processing pipelines
4. Participate in architecture discussions and contribute to technical decision-making
5. Ensure the scalability, reliability, and performance of Python applications on AWS
6. Stay current with Python ecosystem developments, AWS services, and industry best practices
Professional & Technical Skills:
1. At least 3 years of experience in Python programming with integration with AWS IoT Core
2. Exposure to database technologies (SQL and NoSQL) and API development
3. Significant experience working with AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) and Infrastructure as Code (e.g., AWS CloudFormation, Terraform)
4. Exposure to Test-Driven Development (TDD)
5. Practices DevOps in software solutions and is well-versed in Agile methodologies
6. AWS certification is a plus
7. Well-developed analytical skills; rigorous but pragmatic, able to justify decisions with solid rationale
Additional Information:
1. The candidate should have a minimum of 7 years of experience in Python programming
2. This position is based at our Hyderabad office
3. A 15-year full-time education is required (bachelor’s degree in Computer Science, Software Engineering, or a related field); Bachelor of Engineering in Electronics or any related stream
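A toy sketch of the vehicle-data processing this role describes: aggregate raw JSON telemetry messages (as an AWS IoT rule action might deliver them) into per-vehicle insights. Field names such as `vehicle_id` and `speed_kmh` are assumptions for illustration, not from the posting.

```python
import json
from statistics import mean

def summarize_telemetry(messages: list[str]) -> dict:
    """Group raw JSON telemetry payloads by vehicle and compute simple insights."""
    readings = [json.loads(m) for m in messages]
    by_vehicle: dict[str, list[float]] = {}
    for r in readings:
        by_vehicle.setdefault(r["vehicle_id"], []).append(r["speed_kmh"])
    return {
        vid: {"avg_speed": mean(speeds), "samples": len(speeds)}
        for vid, speeds in by_vehicle.items()
    }

# Simulated MQTT payloads as they might arrive from AWS IoT Core.
payloads = [
    json.dumps({"vehicle_id": "v1", "speed_kmh": 60}),
    json.dumps({"vehicle_id": "v1", "speed_kmh": 80}),
    json.dumps({"vehicle_id": "v2", "speed_kmh": 50}),
]
summary = summarize_telemetry(payloads)
```

At fleet scale the same aggregation would typically run in a streaming pipeline (e.g., Kinesis plus Lambda) rather than in memory.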
Posted 3 weeks ago