Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
We are seeking an experienced and highly skilled Principal Data Scientist to join our team as an AI Builder. The ideal candidate will have a strong background in data science, machine learning, and AI, with a proven track record of developing and deploying advanced AI models to solve complex healthcare challenges. This role requires leadership, innovation, and the ability to drive AI initiatives from concept to production.
Key Responsibilities:
- Lead and conduct advanced research in AI/ML to drive innovation in healthcare.
- Design, develop, and implement machine learning models and algorithms to solve complex healthcare problems.
- Lead the delivery of high-impact analytics solutions, integrating advanced AI and Generative AI components to support business use cases within the Analytics group.
- Design and manage development of modular, reusable, and maintainable software supporting the Quality organization and strategic analytics initiatives.
- Maintain hands-on involvement in solution development, ensuring rapid response to bugs and security vulnerabilities across owned code repositories.
- Apply and promote software engineering best practices, fostering technical excellence across the engineering community.
- Collaborate extensively with teams across Security, Compliance, Engineering, Product Management, Service Management, and Business Operations to ensure alignment and successful execution.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Key Skills:
- Expert-level Python for production systems.
- In-depth knowledge of LLM vulnerabilities and implementation of guardrails.
- Proven experience working with AWS SageMaker for model development, deployment, and monitoring.
- Deploy applications in orchestrated environments like Kubernetes (AKS) using Docker.
- Apply explainable AI (XAI) tools such as LIME and SHAP to ensure transparency and interpretability of AI models.
- Apply deep learning algorithm techniques, open-source tools, and technologies.
- Implement classical machine learning algorithms such as Logistic Regression, Decision Trees, Clustering (K-means, Hierarchical, and Self-Organizing Maps), t-SNE, PCA, Bayesian models, Time Series (ARIMA/ARMA), and Recommender Systems.
- Develop models using machine learning techniques such as Random Forest, GBM, KNN, SVM, Bayesian methods, and Text Mining, as well as deep learning architectures (Multilayer Perceptron and Feedforward, CNN, LSTM, and GRU networks).
- Experience with cloud platforms (AWS preferred), including SageMaker.
- Proficiency in statistical analysis tools and libraries (e.g., NumPy, Pandas, PyMC3, or similar).
- Hands-on experience with FastAPI and Pydantic for building microservices.
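As a flavor of the classical techniques named above, here is a minimal PCA sketch in plain NumPy (illustrative only; the function and variable names are ours, not part of any required toolkit):

```python
import numpy as np

def pca(X, n_components):
    # Center the data so the principal axes pass through the mean
    Xc = X - X.mean(axis=0)
    # Eigen-decomposition of the covariance matrix
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Keep the components with the largest explained variance
    order = np.argsort(eigvals)[::-1][:n_components]
    return Xc @ eigvecs[:, order]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca(X, 2)  # project 5-dimensional data onto its top 2 components
```

In production work the library equivalents (e.g., scikit-learn's PCA) would be preferred; the sketch just shows the underlying linear algebra.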
Qualifications:
- Bachelor's degree or equivalent experience in Computer Science, Data Science, AI/ML, or a related field.
- 8+ years of proven experience in leading AI/ML research projects and teams.
- Strong programming skills in Python, R, and SQL.
- Excellent problem-solving skills and the ability to work independently and collaboratively.
- Strong communication skills and the ability to present complex technical concepts to non-technical stakeholders.
- Familiarity with AI/ML governance, ethics, and responsible AI practices.
Key Skills:
- Proficiency with Automated Machine Learning (AutoML) tools such as Azure ML Studio, Google Cloud AutoML, and DataRobot; strong statistical and mathematical knowledge - Must have
- Deep learning techniques, open-source tools and technologies, statistical tools, and programming environments such as Python, PySpark, and SQL - Must have
- Classical machine learning algorithms such as Logistic Regression, Decision Trees, Clustering (K-means, Hierarchical, and Self-Organizing Maps), t-SNE, PCA, Bayesian models, Time Series ARIMA/ARMA, and Recommender Systems (Collaborative Filtering, FPMC, FISM, Fossil) - Must have
- Machine learning techniques such as Random Forest, GBM, KNN, SVM, Bayesian methods, and Text Mining; Multilayer Perceptron and neural networks (Feedforward, CNN, LSTMs, GRUs) a plus. Optimization techniques (L1 and L2 activity regularization; Adam, Adagrad, Adadelta); cost functions in neural nets (Contrastive Loss, Hinge Loss, Binary Cross-Entropy, Categorical Cross-Entropy); applications developed in KRR, NLP, speech, or image processing - Must have
- Deep learning frameworks for production systems such as TensorFlow, Keras (for RPD and neural net architecture evaluation), PyTorch, XGBoost, Caffe, and Theano - Must have
- Exposure to or experience with code version management tools such as GitHub - Must have
- Synthetic data generation: tools such as Gretel.ai and Synthea for generating synthetic data, useful for training models when real data is scarce or sensitive - Good to have
- Generative AI experience: LLMs and RAG architecture (embeddings with Azure AI Search or Databricks, semantic search, etc.) - Must have
- Production-grade solution architecture for RAG-based AI solutions - Good to have
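To illustrate the retrieval step at the heart of a RAG architecture, here is a toy sketch using cosine similarity over pre-computed embedding vectors (the vectors and function names are hypothetical stand-ins; a real system would use an embedding model and a vector store such as Azure AI Search):

```python
import numpy as np

def retrieve(query_vec, doc_vecs, k=2):
    # Cosine similarity between the query and each document embedding
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    # Indices of the k most similar documents, best first
    return np.argsort(scores)[::-1][:k]

# Three stand-in 2-D document embeddings
docs = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [0.7, 0.7]])
top = retrieve(np.array([1.0, 0.1]), docs, k=2)
```

The retrieved documents would then be passed to an LLM as grounding context; chunking, re-ranking, and guardrails are omitted here for brevity.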