Key Responsibilities:
- Manage large machine learning applications; design and implement new frameworks for building scalable, efficient data-processing workflows and machine learning pipelines
- Build a tightly integrated pipeline that optimizes and compiles models, then orchestrates their execution
- Collaborate with CPU, GPU, and Neural Engine hardware backends to push inference performance and efficiency
- Work closely with feature teams to facilitate and debug the integration of increasingly sophisticated models, including large language models
- Automate data processing and extraction
- Engage with the sales team to find opportunities, understand requirements, and translate those requirements into technical solutions
- Develop reusable ML models and assets and take them to production
Required Skills:
- Excellent Python programming and debugging skills
- Proficiency with SQL, relational databases, and non-relational databases
- Passion for API design and software architecture
- Strong communication skills and the ability to clearly explain difficult technical topics to everyone from data scientists to engineers to business partners
- Experience with modern neural network architectures and deep learning libraries (Keras, TensorFlow, PyTorch)
- Experience with unsupervised ML algorithms
- Experience with time-series models and anomaly-detection problems
- Experience with modern large language models (ChatGPT, BERT) and their applications
- Expertise in performance optimization
- Experience or knowledge of public cloud AWS services (S3, Lambda)
- Familiarity with distributed databases such as Snowflake and Oracle
- Experience with containerization and orchestration technologies such as Docker and Kubernetes
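The unsupervised ML, time-series, and anomaly-detection skills listed above can be illustrated with a minimal sketch. This uses scikit-learn's `IsolationForest` (one common unsupervised choice, not a method prescribed by this posting); the synthetic series, injected spikes, and contamination rate are illustrative assumptions:

```python
# Minimal anomaly-detection sketch on a synthetic time series.
# IsolationForest is an unsupervised model: it flags points that are
# easy to isolate with random splits, without needing labeled anomalies.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.1, 500)
series[100] += 3.0  # inject an obvious spike
series[400] -= 3.0  # inject an obvious dip

# contamination sets the expected anomaly fraction (assumed here)
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(series.reshape(-1, 1))  # -1 marks anomalies

anomalies = np.where(labels == -1)[0]
print(anomalies)  # indices flagged as anomalous, including 100 and 400
```

In a real pipeline the synthetic series would be replaced by a feature matrix (e.g., rolling statistics over the raw signal) rather than raw values alone.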
Technical Requirements:
- 2 years of experience with the Python data science toolset
- 5 years of experience building data pipelines, data processing, and reporting using Python
- Python libraries: NumPy, pandas, Matplotlib, seaborn, scikit-learn; data manipulation, wrangling, time-series forecasting, etc.; prior experience in a data science project building data pipelines, data processing, and reporting
- Experience using Agile development methodologies
- Good understanding of software development and enterprise architecture patterns
- Hands-on experience with open-source big data technologies
- Data-processing experience with streaming, data wrangling, and crawling using Python libraries
- Expertise in and understanding of common data transformation methods
- Ability to understand API specs and determine relevant API calls
- ETL, i.e., Extract, Transform, Load data, and implement SQL-friendly data structures
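The ETL bullet above (extract, transform, load into SQL-friendly structures) can be sketched with pandas and the standard-library `sqlite3` module. The records, table name, and columns are illustrative assumptions, not part of the role description:

```python
# Minimal ETL sketch: extract raw records, transform with pandas,
# load into a flat, typed (SQL-friendly) table in SQLite.
import sqlite3
import pandas as pd

# Extract: raw records, e.g. parsed from an API response (made-up data)
raw = [
    {"ts": "2024-01-01", "sensor": "a", "reading": "1.5"},
    {"ts": "2024-01-02", "sensor": "a", "reading": "2.5"},
    {"ts": "2024-01-02", "sensor": "b", "reading": "bad"},  # dirty row
]

# Transform: enforce types, drop rows whose conversion fails
df = pd.DataFrame(raw)
df["ts"] = pd.to_datetime(df["ts"])
df["reading"] = pd.to_numeric(df["reading"], errors="coerce")
df = df.dropna(subset=["reading"])

# Load: write a flat table that downstream SQL can query directly
con = sqlite3.connect(":memory:")
df.to_sql("readings", con, index=False)
print(con.execute("SELECT COUNT(*), SUM(reading) FROM readings").fetchone())
```

The same extract/transform/load split applies whether the sink is SQLite, an RDBMS, or a warehouse such as Snowflake; only the load step's connection changes.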
Additional Responsibilities:
- 4 years of application development in Python and Scala
- Python experience with file operations, OOP, multiprocessing, and threading
- SQL and PL/SQL development
- RDBMS and data modelling
- Experience in REST service development
- Experience with Docker and Kubernetes deployments
- Basic understanding of cloud architecture and services
Preferred Skills:
- Technology->Machine Learning->Python
- Technology->Database->Microsoft SQL Server
- Technology->Cloud Platform->Amazon Webservices Managed Services
- Technology->Data on Cloud-DataStore->Snowflake