Data Scientist with Python

12 - 15 years

20 - 25 Lacs

Bengaluru

Posted: 3 weeks ago | Platform: Naukri


Skills Required

Data management, Coding, Cassandra, Machine learning, Data collection, Data processing, Release management, Distributed systems, Analytics, Python

Work Mode

Work from Office

Job Type

Full Time

Job Description

About this opportunity:

This position plays a crucial role in the development of Python-based solutions, their deployment within a Kubernetes-based environment, and in ensuring smooth data flow for our machine learning and data science initiatives. The ideal candidate will possess a strong foundation in Python programming, hands-on experience with ElasticSearch, Logstash, and Kibana (ELK), a solid grasp of fundamental Spark concepts, and familiarity with visualization tools such as Grafana and Kibana. A background in MLOps and expertise in both machine learning model development and deployment will be highly advantageous.

What you will do:

• Generative AI & LLM development; 12-15 years of experience as an Enterprise Software Architect with strong hands-on experience
• Strong hands-on experience with Python and with microservice architecture concepts and development
• Expertise in crafting technical guides and architecture designs for an AI platform
• Experience with the Elastic Stack, Cassandra, or any other big data tool
• Experience with advanced distributed systems and tooling, for example Prometheus, Terraform, Kubernetes, Helm, Vault, and CI/CD systems
• Prior experience building multiple AI/ML models, deploying them into production environments, and creating the associated data pipelines
• Experience guiding teams working on AI, ML, big data, and analytics
• Strong understanding of development practices such as architecture design, coding, testing, and verification
• Experience delivering software products, for example release management and documentation

What you will bring:

• Python Development: Write clean, efficient, and maintainable Python code to support data engineering tasks, including data collection, transformation, and integration with machine learning models.
• Data Pipeline Development: Design, develop, and maintain robust data pipelines that efficiently gather, process, and transform data from various sources into a format suitable for machine learning and data science tasks, using the ELK stack, Python, and other leading technologies (a minimal sketch follows this description).
• Spark Knowledge: Apply basic Spark concepts for distributed data processing when necessary, optimizing data workflows for performance and scalability.
• ELK Integration: Use ElasticSearch, Logstash, and Kibana (ELK) for data management, data indexing, and real-time data visualization. Knowledge of OpenSearch and its related stack would be beneficial.
• Grafana and Kibana: Create and manage dashboards and visualizations in Grafana and Kibana to provide real-time insight into data and system performance.
• Kubernetes Deployment: Deploy data engineering solutions and machine learning models to a Kubernetes-based environment, ensuring security, scalability, reliability, and high availability.

Primary country and city: India (IN) || Bangalore
Req ID: 766747
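As a rough illustration of the pipeline work named in the "Data Pipeline Development" and "ELK Integration" bullets above, the sketch below reads raw records, normalizes them in Python, and bulk-indexes them into Elasticsearch with the official Python client. The index name pipeline-demo, the field names, and the localhost cluster URL are assumptions made for the example, not details from the posting.

```python
"""Minimal sketch of a Python-to-Elasticsearch ingestion step, assuming a
local cluster and an illustrative index/field layout (not from the posting)."""

from datetime import datetime, timezone
from typing import Iterable

from elasticsearch import Elasticsearch, helpers  # pip install elasticsearch


def transform(record: dict) -> dict:
    """Normalize one raw record into the shape the index expects."""
    return {
        "metric": record.get("metric", "unknown"),
        "value": float(record.get("value", 0.0)),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }


def index_records(es: Elasticsearch, records: Iterable[dict], index: str) -> None:
    """Bulk-index transformed records; helpers.bulk batches the requests."""
    actions = ({"_index": index, "_source": transform(r)} for r in records)
    helpers.bulk(es, actions)


if __name__ == "__main__":
    # Assumed local Elasticsearch instance; point this at the real cluster.
    es = Elasticsearch("http://localhost:9200")
    raw = [
        {"metric": "cpu_load", "value": "0.73"},
        {"metric": "mem_used", "value": "0.41"},
    ]
    index_records(es, raw, index="pipeline-demo")
```

In a fuller pipeline of the kind the role describes, a scheduler or Logstash would feed such a step continuously, and Kibana or Grafana dashboards would visualize the resulting index.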

Cradlepoint

Networking and Telecommunications

Boise

500+ Employees

201 Jobs

Key People

• George Mulhern, CEO
• Jeroen S. van Kooten, Chief Financial Officer
