
I&F Decision Sci Practitioner Analyst

3 - 5 years

5 - 7 Lacs

Posted: 3 months ago | Platform: Naukri


Work Mode

Work from Office

Job Type

Full Time

Job Description

Skill required: Data Science
Designation: I&F Decision Sci Practitioner Analyst
Qualifications: Diploma in CS and Engineering
Years of Experience: 3 to 5 years

What would you do?
Data & AI: In this role, you will work in the interdisciplinary field that applies scientific methods, processes, and systems to extract knowledge and insights from data in various forms, both structured and unstructured.

What are we looking for?

  • Design, construct, and implement scalable data pipelines with PySpark and other relevant technologies
  • Build and maintain data lakes and data warehouses, ensuring data quality, integrity, and availability
  • Process and transform large datasets using PySpark to support analytics and business intelligence initiatives
  • Develop and deploy Docker containers to package and manage data processing applications
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver robust data solutions
  • Monitor and optimize data pipelines for performance, scalability, and reliability
  • Implement data governance and security practices to protect sensitive information
  • Experience with Linux/Unix systems and scripting
  • Experience with version control systems such as Git
  • Knowledge of job scheduling using Informatica ETL and Autosys, as required
  • Implement containerization using Docker and orchestration using Kubernetes for packaging and managing data processing applications
  • Agility for quick learning
  • Commitment to quality
  • Written and verbal communication
  • Adaptable and flexible
  • Ability to work well in a team

Roles and Responsibilities:

  • Design and develop new data pipelines using Databricks notebooks and ADLS Gen2 to ingest and process raw data efficiently, ensuring the reliability and scalability of the pipeline.
  • Unify the process from data gathering to model creation using Databricks notebooks, and deploy instantly to production.
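The Docker packaging requirement above could look like the following minimal Dockerfile; the entrypoint script name (`etl_job.py`) and the pinned PySpark version are illustrative assumptions, not part of the posting.

```dockerfile
# Hypothetical Dockerfile packaging a PySpark batch job (illustrative sketch)
FROM python:3.11-slim

# Spark needs a Java runtime
RUN apt-get update && apt-get install -y --no-install-recommends default-jre-headless \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Version pin is an assumption; align with your cluster's Spark version
RUN pip install --no-cache-dir pyspark==3.5.1

# etl_job.py is a placeholder name for the pipeline entrypoint
COPY etl_job.py .

ENTRYPOINT ["python", "etl_job.py"]
```

Building and running such an image (`docker build -t etl-job . && docker run etl-job`) gives a self-contained, reproducible unit that an orchestrator like Kubernetes can schedule.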
  • Integrate with a wide variety of data stores and services such as Azure SQL Data Warehouse, Azure Cosmos DB, Azure Data Lake Store, Azure Blob Storage, Azure Event Hubs, Azure IoT Hub, and Azure Data Factory.
  • Use Databricks and Delta/Parquet tables to optimize the performance of both new and existing data processing pipelines, reducing job run time and improving efficiency.
  • Maintain the data platform with a focus on process monitoring, troubleshooting, and data readiness, ensuring high-quality data for regular reporting and system optimization.
  • Work with other data engineers to design and implement enhancements to the overall data platform, improving functionality and performance.
  • Work independently on end-to-end implementation of data processing pipelines, from development through testing to deployment, using Databricks workflows.
  • Be proficient in PySpark operations to extract, transform, and load data from/to Azure Delta Lake, creating reports that support business requests and meet multiple priorities on time.

Qualifications: Diploma in CS and Engineering
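The extract-transform-load flow the responsibilities describe can be sketched in plain Python (PySpark itself is omitted so the example stays self-contained); the function names, column names, and FX rate are illustrative assumptions, and in the real role each stage would be a PySpark DataFrame operation against Azure Delta Lake.

```python
# Toy extract-transform-load flow mirroring the pipeline stages in the posting.

def extract(rows):
    """'Read' raw records; stands in for spark.read on ADLS Gen2."""
    return list(rows)

def transform(rows):
    """Drop records failing a quality check and derive a field;
    stands in for PySpark DataFrame filter/withColumn operations."""
    cleaned = [r for r in rows if r.get("amount") is not None]
    for r in cleaned:
        r["amount_eur"] = round(r["amount"] * 0.92, 2)  # assumed FX rate
    return cleaned

def load(rows, sink):
    """Append to a sink; stands in for a Delta table append write."""
    sink.extend(rows)
    return len(rows)

raw = [{"id": 1, "amount": 10.0},
       {"id": 2, "amount": None},   # fails the quality filter
       {"id": 3, "amount": 5.5}]
table = []
written = load(transform(extract(raw)), table)
print(written)  # 2 rows survive the quality filter
```

The same shape scales to Spark: `extract` becomes a distributed read, `transform` a chain of lazy DataFrame operations, and `load` a Delta write, with monitoring wrapped around each stage.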


Accenture

Professional Services

Dublin

600,000+ Employees

36,723 Jobs

Key People

  • Julie Sweet, Chairman & Chief Executive Officer
  • KC Choi, Global Lead for Technology & Chief Operating Officer
