Experience: 14 - 15 years
Salary: 14 - 15 Lacs
Posted: 5 days ago
On-site
Full Time
Candidates will:
- Work on cutting-edge cloud technologies, AI/ML, and data-driven solutions as part of a dynamic and innovative team driving digital transformation.
- Lead high-impact Agile initiatives with top talent in the industry.
- Have the opportunity to grow and implement Agile at an enterprise level.
- Be offered competitive compensation, a flexible work culture, and learning opportunities.

Roles and Responsibilities
- Create the product roadmap and project plan.
- Design, develop, and maintain scalable ETL pipelines using Azure services to process, transform, and load large datasets into cloud platforms (see the illustrative sketch at the end of this description).
- Collaborate with cross-functional teams, including data architects, analysts, and business stakeholders, to gather data requirements and deliver efficient data solutions.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure.
- Work with data scientists, architects, and analysts to understand data needs and create effective data workflows.
- Exposure to Snowflake warehouse.
- Big Data engineering with a solid background in the wider Hadoop ecosystem and real-time analytics tools, including PySpark/Scala-Spark, Hive, Hadoop CLI, MapReduce, Storm, Kafka, and Lambda architecture.
- Implement data validation and cleansing techniques.
- Improve the scalability, efficiency, and cost-effectiveness of data pipelines.
- Experience in designing and hands-on development of cloud-based analytics solutions.
- Expert-level understanding of Azure Data Factory, Azure Data Lake, Snowflake, and PySpark is required.
- Good to have full-stack development experience with Java and JavaScript/CSS/HTML; knowledge of ReactJS/Angular is a plus.
- Design and build data pipelines using API ingestion and streaming ingestion methods.
- Unix/Linux expertise; comfortable with the Linux operating system and shell scripting.
- Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is desirable.
- PL/SQL and RDBMS background with Oracle/MySQL.
- Comfortable with microservices, CI/CD, Docker, and Kubernetes.
- Strong experience with common Data Vault data warehouse modelling principles.
- Create/modify Docker images and deploy them via Kubernetes.

Additional Skills Required
The ideal candidate should have at least 14+ years of experience in IT, in addition to the following:
- 10+ years of extensive development experience using Snowflake or a similar data warehouse technology.
- Working experience with dbt and other technologies of the modern data stack, such as Snowflake, Azure, Databricks, and Python.
- Experience with agile processes such as Scrum.
- Extensive experience in writing advanced SQL statements and performance tuning.
- Experience in data ingestion techniques using custom or SaaS tools.
- Experience in data modelling and the ability to optimize existing and new data models.
- Experience in data mining, data warehouse solutions, and ETL, and in using databases in a business environment with large-scale, complex datasets.

Technical Qualifications (Preferred)
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Experience in the high-tech, software, or telecom industries is a plus.
- Strong analytical skills to translate insights into impactful product initiatives.
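For illustration only, below is a minimal PySpark sketch of the kind of ETL work described above: reading raw data from Azure Data Lake Storage, applying basic validation and cleansing, and loading the result into Snowflake. The storage path, table names, and Snowflake connection options are placeholders, and it assumes the Snowflake Spark connector is available on the cluster; it is not part of the original posting's requirements.

# Minimal PySpark ETL sketch: ADLS Gen2 -> validate/cleanse -> Snowflake.
# All account names, containers, and credentials below are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest raw Parquet files from Azure Data Lake Storage Gen2 (path is a placeholder).
raw = spark.read.parquet("abfss://raw@examplestorage.dfs.core.windows.net/orders/")

# Basic validation and cleansing: drop rows missing keys, deduplicate, normalise types.
clean = (
    raw.dropna(subset=["order_id", "customer_id"])
       .dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Load into Snowflake via the Snowflake Spark connector (connection options are placeholders).
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "ETL_WH",
}
(clean.write
      .format("net.snowflake.spark.snowflake")
      .options(**sf_options)
      .option("dbtable", "ORDERS_STG")
      .mode("append")
      .save())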
Att Communication Services
Location: Bengaluru / Bangalore, Karnataka, India
Compensation: 14.0 - 15.0 Lacs P.A.