Role & responsibilities
Senior Snowflake Developer / 006IN37410

- Design, develop, and maintain data pipelines using Snowflake, DBT, and cloud services (AWS/Azure/GCP).
- Implement data modeling, schema design, and performance tuning in Snowflake.
- Build and optimize ETL/ELT workflows for structured and semi-structured data.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure data quality, governance, and security compliance across the data ecosystem.
- Automate workflows using Snowflake features (Tasks, Streams, Procedures).
- Troubleshoot and optimize existing data workflows for efficiency and reliability.
- Support migration of legacy data warehouses to Snowflake.
- Stay up to date with the latest industry trends and Snowflake features to continuously improve infrastructure.

Required Skills & Qualifications
- Strong expertise in the Snowflake Cloud Data Platform.
- Proficiency in SQL and data warehousing concepts.
- Experience with ETL/ELT tools (DBT, Informatica, Talend, Matillion, Fivetran).
- Knowledge of cloud platforms (AWS, Azure, GCP).
- Familiarity with Python/Scala/Java for scripting and automation.
- Strong problem-solving skills and attention to detail.
- Bachelor's degree in Computer Science, IT, or a related field.

Preferred Experience
- Exposure to big data technologies (Spark, Hadoop).
- Experience with BI tools (Tableau, Power BI, Looker).
- Knowledge of CI/CD pipelines and DevOps practices.
- Prior experience in data migration projects to Snowflake.

Must-Have Skills
- Snowflake
- Data Vault

Certifications
- Data Vault Practitioner
- Snowflake SnowPro Core
Job Title: Senior Data Engineer (Databricks)
Experience: 6 - 8 years
Location: Hyderabad only
Timings: 1 PM to 10 PM
Work mode: 5 days in office

Requirements:
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.
- 6+ years of development experience with Spark (PySpark), Python, and SQL.
- Extensive knowledge of building data pipelines.
- Hands-on experience with Databricks development.
- Strong experience developing on Linux OS.
- Experience with scheduling and orchestration (e.g. Databricks Workflows, Airflow, Prefect, Control-M).
- Solid understanding of distributed systems, data structures, and design principles.
- Agile development methodologies (e.g. SAFe, Kanban, Scrum).
- Comfortable communicating with teams via showcases/demos.
- 2+ years of experience with Git.
- 2+ years of experience with CI/CD (e.g. Azure Pipelines).
- Experience with streaming technologies, such as Kafka and Spark.
- Experience building applications on Docker and Kubernetes.
- Cloud experience (e.g. Azure, Google Cloud).

Responsibilities:
- Play a key role in establishing and implementing migration patterns for the Data Lake Modernization project.
- Actively migrate use cases from our on-premises Data Lake to Databricks on GCP.
- Collaborate with Product Management and business partners to understand use case requirements and reporting.
- Adhere to internal development best practices and lifecycle (e.g. testing, code reviews, CI/CD, documentation).
- Document and showcase feature designs/workflows.
- Participate in team meetings and discussions around product development.
- Stay up to date on the latest industry trends and design patterns.
Role & responsibilities
SAP ABAP HANA Developer

Preferred candidate profile