5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You are a highly skilled Data Modeler with expertise in Iceberg and Snowflake, responsible for designing and optimizing data models for scalable and efficient data architectures. Working closely with cross-functional teams, you ensure data integrity, consistency, and performance across platforms.

Key responsibilities:
- Design and implement robust data models tailored to business and technical requirements.
- Build scalable, high-performance data architectures using Starburst, Iceberg, and Snowflake (see the sketch below).
- Optimize query performance and ensure efficient data storage strategies.
- Collaborate with data engineering and BI teams to define data requirements and align them with business objectives.
- Conduct data profiling, analysis, and quality assessments to maintain data accuracy and reliability.
- Document and maintain data lineage and governance processes.
- Stay current with emerging technologies and industry best practices for data modeling and analytics.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 5+ years of experience in data modeling, data architecture, and database design.
- Hands-on expertise with the Starburst, Iceberg, and Snowflake platforms.
- Strong SQL skills and experience with ETL/ELT workflows.
- Familiarity with data lakehouse architecture and modern data stack principles.
- Knowledge of data governance, security, and compliance practices.
- Excellent problem-solving and communication skills.

Preferred Skills:
- Experience with BI and analytics tools such as Tableau, Qlik Sense, or Power BI.
- Knowledge of cloud platforms such as AWS, Azure, or GCP.
- Knowledge of Hadoop.
- Familiarity with data virtualization and federation tools.
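The sketch referenced above: a minimal PySpark example of the kind of Iceberg table design this posting describes, a partitioned table whose layout supports efficient pruning. It assumes a Spark session already configured with an Iceberg catalog named `lake`; the catalog, schema, table, and column names are hypothetical.

```python
# Minimal sketch: defining a partitioned Iceberg table and querying it.
# Assumes a Spark session configured with an Iceberg catalog named "lake";
# all object names here are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-modeling").getOrCreate()

# Hidden partitioning on the event date keeps the model simple for readers:
# they filter on event_ts and Iceberg prunes partitions for them, with no
# explicit date column to maintain.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.analytics.orders (
        order_id    BIGINT,
        customer_id BIGINT,
        amount      DECIMAL(12, 2),
        event_ts    TIMESTAMP
    )
    USING iceberg
    PARTITIONED BY (days(event_ts))
""")

# A filter on the partition source column lets Iceberg skip whole data files.
recent = spark.sql("""
    SELECT customer_id, SUM(amount) AS total
    FROM lake.analytics.orders
    WHERE event_ts >= DATE '2024-01-01'
    GROUP BY customer_id
""")
recent.show()
```

Hidden partitioning (the `days(event_ts)` transform) is one common lever for the query-performance and storage-efficiency work this role calls for.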
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
Chandigarh
On-site
As a Data Architect with over 6 years of experience, you will design and implement modern data lakehouse architectures on cloud platforms such as AWS, Azure, or GCP. Your primary focus will be defining data modeling, schema evolution, partitioning, and governance strategies to ensure high-performance and secure data access.

You will own the technical roadmap for scalable data platform solutions, ensuring alignment with enterprise needs and future growth, and you will provide architectural guidance and conduct code/design reviews across data engineering teams to maintain high standards of quality.

Your responsibilities include building and maintaining reliable, high-throughput data pipelines for the ingestion, transformation, and integration of structured, semi-structured, and unstructured data (a sketch of such an ingestion step follows this posting). You should have a solid understanding of data warehousing concepts, ETL/ELT pipelines, and data modeling, along with experience using Apache Spark (PySpark/Scala), Hive, dbt, and SQL for large-scale data transformation. You will design ETL/ELT workflows using orchestration tools such as Apache Airflow, Temporal, or Apache NiFi.

You will also lead and mentor a team of data engineers, providing guidance on code quality, design principles, and best practices. As a subject matter expert in data architecture, you will collaborate with DevOps, data scientists, product owners, and business analysts to understand data requirements and deliver solutions that meet their needs.
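The sketch referenced above: a minimal PySpark ingestion step for a lakehouse, flattening semi-structured JSON into partitioned columnar output. The bucket paths, column names, and nesting are hypothetical.

```python
# Minimal sketch of a lakehouse ingestion step: semi-structured JSON in,
# partitioned columnar data out. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lakehouse-ingest").getOrCreate()

# Semi-structured input: nested JSON events landed by an upstream producer.
raw = spark.read.json("s3://example-bucket/raw/events/")

# Flatten and standardize before the data reaches the curated zone.
curated = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .withColumn("user_id", F.col("payload.user.id"))
       .dropDuplicates(["event_id"])
)

# Partitioning by event_date supports pruning in downstream queries.
(curated.write
    .mode("append")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/events/"))
```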
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
We are looking for a GCP Cloud Engineer for a position based in Pune. As a GCP Data Engineer, you will design, implement, and optimize data solutions on Google Cloud Platform. Your expertise in GCP services, solution design, and programming will be crucial for developing scalable and efficient cloud solutions.

Key responsibilities:
- Design and implement GCP-based data solutions following best practices.
- Develop workflows and pipelines using Cloud Composer and Apache Airflow (see the sketch below).
- Build and manage data processing clusters using Dataproc.
- Work with GCP services such as Cloud Functions, Cloud Run, and Cloud Storage.
- Integrate multiple data sources through ETL/ELT workflows.
- Write clean, efficient, and scalable code in Python, Java, or similar languages.
- Apply logical problem-solving skills to address business challenges.
- Collaborate with stakeholders to design end-to-end GCP solution architectures.

To succeed in this role, you should have hands-on experience with Dataproc, Cloud Composer, Cloud Functions, and Cloud Run; strong programming skills in Python, Java, or similar languages; a good understanding of GCP architecture; and experience setting task dependencies in Airflow DAGs. Logical and analytical thinking, along with strong communication and documentation skills, is essential for cross-functional collaboration.

Preferred qualifications include a GCP Professional Data Engineer or Architect certification, experience with data lake and data warehouse solutions on GCP (e.g., BigQuery, Dataflow), and familiarity with CI/CD pipelines for GCP-based deployments.
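The sketch referenced above: a minimal Airflow DAG with explicit task dependencies, the pattern Cloud Composer executes. The DAG id, schedule, and task payloads are placeholders; `schedule` is the Airflow 2.4+ parameter name (older versions use `schedule_interval`).

```python
# Minimal sketch of an Airflow DAG with explicit task dependencies.
# All ids and commands are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="gcp_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extract")
    transform = BashOperator(task_id="transform", bash_command="echo transform")
    load = BashOperator(task_id="load", bash_command="echo load")
    notify = BashOperator(task_id="notify", bash_command="echo done")

    # ">>" declares task dependencies: extract runs first, then transform,
    # then load and notify fan out in parallel.
    extract >> transform >> [load, notify]
```

In a real Composer deployment the BashOperators would typically be replaced by GCP provider operators (for example, Dataproc job submission), but the dependency wiring is the same.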
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You are an experienced IICS (Informatica Intelligent Cloud Services) Developer with a strong background in the IICS platform, in-depth knowledge of Snowflake, and a track record of creating and managing integrations across diverse systems and databases. Your role involves building cloud-based integration solutions, ensuring seamless data flow between platforms, and optimizing performance for large-scale data processes.

Your primary responsibilities include designing, developing, and implementing data integration solutions using IICS. You will work extensively with Snowflake, handling data loading, transformation, and querying (a load sketch follows this posting), and will build, monitor, and maintain efficient data pipelines between cloud-based systems and Snowflake. Troubleshooting and resolving integration issues within the IICS platform and Snowflake, ensuring optimal data processing performance, and managing data flow among cloud applications and databases are part of your routine work.

You will collaborate with data architects, analysts, and stakeholders to gather requirements and design integration solutions, and you will implement best practices for data governance, security, and data quality within those solutions. You will also conduct unit testing and debugging of IICS data integration tasks and optimize integration workflows to meet performance and scalability requirements.

Your skill set includes hands-on experience with IICS, a strong understanding of Snowflake as a cloud data warehouse, and proficiency in building ETL/ELT workflows. You are adept at integrating varied data sources into Snowflake and at writing complex SQL queries for data transformation and manipulation. Familiarity with data integration techniques and best practices for cloud platforms, experience with RESTful APIs and other integration protocols, the ability to troubleshoot, optimize, and maintain data pipelines, and knowledge of data governance, security principles, and data quality standards round out your profile.

Qualifications: a Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience); at least 5 years of experience in data integration development, with proficiency in Snowflake and cloud-based data solutions; a strong understanding of ETL/ELT processes and integration design principles; and experience working in Agile or similar development methodologies.
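The load sketch referenced above: a minimal Python example of the kind of bulk load and basic validation an IICS pipeline orchestrates against Snowflake, written with the `snowflake-connector-python` package. The connection parameters, stage, and table names are hypothetical.

```python
# Minimal sketch: stage-based bulk load into Snowflake with a row-count check.
# Connection parameters, stage, and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Bulk-load staged files; ON_ERROR controls per-row failure handling.
    cur.execute("""
        COPY INTO STAGING.ORDERS
        FROM @ORDERS_STAGE/daily/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    # Basic validation: confirm rows actually landed.
    cur.execute("SELECT COUNT(*) FROM STAGING.ORDERS")
    print("rows loaded:", cur.fetchone()[0])
finally:
    conn.close()
```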
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
We empower our people to stay resilient and relevant in a constantly changing world. We are looking for individuals who are always seeking creative ways to grow and learn, and who aspire to make a real impact, both now and in the future. If this resonates with you, you would be a valuable addition to our dynamic international team.

We are currently seeking a Senior Software Engineer - Data Engineer (AI Solutions). In this role, you will have the opportunity to:
- Design, build, and maintain data pipelines that serve software developers, data scientists, analysts, and business teams.
- Ensure the pipelines are modular, resilient, and optimized for performance and low maintenance.
- Collaborate with AI/ML teams to support training, inference, and monitoring needs through structured data delivery.
- Implement ETL/ELT workflows for structured, semi-structured, and unstructured data using cloud-native tools.
- Work with large-scale data lakes, streaming platforms, and batch processing systems to ingest and transform data.
- Establish robust data validation, logging, and monitoring strategies to uphold data quality and lineage (see the sketch after this posting).
- Optimize data infrastructure for scalability, cost-efficiency, and observability in cloud-based environments.
- Ensure adherence to governance policies and data access controls across projects.

To excel in this role, you should possess the following qualifications and skills:
- A Bachelor's degree in Computer Science, Information Systems, or a related field.
- A minimum of 4 years of experience designing and deploying scalable data pipelines in cloud environments.
- Proficiency in Python, SQL, and data manipulation tools and frameworks such as Apache Airflow, Spark, dbt, and Pandas.
- Practical experience with data lakes, data warehouses (e.g., Redshift, Snowflake, BigQuery), and streaming platforms (e.g., Kafka, Kinesis).
- A strong understanding of data modeling, schema design, and data transformation patterns.
- Experience with AWS (Glue, S3, Redshift, SageMaker) or Azure (Data Factory, Azure ML Studio, Azure Storage).
- Familiarity with CI/CD for data pipelines and infrastructure-as-code (e.g., Terraform, CloudFormation).
- Exposure to building data solutions that support AI/ML pipelines, including feature stores and real-time data ingestion.
- Understanding of observability, data versioning, and pipeline testing tools.
- Experience working with diverse stakeholders, gathering data requirements, and supporting iterative development cycles.
- Background or familiarity with the Power, Energy, or Electrification sector is advantageous.
- Knowledge of security best practices and data compliance policies for enterprise-grade systems.

This position is based in Bangalore, offering you the opportunity to collaborate with teams that impact entire cities and countries and shape the future. Siemens is a global organization of over 312,000 people across more than 200 countries. We are committed to equality and encourage applications from diverse backgrounds that mirror the communities we serve. Employment decisions at Siemens are based on qualifications, merit, and business requirements. Join us with your curiosity and creativity to help shape a better tomorrow.

Learn more about Siemens careers at: www.siemens.com/careers
Discover the digital world of Siemens here: www.siemens.com/careers/digitalminds
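The validation sketch referenced above: a minimal pandas gate that checks schema, nulls, and value ranges, logging what it drops before data moves downstream. The column names and thresholds are hypothetical.

```python
# Minimal sketch of a pipeline validation gate with pandas.
# Column names and value ranges are hypothetical.
import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline.validation")

REQUIRED_COLUMNS = {"record_id", "sensor_id", "reading", "recorded_at"}


def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Raise on structural problems; log and drop bad rows."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")

    null_ids = int(df["record_id"].isna().sum())
    if null_ids:
        log.warning("dropping %d rows with null record_id", null_ids)
        df = df.dropna(subset=["record_id"])

    out_of_range = ~df["reading"].between(-50, 150)
    if out_of_range.any():
        log.warning("dropping %d out-of-range readings", int(out_of_range.sum()))
        df = df[~out_of_range]

    return df
```

A gate like this, run at each pipeline stage with its warnings shipped to a monitoring system, is one straightforward way to uphold the data quality and lineage goals listed above.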
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Tech Lead, Data Architecture at Fiserv, you will play a crucial role in our data warehousing strategy and implementation. You will design, develop, and lead the adoption of Snowflake-based solutions, ensuring efficient and secure data systems that drive our business analytics and decision-making.

Collaborating with cross-functional teams, you will define and implement best practices for data modeling, schema design, and query optimization in Snowflake. You will develop and manage ETL/ELT workflows to ingest, transform, and load data into Snowflake from diverse sources such as databases, APIs, flat files, and cloud storage.

You will monitor and tune Snowflake performance, managing caching, clustering, and partitioning to enhance efficiency, and you will analyze and resolve query performance bottlenecks (a triage sketch follows this posting). You will work closely with data analysts, data engineers, and business users to understand reporting and analytic needs, ensuring seamless integration with BI tools such as Power BI. The role also involves collaborating with the DevOps team on automation, deployment, and monitoring, and planning and executing strategies for scaling Snowflake environments as data volume grows. Keeping up to date with emerging trends and technologies in data warehousing and data management is essential, as is providing technical support, troubleshooting, and guidance to users accessing the data warehouse.

To be successful in this role, you must have 8 to 10 years of experience with data management tools such as Snowflake, StreamSets, and Informatica. Experience with monitoring tools such as Dynatrace and Splunk, Kubernetes cluster management, and Linux is required. Familiarity with containerization technologies, cloud services, and CI/CD pipelines, as well as banking or financial services experience, would be advantageous.

Thank you for considering employment with Fiserv. To apply, please use your legal name, complete the step-by-step profile, and attach your resume. Fiserv is committed to diversity and inclusion and does not accept resume submissions from agencies outside of existing agreements. Beware of fraudulent job postings not affiliated with Fiserv, and protect your personal information and financial security.
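The triage sketch referenced above: a minimal Python pass that surfaces the slowest recent queries from Snowflake's `SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY` view and then inspects clustering on a hot table with `SYSTEM$CLUSTERING_INFORMATION`. Connection details and the table name are hypothetical.

```python
# Minimal sketch of a Snowflake performance triage pass from Python.
# Connection details and the inspected table are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account", user="example_user",
    password="example_password", warehouse="ADMIN_WH",
)
cur = conn.cursor()

# Elapsed time and bytes scanned are the usual first clues for missing
# pruning or an undersized warehouse.
cur.execute("""
    SELECT query_id, total_elapsed_time / 1000 AS seconds, bytes_scanned
    FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
    WHERE start_time > DATEADD(day, -1, CURRENT_TIMESTAMP())
    ORDER BY total_elapsed_time DESC
    LIMIT 10
""")
for query_id, seconds, bytes_scanned in cur.fetchall():
    print(query_id, round(seconds, 1), bytes_scanned)

# Poor clustering depth on the filter key often explains full scans.
cur.execute(
    "SELECT SYSTEM$CLUSTERING_INFORMATION("
    "'ANALYTICS.PUBLIC.TRANSACTIONS', '(txn_date)')"
)
print(cur.fetchone()[0])
conn.close()
```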
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer at our organization, you will play a crucial role on our data team, using your expertise in Python and PySpark to design, develop, and maintain scalable data pipelines and infrastructure that power our analytics and machine learning initiatives.

With a minimum of 5 years of experience in data engineering or a related field, you will design and implement data pipelines using PySpark and Python, develop ETL/ELT workflows for data ingestion, transformation, and loading, and optimize data processing jobs for performance and cost-efficiency in distributed environments (see the sketch after this posting). Collaboration with data scientists, analysts, and business stakeholders to understand data requirements is a key aspect of the role.

Your daily work includes ensuring data quality, integrity, and governance across all pipelines, and monitoring, troubleshooting, and proactively resolving issues in production data workflows. You will leverage Azure for data storage, processing, and orchestration. Strong proficiency in Python for data manipulation and scripting, hands-on experience with PySpark and distributed data processing, and a solid understanding of SQL and relational databases are essential. Familiarity with data modeling, data warehousing, and performance tuning, along with experience using version control systems such as Git and CI/CD pipelines, will help you excel in this role.

Join our growing data team and contribute to transforming businesses globally through next-generation technology.

About Mphasis: Mphasis applies cutting-edge technology to drive global business transformations. With a customer-centric approach, Mphasis focuses on providing hyper-personalized digital experiences through cloud and cognitive technologies. The Mphasis Service Transformation approach helps businesses adapt to a changing world by applying digital technologies across legacy environments. The organization's core reference architectures, tools, and domain expertise enable innovation and speed, leading to strong client relationships. Mphasis is an Equal Opportunity Employer committed to providing reasonable accommodations for individuals with disabilities. If you require assistance due to a disability when searching and applying for career opportunities, please contact us at accomodationrequest@mphasis.com with your contact details and the nature of your request.
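The sketch referenced above: a minimal PySpark job of the shape this role describes, reading from Azure storage, joining a small dimension with a broadcast hint to avoid a shuffle, and writing partitioned output with a bounded file count. The storage paths and column names are hypothetical, and the snippet assumes storage credentials are already configured on the cluster.

```python
# Minimal sketch of a PySpark ETL job: ingest, clean, join, write.
# Paths and column names are hypothetical; storage auth is assumed configured.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

orders = spark.read.parquet(
    "abfss://raw@exampleaccount.dfs.core.windows.net/orders/")
customers = spark.read.parquet(
    "abfss://raw@exampleaccount.dfs.core.windows.net/customers/")

enriched = (
    orders.filter(F.col("status") == "COMPLETE")
          # Broadcasting the small dimension avoids a shuffle-heavy join.
          .join(F.broadcast(customers), "customer_id")
          .withColumn("order_date", F.to_date("order_ts"))
)

# coalesce keeps output file counts (and downstream costs) under control.
(enriched.coalesce(32)
    .write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("abfss://curated@exampleaccount.dfs.core.windows.net/orders/"))
```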
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
As an experienced IICS Developer, you will support a critical data migration project from Oracle to Snowflake. This is a remote opportunity requiring night-shift hours to align with the U.S. team. Your primary focus will be developing and optimizing ETL/ELT workflows, collaborating with architects and DBAs on schema conversion, and ensuring data quality, consistency, and validation throughout the migration (a validation sketch follows this posting).

To excel in this role, you must have strong hands-on experience with IICS (Informatica Intelligent Cloud Services), a solid background in Oracle databases (including SQL, PL/SQL, and data modeling), and working knowledge of Snowflake, specifically data staging, architecture, and data loading. You will build mappings, tasks, and parameter files in IICS and apply an understanding of data pipeline performance tuning to improve efficiency.

You will also implement error handling, performance monitoring, and scheduling to support the migration, and provide assistance during the go-live phase and post-migration stabilization to ensure a seamless transition.

This position offers flexible engagement as either a contract or full-time role, based on availability and fit. The shift timings are from 7:30 PM IST to 1:30 AM EST, allowing effective collaboration with the U.S. team.
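The validation sketch referenced above: a minimal Python reconciliation that compares per-table row counts between the Oracle source (via `python-oracledb`) and the Snowflake target. Connection parameters, the schema, and the table list are hypothetical, and row counts are only a first-pass check; column-level checksums would follow in practice.

```python
# Minimal sketch of post-migration validation: per-table row-count
# reconciliation between Oracle and Snowflake. All names are hypothetical.
import oracledb
import snowflake.connector

TABLES = ["CUSTOMERS", "ORDERS", "PAYMENTS"]

ora = oracledb.connect(user="example_user", password="example_password",
                       dsn="example_host/example_service")
snow = snowflake.connector.connect(account="example_account",
                                   user="example_user",
                                   password="example_password",
                                   database="MIGRATED", schema="PUBLIC")

for table in TABLES:
    with ora.cursor() as oc:
        oc.execute(f"SELECT COUNT(*) FROM APP_SCHEMA.{table}")
        source_count = oc.fetchone()[0]

    sc = snow.cursor()
    sc.execute(f"SELECT COUNT(*) FROM {table}")
    target_count = sc.fetchone()[0]

    status = "OK" if source_count == target_count else "MISMATCH"
    print(f"{table}: oracle={source_count} snowflake={target_count} {status}")

ora.close()
snow.close()
```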
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer at our company, you will play a crucial role on our data team, designing, building, and maintaining scalable data pipelines and infrastructure with your expertise in Python and PySpark. Focused on supporting our analytics and machine learning initiatives, you will collaborate with stakeholders to ensure data quality, integrity, and governance across all pipelines.

You should have at least 5 years of experience in data engineering or a related field. Your responsibilities include developing ETL/ELT workflows, optimizing data processing jobs for performance and cost-efficiency, and monitoring production data workflows to proactively resolve any issues that arise (see the sketch after this posting). Strong proficiency in Python, hands-on experience with PySpark, and a solid understanding of SQL and relational databases are essential.

You will leverage Azure for data storage, processing, and orchestration, and should be familiar with data modeling, data warehousing, and performance tuning. Experience with version control tools such as Git and CI/CD pipelines will help ensure efficient workflow management.

Joining our team means being part of a company that applies next-generation technology to help enterprises globally transform their businesses. At Mphasis, we prioritize customer centricity and embrace a Front2Back Transformation approach that leverages cloud and cognitive technologies to deliver hyper-personalized digital experiences. Our commitment to innovation, speed, and specialization enables us to build strong relationships with our clients.

Mphasis is an Equal Opportunity Employer dedicated to providing reasonable accommodations to individuals with disabilities. If you require assistance or accommodation to pursue a career opportunity with us, please contact us at accomodationrequest@mphasis.com with your contact information and the nature of your request.
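The sketch referenced above: a minimal monitoring wrapper that runs a pipeline step with structured logging and bounded retries, so transient failures are retried and persistent ones surface to the scheduler. The step function and retry policy are hypothetical.

```python
# Minimal sketch of a monitored pipeline step with bounded retries.
# The step callable and retry policy are hypothetical.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline.monitor")


def run_with_retries(step, name: str, attempts: int = 3,
                     backoff_s: float = 30.0):
    """Run a pipeline step, retrying transient failures with backoff."""
    for attempt in range(1, attempts + 1):
        started = time.monotonic()
        try:
            result = step()
            log.info("%s succeeded in %.1fs (attempt %d)",
                     name, time.monotonic() - started, attempt)
            return result
        except Exception:
            log.exception("%s failed (attempt %d/%d)", name, attempt, attempts)
            if attempt == attempts:
                raise  # escalate to the scheduler / alerting
            time.sleep(backoff_s * attempt)

# Usage: run_with_retries(lambda: load_daily_orders(), "load_daily_orders"),
# where load_daily_orders is a hypothetical pipeline step.
```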
Posted 1 month ago