Jobs

55 ETL/ELT Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

10.0 - 15.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The Data Architect is responsible for designing, creating, deploying, and managing the organization's data architecture. This involves defining how the data will be stored, consumed, integrated, and managed by different data entities and IT systems, along with any applications utilizing or processing that data.

As the Data Architect, your key responsibilities will include leading and managing the entire Offshore Data Team, designing and implementing effective database solutions and models, examining and identifying database structural necessities, assessing database implementation procedures for compliance with regulations, overseeing data migration, monitoring system performance, recommending solutions for database systems improvement, and educating staff members through training and individual support.

To excel in this role, you must possess proficiency in Azure SQL, Azure Data Factory, Azure Data Lake, Azure Databricks, Azure Synapse, T-SQL, DBT, Snowflake, BigQuery, Databricks, ETL/ELT, PySpark, Data Modeling, Data Warehousing, Data Analytics, Data Visualization, Power BI, Tableau, Agile Scrum, IT Service Management, Client Services, and Techno-Functional Management.

Your expertise should extend to database management, including SQL and NoSQL databases; database design and implementation; data modeling encompassing conceptual, logical, and physical models; data integration from various sources to ensure consistency and quality; data warehousing concepts and solutions; data governance principles like data quality management and compliance; cloud platforms such as AWS, Azure, or Google Cloud; data visualization tools like Tableau and Power BI; programming languages like Python, R, or Java for data processing; strong analytical and problem-solving skills; excellent communication skills; project management abilities; and strong leadership skills to guide and mentor team members.

Overall, as the Data Architect, you will play a crucial role in shaping and managing the data architecture of the organization, ensuring efficient data storage, retrieval, integration, and management processes while leveraging a wide range of technical skills and expertise to drive data initiatives and deliver actionable insights to stakeholders.
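
By way of illustration, below is a minimal sketch of the kind of dimensional (star-schema) model this role would design, expressed as Spark SQL run from PySpark. The table and column names are hypothetical, and a Delta-enabled Spark environment (such as Azure Databricks) is assumed; this is not taken from the posting.

```python
from pyspark.sql import SparkSession

# Minimal star-schema sketch: one dimension and one fact table, plus a join.
# Table and column names are illustrative, not taken from the job posting.
spark = SparkSession.builder.appName("dimensional-model-sketch").getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key BIGINT,
        customer_id  STRING,
        segment      STRING,
        effective_dt DATE
    ) USING DELTA
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_sales (
        sales_key    BIGINT,
        customer_key BIGINT,   -- foreign key to dim_customer
        order_dt     DATE,
        quantity     INT,
        net_amount   DECIMAL(18, 2)
    ) USING DELTA
""")

# Analytical queries then join the fact table to its dimensions:
spark.sql("""
    SELECT d.segment, SUM(f.net_amount) AS revenue
    FROM fact_sales f
    JOIN dim_customer d ON f.customer_key = d.customer_key
    GROUP BY d.segment
""").show()
```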

Posted 21 hours ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be responsible for designing and implementing scalable data models using Snowflake to support business intelligence and analytics solutions. This will involve implementing ETL/ELT solutions with complex business transformations and handling end-to-end data warehousing solutions. Additionally, you will be tasked with migrating data from legacy systems to Snowflake systems and writing complex SQL queries to extract, transform, and load data with a focus on high performance and accuracy. Your role will also include optimizing SnowSQL queries for better processing speeds and integrating Snowflake with 3rd party applications. To excel in this role, you should have a strong understanding of Snowflake architecture, features, and best practices. Experience in using Snowpipe and Snowpark/Streamlit, as well as familiarity with cloud platforms such as AWS, Azure, or GCP and other cloud-based data technologies, will be beneficial. Knowledge of data modeling concepts like star schema, snowflake schema, and data partitioning is essential. Experience with tools like dbt, Matillion, or Airbyte for data transformation and automation is preferred, along with familiarity with Snowflake's Time Travel, Streams, and Tasks features. Proficiency in data pipeline orchestration using tools like Airflow or Prefect, as well as scripting and automation skills in Python or Java, are required. Additionally, experience with data visualization tools like Tableau, Power BI, QlikView/QlikSense, or Looker will be advantageous.,
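
As a rough illustration of the Streams and Tasks features the posting mentions, the sketch below sets up incremental change capture and a scheduled merge using the Snowflake Python connector. The connection parameters, table names, and schedule are placeholders, not details from the posting.

```python
import snowflake.connector

# Sketch of Snowflake change-data-capture with a Stream and a scheduled Task.
# Connection parameters and object names are placeholders.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# Capture inserts/updates on the raw table.
cur.execute("CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw_orders")

# A task that merges captured changes into the curated table every hour.
cur.execute("""
    CREATE OR REPLACE TASK merge_orders_task
      WAREHOUSE = ETL_WH
      SCHEDULE  = '60 MINUTE'
    AS
      MERGE INTO curated_orders t
      USING raw_orders_stream s ON t.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET t.amount = s.amount
      WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
""")
cur.execute("ALTER TASK merge_orders_task RESUME")
conn.close()
```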

Posted 22 hours ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Engineer at our company, you will collaborate with a world-class team to drive the telecom business to its full potential. We are focused on building data products for telecom wireless and wireline business segments, including consumer analytics and telecom network performance. Our projects involve cutting-edge technologies like digital twin to develop analytical platforms and support AI and ML implementations.

Your responsibilities will include working closely with business product owners, data scientists, and system architects to develop strategic data solutions from various sources such as batch, file, and data streams. You will be utilizing your expertise in ETL/ELT processes to integrate structured and unstructured data into our data warehouse and data lake for real-time streaming and batch processing, enabling data-driven insights and analytics for business teams within Verizon.

Key Responsibilities:
- Understanding business requirements and translating them into technical designs.
- Data ingestion, preparation, and transformation.
- Developing data streaming applications.
- Resolving production failures and identifying solutions.
- Working on ETL/ELT development.
- Contributing to DevOps pipelines and understanding the DevOps process.

Qualifications:
- Bachelor's degree or four or more years of work experience.
- Four or more years of relevant work experience.
- Proficiency in Data Warehouse concepts and Data Management lifecycle.
- Experience with Big Data technologies such as GCP, Hadoop, Spark, Composer, DataFlow, and BigQuery.
- Strong skills in complex SQL.
- Hands-on experience with streaming ETL pipelines.
- Expertise in Java programming.
- Familiarity with MemoryStore, Redis, and Spanner.
- Ability to troubleshoot data issues.
- Knowledge of data pipeline and workflow management tools.
- Understanding of Information Systems and their applications in data management.

Preferred Qualifications:
- Three or more years of relevant experience.
- Certification in ETL/ELT development or GCP-Data Engineer.
- Strong attention to detail and accuracy.
- Excellent problem-solving, analytical, and research skills.
- Effective verbal and written communication abilities.
- Experience in presenting to and influencing stakeholders.
- Previous experience in leading a small technical team for project delivery.

If you are passionate about new technologies and enjoy applying your technical expertise to solve business challenges, we encourage you to apply for this exciting opportunity to work on innovative data projects in the telecom industry.
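
To make the BigQuery side of such a pipeline concrete, here is a hedged sketch of a batch load followed by an aggregate query using the google-cloud-bigquery client. The project, bucket, and table names are invented for illustration and are not part of the posting.

```python
from google.cloud import bigquery

# Sketch: load a batch of Parquet files into BigQuery, then run an aggregate query.
# Project, dataset, and bucket names are placeholders.
client = bigquery.Client(project="my-telecom-project")

load_job = client.load_table_from_uri(
    "gs://my-bucket/usage/2024-06-01/*.parquet",
    "my-telecom-project.network.daily_usage",
    job_config=bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.PARQUET),
)
load_job.result()  # wait for the load to finish

query = """
    SELECT region, AVG(latency_ms) AS avg_latency
    FROM `my-telecom-project.network.daily_usage`
    GROUP BY region
    ORDER BY avg_latency DESC
"""
for row in client.query(query).result():
    print(row.region, row.avg_latency)
```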

Posted 23 hours ago

Apply

1.0 - 5.0 years

0 Lacs

Hyderabad, Telangana

On-site

Your journey at Crowe starts here with an opportunity to build a meaningful and rewarding career. At Crowe, you will enjoy real flexibility to balance work with life moments while being trusted to deliver results and make a significant impact. You will be embraced for who you are, your well-being will be cared for, and your career will be nurtured. Equitable access to opportunities for career growth and leadership is available to everyone. With a history spanning over 80 years, delivering excellent service through innovation is ingrained in our DNA across audit, tax, and consulting groups. We continuously invest in innovative ideas like AI-enabled insights and technology-powered solutions to enhance our services. Join us at Crowe and embark on a career where you can contribute to shaping the future of our industry. As a Data Engineer at Crowe, you will play a crucial role in providing integration infrastructure for analytical support and solution development for the broader Enterprise. Leveraging your expertise in API integration, pipelines or notebooks, programming languages (such as Python, Spark, T-SQL), dimensional modeling, and advanced data engineering techniques, you will create and deliver robust solutions and data products. The ideal candidate will possess deep expertise in API integration and configuration, infrastructure development, data modeling and analysis. You will be responsible for designing, developing, and maintaining the Enterprise Analytics Platform to facilitate data-driven decision-making across the organization. Success in this role hinges on a strong interest and passion in data analytics, ETL/ELT best practices, critical thinking, problem-solving, as well as excellent interpersonal, communication, listening, and presentation skills. The Data team at Crowe aims for an unparalleled client experience and will rely on you to promote our success and image firmwide. Qualifications for this position include a Bachelor's degree in computer science, Data Analytics, Data/Information Science, Information Systems, Mathematics, or related fields. You should have 3+ years of experience with SQL and data warehousing concepts supporting Business Intelligence, data analytics, and reporting, along with 2+ years of experience coding in Python, PySpark, and T-SQL (or other programming languages) using Notebooks. Additionally, 2+ years of experience managing projects from inception to execution, 1+ years of experience with Microsoft Power BI (including DAX, Power Query, and M language), and 1+ years of hands-on experience working with Delta Lake or Apache Spark (Fabric or Databricks) are required. Hands-on experience or certification with Microsoft Fabric (preferred DP-600 or DP-700) is also beneficial. Candidates are expected to uphold Crowe's values of Care, Trust, Courage, and Stewardship, which define the organization's ethos. Ethical behavior and integrity are paramount for all individuals at Crowe at all times. Crowe offers a comprehensive benefits package to employees, recognizing that great people are what make a great firm. In an inclusive culture that values diversity, talent is nurtured, and employees have the opportunity to meet regularly with Career Coaches to guide them in their career goals and aspirations. Crowe Horwath IT Services Private Ltd. is a wholly owned subsidiary of Crowe LLP (U.S.A.), a public accounting, consulting, and technology firm with a global presence. 
As an independent member firm of Crowe Global, one of the largest global accounting networks, Crowe LLP is connected with over 200 independent accounting and advisory firms in more than 130 countries worldwide. Please note that Crowe does not accept unsolicited candidates, referrals, or resumes from any third-party entities. Any submissions without a pre-existing agreement will be considered the property of Crowe, free of charge.,
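
For context on the Delta Lake and Apache Spark experience the posting asks for, below is a minimal notebook-style PySpark sketch that cleans a raw feed and lands it in a Delta table. The storage path and table names are hypothetical, and a Fabric or Databricks style Spark session is assumed.

```python
from pyspark.sql import SparkSession, functions as F

# Sketch of a notebook-style transformation landing curated data in Delta.
# Paths and table names are illustrative only.
spark = SparkSession.builder.appName("curated-load").getOrCreate()

raw = spark.read.json("abfss://landing@lake.dfs.core.windows.net/billing/")

curated = (
    raw.dropDuplicates(["invoice_id"])
       .withColumn("invoice_date", F.to_date("invoice_date"))
       .filter(F.col("amount") > 0)
)

(curated.write
        .format("delta")
        .mode("overwrite")
        .saveAsTable("finance.curated_invoices"))
```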

Posted 2 days ago

Apply

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

The position is based at Altimetrik with base locations in Bangalore, Chennai, Pune, Jaipur, Hyderabad, Gurugram. The ideal candidate should be able to join immediately or within 10 days. As a Machine Learning Engineer, your primary responsibility will be to design and implement ML solutions while architecting scalable and efficient systems. You should be proficient in Machine Learning Algorithms, Data Engineering, ETL/ELT processes, data cleaning, preprocessing, EDA, feature engineering, data splitting, and encoding. Additionally, you should have experience in MLOps, including model versioning, training, experimenting, deployment, and monitoring using tools such as Python, Pandas, TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost, LightGBM, Matplotlib, R, Scala, Java, Git, DVC, MLFlow, Kubernetes, Kubeflow, Docker, Containers, CI/CD deployments, Apache Airflow, Databricks, Snowflake, Salesforce, SAP, AWS/Azure/GCP Data Cloud Platforms, AWS SageMaker, Google AI Platform, Azure Machine Learning, model design and optimization, LLMs models (OpenAI, BERT, LLaMA, Gemini, etc.), RDBMS, NoSQL databases, Vector DB, RAG Pipelines, AI Agent Frameworks, AI agent authentication, deployment, AI security and compliance, and Prompt Engineering. Your secondary skills should include cloud computing, data engineering, and DevOps. You will be responsible for designing and developing AI/ML models and algorithms. Collaboration with data scientists and engineers to ensure the scalability and performance of AI/ML systems will be a key part of your role. To be considered for this position, you should have 12-15 years of experience in AI/ML development, strong expertise in AI/ML frameworks and tools, and excellent problem-solving and technical skills.,
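
As one small example of the MLOps tooling listed (MLflow with scikit-learn), the sketch below trains a model and logs parameters, metrics, and the model artifact for versioning. The experiment name, dataset, and hyperparameters are illustrative only, not details from the posting.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Sketch of MLOps-style experiment tracking with MLflow; names are placeholders.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-model-sketch")
with mlflow.start_run():
    model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
    model.fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```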

Posted 2 days ago

Apply

4.0 - 8.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

As a Senior Data Engineer in the Digital Marketing domain with expertise in Qlik Sense, your primary responsibility will be to design and develop scalable data pipelines, work with cloud data warehousing platforms like Snowflake, and enable data visualization and analytics for various teams. You will play a crucial role in supporting and enhancing data engineering practices, leading migration efforts to Snowflake, integrating data from digital marketing platforms, and collaborating with stakeholders to deliver data solutions that align with business objectives. Your role will also involve ensuring data quality, governance, and security, as well as developing and optimizing Qlik Sense dashboards for efficient visualization of marketing and performance data. To be successful in this role, you must have a minimum of 4 years of experience in data engineering, strong proficiency in SQL and Python, hands-on experience with Snowflake or similar platforms, and familiarity with data orchestration tools like Dagster or Apache Airflow. A solid understanding of data modeling, ETL/ELT workflows, and domain-specific data processing in digital marketing will be essential. Additionally, excellent communication, analytical thinking, problem-solving skills, and the ability to work effectively in a remote, cross-functional team environment are key attributes for this position. If you are passionate about leveraging data to drive business insights and have a proven track record in designing and developing robust data pipelines, this role offers an exciting opportunity to contribute to the success of our organization.,
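
To illustrate the kind of orchestration mentioned (Dagster or Apache Airflow), here is a minimal Airflow 2.x DAG sketch for a daily marketing ingestion job. The DAG id, task names, and stubbed functions are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Sketch of a daily orchestration DAG (Airflow 2.x); function bodies are stubs.
def extract_ad_spend(**context):
    ...  # pull yesterday's spend from the marketing platform API

def load_to_snowflake(**context):
    ...  # stage and merge the extract into the warehouse

with DAG(
    dag_id="marketing_spend_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_ad_spend", python_callable=extract_ad_spend)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)
    extract >> load
```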

Posted 3 days ago

Apply

3.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

You have over 10 years of hands-on experience within a data engineering organization, with a specific focus on Informatica IICS (Informatica Intelligent Cloud Services) for at least 5 years. Additionally, you have a minimum of 5 years of experience working with Cloud Data Warehouse Platforms such as Snowflake. Your expertise includes utilizing tools like Spark, Athena, AWS Glue, Python/PySpark, etc. Your role involves developing architectures to transition from legacy to modern data platforms. Previous exposure to ETL/ELT tools, particularly Informatica Power Center in Data Warehouse environments, is essential. You are well-versed in agile software development processes and are familiar with performance metric tools. Your work showcases a keen customer focus and a commitment to delivering high-quality results. You possess strong analytical and problem-solving skills, enabling you to devise innovative and effective solutions. Your ability to operate at a conceptual level and reconcile differing perspectives is a valuable asset. You excel in setting priorities, driving your learning curve, and sharing domain knowledge. Your interpersonal, leadership, and communication skills are exemplary.,

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Delhi

On-site

We are looking for a highly motivated and enthusiastic Senior Data Scientist with 5-8 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation. As a Senior Data Scientist, your key responsibilities will include developing and implementing machine learning models and algorithms, working closely with project stakeholders to understand requirements and translate them into deliverables, utilizing statistical and machine learning techniques to analyze and interpret complex data sets, staying updated with the latest advancements in AI/ML technologies and methodologies, and collaborating with cross-functional teams to support various AI/ML initiatives. To qualify for this position, you should have a Bachelor's degree in Computer Science, Data Science, or a related field, as well as a strong understanding of machine learning, deep learning, and Generative AI concepts. Preferred skills for this role include experience in machine learning techniques such as Regression, Classification, Predictive modeling, Clustering, and Deep Learning stack using Python. Additionally, experience with cloud infrastructure for AI/ML on AWS (Sagemaker, Quicksight, Athena, Glue), expertise in building secure data ingestion pipelines for unstructured data (ETL/ELT), proficiency in Python, TypeScript, NodeJS, ReactJS, and frameworks, experience with data visualization tools, knowledge of deep learning frameworks, experience with version control systems, and strong knowledge and experience in Generative AI/LLM based development. Good to have skills for this position include knowledge and experience in building knowledge graphs in production and understanding of multi-agent systems and their applications in complex problem-solving scenarios. Pentair is an Equal Opportunity Employer, and we believe that a diverse workforce contributes different perspectives and creative ideas that enable us to continue to improve every day.,
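
For reference, here is a minimal scikit-learn classification workflow of the sort the posting describes, using a public dataset. The model choice and settings are illustrative, not prescriptive.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Sketch of a baseline classification workflow on a public dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```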

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

You are an experienced Data Engineering expert with a focus on Azure Data Factory (ADF). In this role, you will be responsible for designing and implementing end-to-end data solutions using ADF. Your primary tasks will include collaborating with stakeholders to gather requirements and develop robust pipelines that support analytics and business insights. Your key responsibilities will involve designing and implementing complex data pipelines within Azure Data Factory. You will work on integrating data from various sources such as on-prem databases, APIs, and cloud storage. Additionally, you will develop ETL/ELT strategies to manage both structured and unstructured data effectively. Supporting data transformation, cleansing, and enrichment processes will also be a part of your role. Furthermore, you will be required to implement logging, alerting, and monitoring mechanisms for ADF pipelines. Collaboration with architects and business analysts to comprehend data requirements will be crucial. Writing and optimizing complex SQL queries for performance optimization will also be an essential aspect of your responsibilities. To excel in this role, you should possess strong hands-on experience with ADF, specifically in pipeline orchestration and data flows. Experience with Azure Data Lake, Azure Synapse Analytics, and Blob Storage will be beneficial. Proficiency in SQL and performance tuning is a must. Additionally, knowledge of Azure DevOps and CI/CD practices, along with a good understanding of DataOps and Agile environments, will be advantageous. If you meet these qualifications and are excited about this opportunity, please share your resume with us at karthicc@nallas.com. We look forward to potentially working with you in Coimbatore in this hybrid role that offers a stimulating environment for your Data Engineering expertise.,

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

We are seeking a SQL Expert with approximately 5 years of practical experience in creating, enhancing, and refining intricate SQL queries, stored procedures, and database solutions. Your primary responsibility will involve supporting our data-driven applications to ensure efficient data processing, performance optimization, and robust database design. In this pivotal role, you will collaborate closely with product, engineering, and analytics teams to deliver high-quality, reliable, and scalable data solutions. Your duties will encompass designing and implementing complex SQL queries, stored procedures, views, and functions, optimizing query performance, developing ETL/ELT pipelines for data processing, and collaborating with developers to devise scalable and normalized database schemas. Your role will also entail analyzing and troubleshooting database performance issues, ensuring data integrity and compliance across systems, creating comprehensive documentation of data models and processes, and providing support to reporting and analytics teams by delivering clean and optimized datasets. The ideal candidate should possess at least 5 years of hands-on experience with SQL (SQL Server, MySQL, PostgreSQL, Oracle, or similar databases), a strong grasp of advanced SQL concepts such as window functions, CTEs, indexing, and query optimization, and experience in crafting and optimizing stored procedures, triggers, and functions. Familiarity with data warehousing concepts, dimensional modeling, ETL processes, and cloud databases (AWS RDS, BigQuery, Snowflake, Azure SQL) is advantageous. Additionally, you should have the ability to diagnose and address database performance issues, work effectively with large and complex datasets to ensure high performance, understand relational database design and normalization principles, and be proficient in tools like SSIS, Talend, Apache Airflow, or similar ETL frameworks. Experience with BI/reporting tools (Power BI, Tableau, Looker), scripting languages (Python, Bash) for data manipulation, and knowledge of NoSQL databases or hybrid data architectures are desirable qualifications. Strong communication, documentation skills, and a collaborative mindset are essential for success in this role.,
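
As a small worked example of the advanced SQL concepts cited (CTEs and window functions), the sketch below runs a running-total query against an in-memory SQLite database from Python. The schema and data are made up, and window functions require SQLite 3.25 or later.

```python
import sqlite3

# Sketch: a CTE plus a window function, run against an in-memory SQLite database.
# Schema and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_dt TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('acme', '2024-01-05', 120.0),
        ('acme', '2024-02-11',  80.0),
        ('zen',  '2024-01-20', 200.0),
        ('zen',  '2024-03-02',  50.0);
""")

query = """
WITH monthly AS (
    SELECT customer, substr(order_dt, 1, 7) AS month, SUM(amount) AS total
    FROM orders
    GROUP BY customer, month
)
SELECT customer, month, total,
       SUM(total) OVER (PARTITION BY customer ORDER BY month) AS running_total
FROM monthly
ORDER BY customer, month
"""
for row in conn.execute(query):
    print(row)
```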

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

You will be responsible for leading and managing the entire offshore data engineering team, providing guidance, mentorship, and performance management. Your role will involve fostering a collaborative and high-performing team environment, overseeing resource allocation, project planning, and task prioritization. You will need to conduct regular team meetings and provide progress updates to stakeholders.

Your main focus will be on designing and implementing effective database solutions and data models to store and retrieve company data. This will include examining and identifying database structural necessities by evaluating client operations, applications, and programming requirements. You will also need to assess database implementation procedures to ensure compliance with internal and external regulations.

In addition, you will be responsible for installing and organizing information systems to guarantee company functionality, preparing accurate database design and architecture reports for management and executive teams, and overseeing the migration of data from legacy systems to new solutions. Monitoring system performance, recommending solutions to improve database systems, and designing and implementing scalable and efficient ETL/ELT processes using Azure Data Factory, Databricks, and other relevant tools will also be part of your role.

Furthermore, you will be required to build and maintain data warehouses and data lakes using Azure Synapse, Snowflake, BigQuery, and Azure Data Lake. Developing and optimizing data pipelines for data ingestion, transformation, and loading, implementing data quality checks and validation processes, and supporting data analytics initiatives by providing access to reliable and accurate data will be essential. You will also need to develop data visualizations and dashboards using Power BI and Tableau, collaborate with data scientists and business analysts to translate business requirements into technical solutions, educate staff members through training and individual support, work within an Agile Scrum framework, and utilize IT Service Management best practices. Providing Techno-Functional Management will also be part of your responsibilities.

**Mandatory Skills And Expertise:**
- Azure SQL
- Azure Data Factory
- Azure Data Lake
- Azure Databricks
- Azure Synapse
- Snowflake
- BigQuery
- T-SQL
- DBT
- ETL/ELT
- PySpark
- Data Warehousing
- Data Analytics
- Data Visualization

**Qualifications:**
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Proven experience leading and managing data engineering teams.
- Extensive experience in designing and implementing data solutions on Azure and other cloud platforms.
- Strong understanding of data warehousing, data modeling, and ETL/ELT concepts.
- Proficiency in SQL and scripting languages (Python, PySpark).
- Excellent communication, leadership, and problem-solving skills.
- Proven experience in client-facing roles.
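
As an illustration of programmatic control over the Azure Data Factory pipelines referenced above, here is a hedged sketch that triggers a pipeline run and polls its status with the Azure SDK for Python. The subscription, resource group, factory, and pipeline names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Sketch: trigger and inspect an ADF pipeline run; all names are placeholders.
credential = DefaultAzureCredential()
adf = DataFactoryManagementClient(credential, subscription_id="<subscription-id>")

run = adf.pipelines.create_run(
    resource_group_name="rg-data-platform",
    factory_name="adf-offshore",
    pipeline_name="pl_ingest_sales",
    parameters={"load_date": "2024-06-01"},
)

status = adf.pipeline_runs.get("rg-data-platform", "adf-offshore", run.run_id)
print(status.status)  # e.g. Queued / InProgress / Succeeded
```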

Posted 4 days ago

Apply

8.0 - 12.0 years

0 Lacs

Maharashtra

On-site

This role is for one of our clients in the Technology, Information, and Media industry at a Mid-Senior level. With a minimum of 8 years of experience, the location for this full-time position is in Mumbai. About The Role We are seeking an accomplished Assistant Vice President - Data Engineering to spearhead our enterprise data engineering function within the broader Data & Analytics leadership team. The ideal candidate will be a hands-on leader with a strategic mindset, capable of architecting modern data ecosystems, leading high-performing teams, and fostering innovation in a cloud-first, analytics-driven environment. Responsibilities Team Leadership & Vision Lead and mentor a team of data engineers, fostering a culture of quality, collaboration, and innovation while shaping the long-term vision for data engineering. Modern Data Infrastructure Design and implement scalable, high-performance data pipelines for batch and real-time workloads using tools like Databricks, PySpark, and Delta Lake, focusing on data lakehouse and data mesh implementations on modern cloud platforms. ETL/ELT & Data Pipeline Management Drive the development of robust ETL/ELT workflows, ensuring ingestion, transformation, cleansing, and enrichment of data from diverse sources while implementing orchestration and monitoring using tools like Airflow, Azure Data Factory, or Prefect. Data Modeling & SQL Optimization Architect logical and physical data models to support advanced analytics and BI use cases, along with writing and reviewing complex SQL queries for performance efficiency and scalability. Data Quality & Governance Collaborate with governance and compliance teams to implement data quality frameworks, lineage tracking, and access controls to ensure alignment with data privacy regulations and security standards. Cross-Functional Collaboration Act as a strategic partner to various stakeholders, translating data requirements into scalable solutions, and effectively communicating data strategy and progress to both technical and non-technical audiences. Innovation & Continuous Improvement Stay updated on emerging technologies in cloud data platforms, streaming, and AI-powered data ops, leading proof-of-concept initiatives, and driving continuous improvement in engineering workflows and infrastructure. Required Experience & Skills The ideal candidate should have 8+ years of hands-on data engineering experience, including 2+ years in a leadership role, deep expertise in Databricks, PySpark, and big data processing frameworks, advanced SQL proficiency, experience building data pipelines on cloud platforms, knowledge of data lakehouse concepts, and strong communication and leadership skills. Preferred Qualifications A Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or related field, industry certifications in relevant platforms, experience with data mesh, streaming architectures, or lakehouse implementations, and exposure to DataOps practices and data product development frameworks would be advantageous.,

Posted 5 days ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The role of a Senior Data Architect specializing in Google Cloud Platform (GCP) in Chennai, TN (Hybrid) involves engaging with clients and demonstrating thought leadership. You will excel in creating insightful content and delivering it effectively to diverse audiences. In terms of Cloud Architecture & Design, your responsibilities will include understanding customers" data platform, business, and IT priorities to design data solutions that drive business value. You will be tasked with architecting, developing, and implementing robust, scalable, and secure end-to-end data engineering, integration, and warehousing solutions within the GCP platform. Additionally, you will assess and validate non-functional attributes to ensure high levels of performance, security, scalability, maintainability, and reliability. As a Technical Leader, you will guide technical teams in best practices related to cloud adoption, migration, and application modernization. Your role will involve providing thought leadership and insights into emerging GCP services while ensuring the long-term technical viability and optimization of cloud deployments. Stakeholder Collaboration is key in this role, as you will work closely with business leaders, developers, and operations teams to align technological solutions with business goals. Furthermore, you will collaborate with prospective and existing customers to implement POCs/MVPs and guide them through deployment, operationalization, and troubleshooting. Innovation and Continuous Improvement are expected, requiring you to stay updated on the latest advancements in GCP services and tools. You will implement innovative solutions to enhance data processing, storage, and analytics efficiency, leveraging AI and machine learning tools where applicable. Documentation is a critical aspect of the role, requiring you to create comprehensive blueprints, architectural diagrams, technical collaterals, assets, and implementation plans for GCP-based solutions. The qualifications for this role include a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, along with a minimum of 8-10 years of experience in architecture roles, including 3-5 years working with GCP. Technical expertise in GCP data warehousing & engineering services, containerization, API-based microservices architecture, CI/CD pipelines, data modeling, ETL/ELT, data integration, and data warehousing concepts is essential. Proficiency in architectural best practices in cloud, programming skills, and soft skills such as analytical thinking, problem-solving, communication, and leadership are also required. Preferred skills for this position include solution architect and/or data engineer certifications from GCP, experience in BFSI, Healthcare, or Retail domains, familiarity with hybrid cloud environments and multi-cloud strategies, data governance principles, and data visualization tools like Power BI or Tableau.,

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

OneMagnify is a global performance marketing organization operating at the intersection of brand marketing, technology, and analytics. The core offerings of the company accelerate business growth, enhance real-time results, and differentiate clients from their competitors. Collaborating with clients, OneMagnify designs, implements, and manages marketing and brand strategies using analytical and predictive data models to provide valuable customer insights for increased sales conversion.

The Data Engineering team at OneMagnify is dedicated to transforming raw data into actionable insights. As a Databricks Engineer, you will be responsible for architecting, building, and maintaining the data infrastructure on the Databricks Lakehouse Platform. Collaboration with data scientists, analysts, and engineers is crucial to deliver top-notch data solutions that drive the business forward. You should possess a strong engineering approach to address complex technical challenges, demonstrate a commitment to delivering robust and efficient data solutions, and have a deep understanding of cloud-based data technologies and data engineering best practices. Your role will involve developing and implementing scalable data models and data warehousing solutions using Databricks.

Key Responsibilities:
- Architect, develop, and deploy scalable and reliable data infrastructure and pipelines using Databricks and Spark.
- Design and implement data models and data warehousing solutions focusing on performance and scalability.
- Optimize data processing frameworks and infrastructure for maximum efficiency and cost-effectiveness.
- Collaborate with data scientists and analysts to engineer solutions that meet their data needs.
- Implement robust data quality frameworks, monitoring systems, and alerting mechanisms.
- Design, build, and maintain efficient ETL/ELT processes.
- Integrate Databricks with various data sources, systems, and APIs.
- Contribute to defining and implementing data governance, security, and compliance policies.
- Stay updated with the latest advancements in Databricks, cloud data engineering best practices, and related technologies.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related technical field (or equivalent practical experience).
- 5+ years of experience in data engineering or a similar role with a focus on building and maintaining data infrastructure.
- Deep understanding and practical experience with the Databricks Lakehouse Platform and its core engineering aspects.
- Expertise in big data processing frameworks, particularly Apache Spark.
- Strong hands-on experience with programming languages like Python (PySpark) and/or Scala.
- Proficiency in SQL, data warehousing principles, schema design, and performance tuning.
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Understanding of data modeling, ETL/ELT architecture, and data quality engineering principles.
- Strong problem-solving, analytical, and debugging skills.
- Excellent communication and collaboration skills to convey technical concepts to diverse audiences.

Benefits:
- Comprehensive benefits package including Medical Insurance, PF, Gratuity, paid holidays, and more.

OneMagnify fosters a workplace environment where every employee can thrive and achieve their personal best. The company has been consistently recognized as a Top Workplace, Best Workplace, and Cool Workplace in the United States for 10 consecutive years. Recently, OneMagnify was acknowledged as a Top Workplace in India. At OneMagnify, we believe that innovative ideas stem from diverse perspectives. As an equal opportunity employer, we are committed to providing a workplace free of discrimination and intolerance. We actively seek like-minded individuals to join our team and contribute to our mission of moving brands forward through impactful analytics, engaging communications, and innovative technology solutions.
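
To give a flavor of the data quality frameworks mentioned in the responsibilities, below is a minimal PySpark sketch of simple quality gates applied to a Databricks table. The table name and thresholds are assumptions for illustration, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Sketch of simple data-quality gates on a Databricks/Spark table;
# the table name and thresholds are illustrative only.
spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.table("bronze.web_events")

total = df.count()
null_users = df.filter(F.col("user_id").isNull()).count()
dupes = total - df.dropDuplicates(["event_id"]).count()

assert total > 0, "no rows ingested"
assert null_users / total < 0.01, f"too many null user_ids: {null_users}"
assert dupes == 0, f"duplicate event_ids found: {dupes}"

print(f"quality checks passed: {total} rows, {null_users} null user_ids")
```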

Posted 5 days ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Engineering Lead, you will play a crucial role in driving the design, development, and delivery of cutting-edge data solutions. Your responsibilities will include overseeing the development of robust data pipelines, leading a team of 5 to 7 engineers, and architecting innovative solutions tailored to client needs. You will be the primary point of contact for US-based clients, ensuring alignment on project goals and engaging with stakeholders to understand requirements. Additionally, you will design and implement end-to-end data pipelines, lead technical project execution, and manage and mentor a team of data engineers. Your role will involve hands-on development, performance tuning, and troubleshooting complex technical issues within data systems. You will be expected to embrace a consulting mindset, stay updated with emerging data technologies, and drive internal initiatives to improve processes. Your qualifications should include a Bachelor's or Master's degree in Computer Science or a related field, along with 8+ years of experience in data engineering and expertise in programming languages such as Python, Scala, or Java. Strong communication, problem-solving, and analytical skills are essential for success in this role. This position requires working five days a week from the office, with no hybrid or remote options available. If you are a dynamic and accomplished Data Engineering Lead looking to make a significant impact in a fast-paced consulting environment, we invite you to apply for this exciting opportunity.,

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Databricks Architect with 5-8 years of experience, you will be responsible for designing, implementing, and optimizing scalable data architectures leveraging Databricks on public cloud platforms. Your role will involve designing enterprise-grade solutions for batch and real-time data analytics, with hands-on expertise in Databricks, Spark, and cloud-based data services. Your key responsibilities will include designing and developing end-to-end data architectures and pipelines with Databricks, optimizing Databricks performance through efficient data processing, job tuning, and resource management, implementing data security, governance, and best practices, collaborating with stakeholders to deliver integrated data solutions, monitoring the health of the Databricks environment, documenting architectures, and staying current with Databricks and cloud technology best practices. In terms of technical skills, you should have advanced hands-on experience with Databricks deployment, configuration, and administration, expertise in Apache Spark, proficiency in Python and SQL, experience with cloud platforms such as Azure, AWS, or GCP, knowledge of data modeling, exposure to big data tools, familiarity with BI tools, and a strong understanding of data security in cloud environments. To qualify for this role, you should have a bachelor's or master's degree in computer science or a related field, 5-8 years of overall experience in data engineering or architecture, with at least 4+ years working with Databricks and Spark at scale, experience in optimizing and maintaining large data workflows in cloud environments, excellent analytical and communication skills, and preferably a Databricks certification. Desirable attributes for this role include demonstrated leadership abilities, experience in agile development environments, and consulting or customer-facing experience.,

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You will be working as a Data Architect at Niveus Solutions, a dynamic organization focused on utilizing data for business growth and decision-making. Your role will be crucial in designing, building, and maintaining robust data platforms on Azure and GCP.

As a Senior Data Architect, your responsibilities will include:
- Developing and implementing comprehensive data architectures such as data warehouses, data lakes, and data lakehouses on Azure and GCP.
- Designing data models aligned with business requirements to support efficient data analysis and reporting.
- Creating and optimizing ETL/ELT pipelines using tools like Databricks, Azure Data Factory, or GCP Data Fusion.
- Designing scalable data warehouses on Azure Synapse Analytics or GCP BigQuery for enterprise reporting and analytics.
- Implementing data lakehouses on Azure Databricks or GCP Dataproc for unified data management and analytics.
- Utilizing Hadoop components for distributed data processing and analysis.
- Establishing data governance policies to ensure data quality, security, and compliance.
- Writing scripts in Python, SQL, or Scala to automate data tasks and integrate with other systems.
- Demonstrating expertise in Azure and GCP cloud platforms and mentoring junior team members.
- Collaborating with stakeholders, data analysts, and developers to deliver effective data solutions.

Qualifications required for this role:
- Bachelor's degree in Computer Science, Data Science, or a related field.
- 5+ years of experience in data architecture, data warehousing, and data lakehouse implementation.
- Proficiency in Azure and GCP data services, ETL/ELT tools, and Hadoop components.
- Strong scripting skills in Python, SQL, and Scala.
- Experience in data governance and compliance frameworks, and excellent communication skills.

Bonus points for:
- Certifications such as Azure Data Engineer Associate or GCP Data Engineer.
- Experience in real-time data processing, data visualization tools, and cloud-native data platforms.
- Knowledge of machine learning and artificial intelligence concepts.

If you are a passionate data architect with a successful track record in delivering data solutions, we welcome you to apply and be a part of our data-driven journey at Niveus Solutions.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Are you passionate about building scalable data-driven applications and cloud-native solutions? We're looking for a Senior Data Engineer with strong software engineering skills to join our dynamic team. This role blends full-stack development, cloud engineering, and data pipeline expertise to deliver impactful solutions, especially in the healthcare domain.

Responsibilities:
- Design, develop, and maintain robust data pipelines (batch & streaming) using cloud-native technologies (GCP preferred).
- Build and optimize RESTful APIs, WebSockets, and GraphQL endpoints.
- Develop scalable front-end applications using React, Angular, or Vue.js.
- Implement secure, high-performance server-side logic using Python, Node.js, or Java.
- Deploy applications using Docker/Kubernetes on AWS, Azure, or GCP.
- Collaborate with cross-functional teams to deliver enterprise-grade solutions.
- Ensure best practices in CI/CD, automated testing, and cloud security (OAuth, SSL, encryption).
- Work on ETL/ELT workflows, handling structured and semi-structured data (JSON, XML).
- Contribute to healthcare-focused projects with domain-specific insights.

Required Skills & Experience:
- 6+ years in SQL, HTML, CSS, JavaScript, and modern front-end frameworks.
- 6+ years in Python, Node.js, or Java for backend development.
- Strong experience with RESTful APIs, WebSockets, and GraphQL.
- Hands-on with Docker, Kubernetes, and cloud platforms (GCP preferred).
- Familiarity with CI/CD tools like Jenkins and Git.
- Experience in data engineering, ETL/ELT, and cloud-native data processing.
- Knowledge of security best practices for web and cloud applications.
- Healthcare domain experience is a plus.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

The Applications Development Senior Programmer Analyst position at our organization involves collaborating with the Technology team to establish and implement new or updated application systems and programs. Your primary goal in this role will be to contribute to applications systems analysis and programming activities. As an Applications Development Senior Programmer Analyst, your key responsibilities will include developing high-quality, scalable applications using Java and integrating them with Snowflake for efficient data processing and analytics. You will be tasked with building and optimizing ETL/ELT pipelines in Snowflake to ensure efficient data ingestion, transformation, and storage. Additionally, writing clean, maintainable, and efficient Java code while following best practices and coding standards will be essential. You will design and manage Snowflake schemas, tables, views, and stored procedures to support business applications. Integration of Java applications with Snowflake and other systems using APIs, JDBC, or Snowflake connectors will also be part of your responsibilities. Furthermore, optimizing Snowflake queries and Java application performance to handle large-scale data processing is a key aspect of this role. Collaboration with cross-functional teams, including data engineers, analysts, and business stakeholders, to deliver solutions aligned with business requirements will be crucial. You will be expected to debug and resolve issues in Java applications and Snowflake data workflows to ensure minimal downtime. Additionally, creating and maintaining technical documentation for code, data models, and system architecture will be necessary. The required qualifications for this role include having 8+ years of professional experience in software development, with a strong focus on Java and Snowflake. Proficiency in Java SE/EE, Spring Framework (Spring Boot, Spring MVC), RESTful APIs, microservices architecture, and multi-threading is essential. Familiarity with build tools like Maven or Gradle, version control systems like Git, and hands-on experience with the Snowflake cloud data platform are also required. Expertise in designing and optimizing Snowflake data models, writing complex SQL queries, and knowledge of Snowflake features like Snowpipe, tasks, and streams for real-time data processing is expected. Strong understanding of relational databases, data modeling, and performance optimization techniques is crucial. Strong analytical skills and excellent verbal and written communication skills are necessary to collaborate effectively with technical and non-technical stakeholders. This is a full-time position in the Applications Development job family within the Technology group. If you require a reasonable accommodation to use our search tools and/or apply for a career opportunity due to a disability, please review the Accessibility at Citi information. You can also view Citis EEO Policy Statement and the Know Your Rights poster for further details.,

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You will be responsible for leading the design and architecture of highly scalable, robust, and efficient data solutions leveraging Snowflake as the primary data platform. Your role will involve developing and implementing enterprise-level data architecture strategies, blueprints, and roadmaps that align with business objectives and growth. Additionally, you will architect and manage data solutions, optimizing performance for complex analytical workloads and high-volume reporting requirements. Collaboration with cross-functional teams (e.g., analytics, engineering, business stakeholders) will be essential to deeply understand business needs and translate them into well-defined, robust data architectures and technical specifications. As part of your responsibilities, you will design, implement, and optimize end-to-end data pipelines, data warehouses, and data lakes, ensuring efficient, reliable, and automated data ingestion, transformation (ELT/ETL), and loading processes into Snowflake. You will also be expected to develop and maintain advanced SQL queries, stored procedures, and data models within Snowflake for complex data manipulation and analysis, ensuring data quality, consistency, and integrity across all data solutions. In terms of data governance and security, you will develop and enforce best practices for data governance, data security, access control, auditing, and compliance within the cloud-based data environment (Snowflake). Implementing data masking, encryption, and other security measures to protect sensitive data will be a critical aspect of your role. Your responsibilities will also include evaluating, recommending, and integrating third-party tools, technologies, and services to enhance the Snowflake ecosystem, optimize data workflows, and support the overall data strategy. It will be important to stay updated on new Snowflake features, industry trends, and data technologies, recommending their adoption where beneficial. Mentorship and collaboration will be key aspects of your role as you provide technical leadership and mentorship to data engineers and other team members, fostering a culture of best practices and continuous improvement. Effective communication and collaboration across global teams will be essential for success in this position. To qualify for this role, you should have 5-7+ years of progressive experience in data architecture and data engineering roles. Proven expertise in Snowflake, advanced SQL, data warehousing/lake design, ETL/ELT processes, and cloud data platforms will be required. Strong analytical, problem-solving, and critical thinking skills, along with excellent communication abilities, are essential. The ability to work independently, manage multiple priorities, and deliver high-quality results in a remote setting is crucial. Candidates who can overlap working hours until at least 10:00 AM CST (US Central Time) and those who are immediately available or have a very short notice period are strongly preferred. In terms of education, a Bachelor's degree in Computer Science, Engineering, Information Technology, or a related quantitative field is required, while a Master's degree is a plus.,
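
As a concrete example of the data masking controls described, here is a hedged sketch that creates and applies a Snowflake dynamic masking policy via the Python connector. The role, table, and column names are placeholders, not details from the posting.

```python
import snowflake.connector

# Sketch: a dynamic data-masking policy applied to a PII column.
# Connection details and object names are placeholders.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",
    role="SECURITYADMIN", warehouse="ADMIN_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

cur.execute("""
    CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
      CASE WHEN CURRENT_ROLE() IN ('PII_ANALYST') THEN val ELSE '*** MASKED ***' END
""")

cur.execute("""
    ALTER TABLE customers MODIFY COLUMN email
      SET MASKING POLICY email_mask
""")
conn.close()
```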

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our client's challenges of today and tomorrow, informed and validated by science and data, superpowered by creativity and design, all underpinned by technology created with purpose. Your role involves having IT experience with a minimum of 5+ years in creating data warehouses, data lakes, ETL/ELT, data pipelines on cloud. You should have experience in data pipeline implementation with cloud providers such as AWS, Azure, GCP, preferably in the Life Sciences Domain. Experience with cloud storage, cloud database, cloud Data Warehousing, and Data Lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, S3 is essential. You should also be familiar with cloud data integration services for structured, semi-structured, and unstructured data like Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, Dataproc. Good knowledge of Infra capacity sizing, costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs performance and scaling is required. Your profile should demonstrate the ability to contribute to making architectural choices using various cloud services and solution methodologies. Expertise in programming using Python is a must. Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud is essential. Understanding networking, security, design principles, and best practices in the cloud is expected. Knowledge of IoT and real-time streaming would be an added advantage. You will be leading architectural/technical discussions with clients and should possess excellent communication and presentation skills. At Capgemini, we recognize the significance of flexible work arrangements to provide support. Whether it's remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. Our mission is centered on your career growth, offering an array of career growth programs and diverse professions crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI. Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. With a responsible and diverse group of over 340,000 team members in more than 50 countries, Capgemini has a strong heritage of over 55 years. Clients trust Capgemini to unlock the value of technology to address the entire breadth of their business needs, delivering end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by market-leading capabilities in AI, Generative AI, cloud, and data, combined with deep industry expertise and a partner ecosystem.,

Posted 3 weeks ago

Apply

7.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

As a GCP DBT Manager, your primary responsibility will be to collaborate with the team in designing, building, and maintaining data pipelines and transformations using Google Cloud Platform (GCP) and the Data Build Tool (dbt). This role will involve utilizing tools such as BigQuery, Cloud Composer, and Python, and requires strong SQL skills and knowledge of data warehousing concepts. Additionally, you will play a crucial role in ensuring data quality, optimizing performance, and working closely with cross-functional teams.

Your key responsibilities will include:
- Data Pipeline Development: Designing, building, and maintaining ETL/ELT pipelines using dbt and GCP services like BigQuery and Cloud Composer.
- Data Modeling: Creating and managing data models and transformations with dbt to ensure efficient and accurate data consumption for analytics and reporting.
- Data Quality: Developing and maintaining a data quality framework, including automated testing and cross-dataset validation.
- Performance Optimization: Writing and optimizing SQL queries to enhance data processing efficiency within BigQuery.
- Collaboration: Collaborating with data engineers, analysts, scientists, and business stakeholders to deliver effective data solutions.
- Incident Resolution: Providing support for day-to-day incident and ticket resolution related to data pipelines.
- Documentation: Creating and maintaining comprehensive documentation for data pipelines, configurations, and procedures.
- Cloud Platform Expertise: Leveraging GCP services like BigQuery, Cloud Composer, Cloud Functions, etc. for efficient data operations.
- Scripting: Developing and maintaining SQL/Python scripts for data ingestion, transformation, and automation tasks.

Preferred Candidate Profile:
- 7-12 years of experience in data engineering or a related field.
- Strong hands-on experience with Google Cloud Platform (GCP) services, particularly BigQuery.
- Proficiency in using dbt for data transformation, testing, and documentation.
- Advanced SQL skills for data modeling, performance optimization, and querying large datasets.
- Understanding of data warehousing concepts, dimensional modeling, and star schema design.
- Experience with ETL/ELT tools and frameworks, such as Apache Beam, Cloud Dataflow, Data Fusion, or Airflow/Composer.

In this role, you will be at the forefront of data pipeline development and maintenance, ensuring data quality, performance optimization, and effective collaboration across teams to deliver impactful data solutions using GCP and dbt.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You will be joining Birlasoft, a leading organization at the forefront of merging domain expertise, enterprise solutions, and digital technologies to redefine business outcomes. Emphasizing a consultative and design thinking approach, we drive societal progress by empowering customers to operate businesses with unparalleled efficiency and innovation. As part of the esteemed multibillion-dollar CKA Birla Group, Birlasoft, comprising a dedicated team of over 12,500 professionals, is dedicated to upholding the Group's distinguished 162-year legacy. At our foundation, we prioritize Diversity, Equity, and Inclusion (DEI) practices, coupled with Corporate Sustainable Responsibility (CSR) initiatives, demonstrating our dedication to constructing not only businesses but also inclusive and sustainable communities. Come join us in shaping a future where technology seamlessly aligns with purpose.

We are currently looking for a skilled and proactive StreamSets or Denodo Platform Administrator to manage and enhance our enterprise data engineering and analytics platforms. This position requires hands-on expertise in overseeing large-scale Snowflake data warehouses and StreamSets data pipelines, with a focus on robust troubleshooting, automation, and monitoring capabilities. The ideal candidate will ensure platform reliability, performance, security, and compliance while collaborating closely with various teams such as data engineers, DevOps, and support teams. The role will be based in Pune, Hyderabad, Noida, or Bengaluru, and requires a minimum of 5 years of experience.

Key Requirements:
- Bachelor's or Master's degree in Computer Science, IT, or a related field (B.Tech. / MCA preferred).
- Minimum of 3 years of hands-on experience in Snowflake administration.
- 5+ years of experience managing StreamSets pipelines in enterprise-grade environments.
- Strong familiarity with AWS services, particularly S3, IAM, Lambda, and EC2.
- Working knowledge of ServiceNow, Jira, Git, Grafana, and Denodo.
- Understanding of data modeling, ETL/ELT best practices, and modern data platform architectures.
- Experience with DataOps, DevSecOps, and cloud-native deployment principles is advantageous.
- Certification in Snowflake or AWS is highly desirable.

If you possess the required qualifications and are passionate about leveraging your expertise in platform administration to drive impactful business outcomes, we invite you to apply and be part of our dynamic team at Birlasoft.
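
To illustrate the AWS-side monitoring such an administrator might automate, below is a small boto3 sketch that checks an S3 staging prefix for recently arrived files. The bucket, prefix, and freshness window are assumptions, not details from the posting.

```python
from datetime import datetime, timedelta, timezone

import boto3

# Sketch: verify that a pipeline's S3 staging prefix received files recently;
# bucket, prefix, and the 6-hour freshness window are illustrative.
s3 = boto3.client("s3")
cutoff = datetime.now(timezone.utc) - timedelta(hours=6)

resp = s3.list_objects_v2(Bucket="my-staging-bucket", Prefix="streamsets/landing/")
objects = resp.get("Contents", [])
fresh = [o for o in objects if o["LastModified"] >= cutoff]

if not fresh:
    print("ALERT: no new files in the last 6 hours")  # hook into monitoring here
else:
    latest = max(o["LastModified"] for o in fresh)
    print(f"OK: {len(fresh)} fresh objects, latest at {latest}")
```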

Posted 4 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. We are a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth - bringing real positive change in an increasingly virtual world - and it drives us beyond generational gaps and the disruptions of the future. We are looking to hire Hadoop professionals in the following areas:

**POSITION TITLE:** Data Engineer

**SCOPE OF RESPONSIBILITY:**
As part of a global, growing team of data engineers, you will collaborate in a DevOps model to enable the client's Life Science business with cutting-edge technology, leveraging data as an asset to support better decision-making. You will design, develop, test, and support automated end-to-end data pipelines and applications within the Life Sciences data management and analytics platform (Palantir Foundry, Hadoop, and other components). The position requires proficiency in data engineering, distributed computation, and DevOps methodologies, utilizing AWS infrastructure and on-premises data centers to support multiple technology stacks.

**PURPOSE OF THE POSITION:**
The purpose of this role is to build and maintain data pipelines, develop applications on various platforms, and support data-driven decision-making across the client's Life Science business. You will work closely with cross-functional teams, including business users, data scientists, and data analysts, while ensuring the best balance between technical feasibility and business requirements.

**RESPONSIBILITIES:**
- Develop data pipelines by ingesting various structured and unstructured data sources into Palantir Foundry.
- Participate in end-to-end project lifecycles, from requirements analysis to deployment and operations.
- Act as a business analyst for developing requirements related to Foundry pipelines.
- Review code developed by other data engineers, ensuring adherence to platform standards and functional specifications.
- Document technical work professionally and create high-quality technical documentation.
- Balance technical feasibility with strict business requirements.
- Deploy applications on Foundry platform infrastructure with clearly defined checks.
- Implement changes and bug fixes following the client's change management framework.
- Work in DevOps project setups following Agile principles (e.g., Scrum).
- Act as third-level support for critical applications, resolving complex incidents and debugging problems across the full stack.
- Work closely with business users, data scientists, and analysts to design physical data models.
- Provide support in designing ETL/ELT processes with databases and Hadoop platforms.

**EDUCATION:** Bachelor's degree or higher in Computer Science, Engineering, Mathematics, Physical Sciences, or related fields.

**EXPERIENCE:** 5+ years of experience in system engineering or software development, including 3+ years of engineering experience focused on ETL work involving databases and Hadoop platforms.

**TECHNICAL SKILLS:**
- Hadoop General: Deep knowledge of distributed file system concepts, map-reduce principles, and distributed computing. Familiarity with Spark and its differences from MapReduce.
- Data Management: Proficient in technical data management tasks such as reading, transforming, and storing data, including experience with XML/JSON and REST APIs.
- Spark: Experience launching Spark jobs in both client and cluster modes, with an understanding of property settings that impact performance.
- Application Development: Familiarity with HTML, CSS, JavaScript, and basic visual design competencies.
- SCC/Git: Experienced in using source code control systems such as Git.
- ETL/ELT: Experience developing ETL/ELT processes, including loading data from enterprise-level RDBMS systems (e.g., Oracle, DB2, MySQL); see the PySpark sketch after this posting.
- Authorization: Basic understanding of user authorization, preferably with Apache Ranger.
- Programming: Proficient in Python, with expertise in at least one other high-level language (e.g., Java, C, Scala). Must have experience using REST APIs.
- SQL: Expertise in SQL for manipulating database data, including views, functions, stored procedures, and exception handling.
- AWS: General knowledge of the AWS stack (EC2, S3, EBS, etc.).
- IT Process Compliance: Experience with SDLC processes, change control, and ITIL (incident, problem, and change management).

**REQUIRED SKILLS:**
- Strong problem-solving skills with an analytical mindset.
- Excellent communication skills to collaborate with both technical and non-technical teams.
- Experience working in Agile/DevOps teams, utilizing Scrum principles.
- Ability to thrive in a fast-paced, dynamic environment while managing multiple tasks.
- Strong organizational skills with attention to detail.

At YASH, you are empowered to create a career that takes you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided by technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded in four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; and stable employment with a great atmosphere and an ethical corporate culture.
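As a minimal sketch of the Spark-based ETL work referenced above, the snippet below reads semi-structured JSON, applies a simple transformation, and writes partitioned Parquet; the paths, column names, and configuration values are illustrative assumptions rather than anything specified in the posting.

```python
# Minimal PySpark ETL sketch: read raw JSON, transform, write partitioned Parquet.
# Input/output paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl-example")
    # Example performance-related setting; tune for the actual cluster.
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)

# Ingest semi-structured source data (e.g., landed from a REST API or an RDBMS export).
orders = spark.read.json("s3a://example-bucket/raw/orders/")

# Basic cleansing and derivation.
cleaned = (
    orders
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
)

# Write analytics-ready output, partitioned for downstream query pruning.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/orders/"
)

spark.stop()
```

The same job could be submitted in client or cluster mode via spark-submit; only the deployment settings change, not the pipeline code.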

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

karnataka

On-site

As a Data Architect at Cigna International Markets, your primary responsibility is to define commercially aware and technically astute solutions that align with the architectural direction while considering project delivery constraints. You will be an integral part of the Architecture function, collaborating with senior stakeholders to establish strategic direction and ensure that business solutions reflect this intent. The role involves leading and defining effective business solutions within complex project environments and requires the ability to cultivate strong relationships across Business, IT, and 3rd Party stakeholders.

Your main duties and responsibilities will include performing key enterprise-wide Data Architecture tasks within International Markets, particularly focusing on on-premise and cloud solution deployments. You will engage proactively with various stakeholders to ensure that business investments result in cost-effective and suitable data-driven solutions. Additionally, you will assist sponsors in creating compelling business cases for change and work with Solution Architects to define data solution designs that meet business and operational expectations.

As a Data Architect, you will own and manage data models and design artifacts, offering guidance on best practices and standards for customer-centric data delivery and management. You will advocate for data-driven design within an agile delivery framework and actively participate in the full project lifecycle, from shaping estimates to governing solutions during development. Furthermore, you will be responsible for identifying and managing risks, issues, and assumptions throughout the project lifecycle and will play a lead role in selecting 3rd Party solutions.

Your skills and experience should include a minimum of 10 years in IT, with 5 years in a Data Architecture or Data Design role. You should have experience leading data design projects and delivering significant assets such as a Data Warehouse, Data Lake, or Customer 360 Data Platform. Proficiency across data capabilities such as data modeling, database design, data migration, and data integration (ETL/ELT and data streaming) is essential, along with familiarity with toolsets and platforms such as AWS, SQL Server, Qlik, and Collibra. A successful track record of working in globally dispersed teams, technical acumen across different domains, and a collaborative mindset are desirable attributes. Your commercial awareness, financial planning skills, and ability to work with diverse stakeholders to achieve mutually beneficial solutions will be crucial in this role.

Join Cigna Healthcare, a division of The Cigna Group, and contribute to our mission of advocating for better health and improving lives.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
