
5844 Data Warehousing Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Senior Data Engineer at our Bangalore office, you will play a crucial role in developing data pipeline solutions to meet business data needs. Your responsibilities will involve designing, implementing, and maintaining structured and semi-structured data models, using Python and SQL for data collection, enrichment, and cleansing. You will also create data APIs in Python Flask containers, leverage AI for analytics, and build data visualizations and dashboards in Tableau. Your expertise in infrastructure as code (Terraform) and automated deployment processes will be vital for optimizing solutions for cost and performance. You will collaborate with business analysts to gather stakeholder requirements and translate them into detailed technical specifications, stay current with technical advancements, particularly in GenAI, and recommend changes as the Data Engineering and AI landscape evolves. A willingness to embrace change, share knowledge with team members, and keep learning is essential for success in this role.

To qualify, you should have at least 5 years of experience in data engineering, with a focus on Python programming, data pipeline development, and API design. Proficiency in SQL, hands-on experience with Docker, and familiarity with a range of relational and NoSQL databases are required, along with strong knowledge of data warehousing concepts, ETL processes, and data modeling techniques, excellent problem-solving skills, and attention to detail. Experience with cloud-based data storage and processing platforms such as AWS, GCP, or Azure is preferred.

Bonus skills include GenAI prompt engineering, proficiency in machine learning frameworks such as TensorFlow or PyTorch, knowledge of big data technologies, and experience with data visualization tools like Tableau, Power BI, or Looker. Familiarity with Pandas, spaCy and other NLP libraries, agile development methodologies, and cost and performance optimization of data pipelines is also desirable.

Effective communication and collaboration skills in English are essential for interacting with technical and non-technical stakeholders; you should be able to translate complex ideas into simple examples that ensure clear understanding among team members. A bachelor's degree in computer science, IT, engineering, or a related field is required, along with relevant certifications in BI, AI, data engineering, or data visualization tools.

The role is based at The Leela Office on Airport Road, Kodihalli, Bangalore, with a hybrid schedule: in the office on Tuesdays, Wednesdays, and Thursdays, and from home on Mondays and Fridays. If you are passionate about turning complex data into valuable insights and have experience mentoring junior team members and collaborating with peers, we encourage you to apply.
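
The posting's mention of data APIs in Python Flask containers suggests a pattern like the following minimal sketch. The `orders` table, SQLite backing store, and port are illustrative assumptions, not details from the listing; a production service would target the team's actual databases and Docker packaging.

```python
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

def get_connection():
    # Placeholder data store; swap for the production database driver.
    conn = sqlite3.connect("warehouse.db")
    conn.row_factory = sqlite3.Row  # rows become dict-like
    return conn

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    conn = get_connection()
    row = conn.execute(
        "SELECT id, customer, amount FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    conn.close()
    if row is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(dict(row))

if __name__ == "__main__":
    # In a container this would typically run behind gunicorn.
    app.run(host="0.0.0.0", port=8080)
```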

Posted 1 day ago

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As an Azure Data Engineer with strong Azure Synapse experience, you will be responsible for designing, developing, and maintaining data pipelines using Azure Data Factory (ADF) and Synapse. Your role will involve working with SQL databases to optimize queries and ensure efficient data processing. Additionally, you will develop and manage data warehousing solutions to support analytics and reporting, and provide production support for data pipelines and reports. You will collaborate with stakeholders to understand business requirements and translate them into scalable data solutions, ensure data quality, integrity, and governance across all pipelines, and stay current with industry best practices and emerging technologies in data engineering.

To excel in this role, you should have 3-5 years of experience in Data Engineering with a focus on Azure technologies. Hands-on experience with Azure Data Factory (ADF), Synapse, and data warehousing is essential, as is strong expertise in SQL development, query optimization, and database performance tuning. You should have experience providing production support for data pipelines and reports, strong problem-solving skills, and the ability to work independently.

Preferred qualifications include experience with Power BI, Power Query, and report development, knowledge of data security best practices, exposure to Jet Analytics, and familiarity with CI/CD for data pipelines.

Joining us will offer you the opportunity to work on cutting-edge Azure Data Engineering projects in a collaborative environment with a global team, with potential for long-term engagement and career growth, along with competitive compensation based on experience.
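
Incremental (watermark-based) loading is a common pattern in ADF/Synapse pipelines of the kind described here. The sketch below expresses the idea in Python with pyodbc; the connection strings, `etl.watermark` control table, and `sales` columns are illustrative assumptions rather than details from the posting.

```python
import pyodbc

SOURCE = "DRIVER={ODBC Driver 18 for SQL Server};SERVER=src;DATABASE=sales;UID=u;PWD=p"
TARGET = "DRIVER={ODBC Driver 18 for SQL Server};SERVER=dw;DATABASE=dwh;UID=u;PWD=p"

def incremental_load():
    src, tgt = pyodbc.connect(SOURCE), pyodbc.connect(TARGET)
    # High-water mark recorded by the previous pipeline run.
    watermark = tgt.execute(
        "SELECT last_load FROM etl.watermark WHERE table_name = 'stg_sales'"
    ).fetchone()[0]
    # Pull only rows modified since the last successful load.
    rows = src.execute(
        "SELECT id, amount, modified_at FROM dbo.sales WHERE modified_at > ?",
        watermark,
    ).fetchall()
    if rows:
        tgt.cursor().executemany(
            "INSERT INTO stg.sales (id, amount, modified_at) VALUES (?, ?, ?)",
            [tuple(r) for r in rows],
        )
        # Advance the watermark so the next run starts where this one ended.
        tgt.execute(
            "UPDATE etl.watermark SET last_load = ? WHERE table_name = 'stg_sales'",
            max(r.modified_at for r in rows),
        )
        tgt.commit()

if __name__ == "__main__":
    incremental_load()
```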

Posted 1 day ago

8.0 - 12.0 years

0 Lacs

Kochi, Kerala

On-site

The ideal candidate for the Data Architect role has at least 8 years of experience in modern data architecture, RDBMS, ETL, NoSQL, data warehousing, data governance, data modeling, and performance optimization, along with proficiency in Azure, AWS, or GCP. Primary skills include defining architecture and end-to-end development of database, ETL, and data governance processes. The candidate must bring technical leadership and provide mentorship to junior team members, and should have hands-on experience in three to four end-to-end projects involving modern data architecture and data governance.

Responsibilities include defining the architecture for data engineering projects and data governance systems; designing, developing, and supporting data integration applications on Azure, AWS, or GCP cloud platforms; and implementing performance optimization techniques. Proficiency in advanced SQL and experience in modeling and designing transactional and DWH databases are required, and adherence to ISMS policies and procedures is mandatory. Good-to-have skills include Python, PySpark, and Power BI.

The candidate is expected to onboard by 15/01/2025 and should hold a Bachelor's degree. The role entails performing all duties in accordance with the company's policies and procedures.

Posted 1 day ago

6.0 - 10.0 years

0 Lacs

Delhi

On-site

The client, a leading MNC, specializes in technology consulting and digital solutions for global enterprises. With a workforce of over 145,000 professionals across 90+ countries, they serve 1,100+ clients in various industries, offering consulting, IT solutions, enterprise applications, business processes, engineering, network services, customer experience, AI & analytics, and cloud infrastructure services. Notably, they have been recognized for their commitment to sustainability with the Terra Carta Seal, reflecting their dedication to building a climate- and nature-positive future.

As a Data Engineer with a minimum of 6 years of experience, you will be responsible for building and managing data pipelines. The ideal candidate has expertise in Databricks, AWS/Azure, and data storage technologies such as databases and distributed file systems. Familiarity with the Spark framework is essential, and prior experience in the retail sector is advantageous.

Key Responsibilities:
- Design, develop, and maintain scalable ETL pipelines for processing large data volumes from diverse sources.
- Implement and oversee data integration solutions using tools like Databricks, Snowflake, and other relevant technologies.
- Develop and optimize data models and schemas to support analytical and reporting requirements.
- Write efficient and maintainable Python code for data processing and transformations.
- Use Apache Spark for distributed data processing and large-scale analytics.
- Translate business needs into technical solutions.
- Ensure data quality and integrity through rigorous unit testing.
- Collaborate with cross-functional teams to integrate data pipelines with other systems.

Technical Requirements:
- Proficiency in Databricks for data integration and processing.
- Experience with ETL tools and processes.
- Strong Python programming skills with Apache Spark, with an emphasis on data processing and automation.
- Solid SQL skills and familiarity with relational databases.
- Understanding of data warehousing concepts and best practices.
- Exposure to cloud platforms such as AWS and Azure.
- Hands-on troubleshooting ability and problem-solving skills for complex data issues.
- Practical experience with Snowflake.
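
Spark-based ETL is central to the responsibilities above. The sketch below is a minimal PySpark example of that extract-transform-load pattern; the S3 paths, event schema, and aggregation are illustrative assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("retail_etl").getOrCreate()

# Extract: raw event files from an assumed landing zone.
raw = spark.read.json("s3://raw-bucket/events/")

# Transform: keep purchases, derive a date column, aggregate daily sales.
cleaned = (
    raw.filter(F.col("event_type") == "purchase")
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "store_id")
       .agg(F.sum("amount").alias("daily_sales"))
)

# Load: write a partitioned curated table for downstream analytics.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://curated-bucket/daily_sales/"
)
```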

Posted 1 day ago

2.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Principal Data Engineer at Brillio, you will play a key role in leveraging your expertise in data modeling, particularly with tools like ER Studio and ERwin. Brillio, known for its digital technology services and partnerships with Fortune 1000 companies, is committed to transforming disruptions into competitive advantages through innovative digital solutions.

With over 10 years of IT experience and at least 2 years of hands-on experience in Snowflake, you will be responsible for building and maintaining data models that support the organization's Data Lake/ODS, ETL processes, and data warehousing needs. Your ability to collaborate closely with clients to deliver physical and logical model solutions will be critical to the success of various projects.

In this role, you will apply advanced data modeling concepts, with a focus on modeling in large-volume environments. Your experience with tools like ER Studio and your overall understanding of database technologies, data warehouses, and analytics will be essential in designing and implementing effective data models. Strong skills in entity-relationship modeling, knowledge of database design and administration, and proficiency in SQL query development will enable you to contribute to the design and optimization of data structures, including star schema design.

Your leadership abilities and excellent communication skills will be instrumental in leading teams and ensuring the successful implementation of data modeling solutions. While experience with AWS ecosystems is a plus, your dedication to staying at the forefront of technological advancements and your passion for delivering exceptional client experiences will make you a valuable addition to Brillio's team of "Brillians." Join us in our mission to create innovative digital solutions and make a difference in the world of technology.
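
Star schema design, named above, organizes a warehouse around a central fact table keyed to descriptive dimension tables. The sketch below creates a minimal example; it runs against SQLite purely for illustration, and the table and column names are assumptions, not Brillio's models.

```python
import sqlite3

ddl = """
CREATE TABLE dim_date (
    date_key     INTEGER PRIMARY KEY,   -- surrogate key, e.g. 20240131
    full_date    TEXT NOT NULL,
    month        INTEGER,
    year         INTEGER
);
CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT NOT NULL,
    category     TEXT
);
-- Fact table: one row per product per day, foreign-keyed to the dimensions.
CREATE TABLE fact_sales (
    date_key     INTEGER REFERENCES dim_date(date_key),
    product_key  INTEGER REFERENCES dim_product(product_key),
    units_sold   INTEGER,
    revenue      REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
print("star schema created")
```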

Posted 1 day ago

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Consultant - Data Engineer at AstraZeneca, you will have the opportunity to contribute to the discovery, development, and commercialization of life-changing medicines by enhancing data platforms built on AWS services. Located at Chennai GITC, you will collaborate with experienced engineers to design and implement efficient data products, supporting data platform initiatives with a focus on impacting patients and saving lives.

Your key accountabilities as a Data Engineer will include:

Technical Expertise:
- Designing, developing, and implementing scalable processes to extract, transform, and load data from various sources into data warehouses.
- Demonstrating expert understanding of AstraZeneca's implementation of data products, managing SQL queries and procedures for optimal performance.
- Providing support on production issues and enhancements through JIRA.

Quality Engineering Standards:
- Monitoring and optimizing data pipelines, troubleshooting issues, and maintaining quality standards in design, code, and data models.
- Offering detailed analysis and documentation of processes and flows as needed.

Collaboration:
- Working closely with data engineers to understand data sources, transformations, and dependencies thoroughly.
- Collaborating with cross-functional teams to ensure seamless data integration and reliability.

Innovation and Process Improvement:
- Driving the adoption of new technologies and tools to enhance data engineering processes and efficiency.
- Recommending and implementing enhancements to improve the reliability, efficiency, and quality of data processing pipelines.

To be successful in this role, you should have:
- A Bachelor's degree in Computer Science, Information Technology, or a related field.
- Strong experience with SQL, warehousing, and building ETL pipelines.
- Proficiency with column-oriented databases such as Redshift, Cassandra, and BigQuery.
- Deep SQL knowledge for data extraction, transformation, and reporting.
- Excellent communication skills for effective collaboration with technical and non-technical stakeholders.
- Strong analytical skills to troubleshoot and deliver solutions in complex data environments.
- Experience with Agile development techniques and methodologies.

Desirable skills and experience include knowledge of Databricks or Snowflake, proficiency in scripting and programming languages like Python, experience with reporting tools such as Power BI, and prior experience in pharmaceutical or healthcare industry IT environments.

Join AstraZeneca's dynamic team to drive cross-company change and disrupt the industry while making a direct impact on patients through innovative data solutions and technologies. Apply now to be part of our ambitious journey towards becoming a digital and data-led enterprise.

Posted 1 day ago

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Quality Engineer, your primary responsibility will be to analyze business and technical requirements to design, develop, and execute comprehensive test plans for ETL pipelines and data transformations. You will perform data validation, reconciliation, and integrity checks across various data sources and target systems, and build and automate data quality checks using SQL and/or Python scripting. It will be your duty to identify, document, and track data quality issues, anomalies, and defects.

Collaboration is key in this role: you will work closely with data engineers, developers, QA, and business stakeholders to understand data requirements and ensure data quality standards are met. You will define data quality KPIs and implement continuous monitoring frameworks, and participate in data model reviews, providing input on data quality considerations. When data discrepancies arise, you will perform root cause analysis and work with teams to drive resolution, while ensuring alignment with data governance policies, standards, and best practices.

To qualify for this position, you should hold a Bachelor's degree in Computer Science, Information Technology, or a related field, and have 4 to 7 years of experience as a Data Quality Engineer, ETL Tester, or in a similar role. A strong understanding of ETL concepts, data warehousing principles, and relational database design is essential, as is proficiency in SQL for complex querying, data profiling, and validation tasks. Familiarity with data quality tools, testing methodologies, and modern cloud data ecosystems (AWS, Snowflake, Apache Spark, Redshift) is advantageous.

Advanced knowledge of SQL, experience with data pipeline tools like Airflow, dbt, or Informatica, and experience integrating data validation processes into CI/CD pipelines using tools like GitHub Actions or Jenkins are also desired. An understanding of big data platforms, data lakes, non-relational databases, data lineage, and master data management (MDM) concepts, along with Agile/Scrum experience, will help you excel in this role. Excellent analytical and problem-solving skills and strong attention to detail will be valuable assets.
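
Automated data quality checks of the kind described above often reduce to simple assertions over completeness, uniqueness, and validity. The pandas sketch below illustrates the idea; the column names and rules are assumptions for demonstration, not the employer's actual standards.

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    failures = []
    # Completeness: key columns must not contain nulls.
    for col in ("order_id", "customer_id"):
        if df[col].isnull().any():
            failures.append(f"nulls found in {col}")
    # Uniqueness: the primary key must not repeat.
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    # Validity: amounts must be non-negative.
    if (df["amount"] < 0).any():
        failures.append("negative amounts")
    return failures

df = pd.DataFrame(
    {"order_id": [1, 2, 2], "customer_id": [10, 11, None], "amount": [5.0, -1.0, 3.0]}
)
print(run_quality_checks(df))
# ['nulls found in customer_id', 'duplicate order_id values', 'negative amounts']
```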

Posted 1 day ago

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

Join us as an ETL Developer at Barclays, where you will be responsible for supporting the successful delivery of Location Strategy projects to plan, budget, agreed quality, and governance standards. Spearheading the evolution of our digital landscape, you will drive innovation and excellence, using cutting-edge technology to revolutionize our digital offerings and ensure unparalleled customer experiences.

To be successful in this role, you should have experience with:
- Good knowledge of Python
- Extensive hands-on PySpark
- Understanding of data warehousing concepts
- Strong SQL knowledge
- Proficiency in big data technologies (HDFS)
- Exposure to an AWS working environment

Additionally, highly valued skills may include:
- Working knowledge of AWS
- Familiarity with big data

As an ETL Developer at Barclays, you may be assessed on key critical skills relevant for success in this role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, in addition to job-specific technical skills. This role is based in Pune.

Purpose of the role: To build and maintain systems that collect, store, process, and analyze data, including data pipelines, data warehouses, and data lakes, ensuring the accuracy, accessibility, and security of all data.

Accountabilities:
- Building and maintaining data architecture pipelines for the transfer and processing of durable, complete, and consistent data
- Designing and implementing data warehouses and data lakes to manage data volumes, velocity, and security measures
- Developing processing and analysis algorithms suitable for the data's complexity and volumes
- Collaborating with data scientists to build and deploy machine learning models

Analyst Expectations:
- Impacting the work of related teams within the area
- Partnering with other functions and business areas
- Taking responsibility for the end results of the team's operational processing and activities
- Escalating policy/procedure breaches appropriately
- Embedding new policies/procedures for risk mitigation
- Advising and influencing decision-making within own area of expertise
- Managing risk and strengthening controls in work areas
- Delivering work in line with relevant rules, regulations, and codes of conduct
- Building an understanding of how own area integrates with the function and the organization's products, services, and processes
- Demonstrating how areas contribute to achieving organization objectives
- Resolving problems and selecting solutions based on technical experience
- Guiding and persuading team members and communicating complex/sensitive information
- Acting as a contact point for stakeholders outside the function and building a network of contacts

All colleagues are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, as well as the Barclays Mindset to Empower, Challenge, and Drive.

Posted 1 day ago

6.0 - 10.0 years

0 - 0 Lacs

Hyderabad, Telangana

On-site

You will be joining QTek Digital, a leading data solutions provider known for its expertise in custom data management, data warehouse, and data science solutions. Our team of dedicated data professionals, including data scientists, data analysts, and data engineers, collaborates to address present-day challenges and pave the way for future innovations. At QTek Digital, we value our employees and focus on fostering engagement, empowerment, and continuous growth opportunities.

As a BI ETL Engineer at QTek Digital, you will take on a full-time remote position. Your primary responsibilities will revolve around data modeling, applying analytical skills, implementing data warehouse solutions, and managing Extract, Transform, Load (ETL) processes. This role demands strong problem-solving capabilities and the capacity to work autonomously.

To excel in this role, you should ideally possess:
- 6-9 years of hands-on experience in ETL and ELT pipeline development using tools like Pentaho, SSIS, Fivetran, Airbyte, or similar platforms.
- 6-8 years of practical experience in SQL and other data manipulation languages.
- Proficiency in data modeling, dashboard creation, and analytics.
- Sound knowledge of data warehousing principles, particularly Kimball design.
- Bonus points for familiarity with Pentaho and Airbyte administration.
- Demonstrated expertise in data modeling, dashboard design, analytics, data warehousing, and ETL procedures.
- Strong troubleshooting and problem-solving skills.
- Effective communication and collaboration abilities.
- The capability to operate both independently and as part of a team.
- A Bachelor's degree in Computer Science, Information Systems, or a related field.

This position is based in our Hyderabad office, offering an attractive compensation package ranging from INR 5-19 lakhs, depending on factors such as your skills and prior experience. Join us at QTek Digital and be part of a dynamic team dedicated to shaping the future of data solutions.

Posted 1 day ago

3.0 - 7.0 years

0 Lacs

Tiruchirappalli, Tamil Nadu

On-site

INFOC is currently looking for a skilled Power BI Data Analyst to join the Data Analytics team. The ideal candidate has a solid foundation in data analysis and visualization, coupled with expert-level proficiency in Power BI. In this role, you will be responsible for converting data into actionable insights that drive strategic decisions and enhance business outcomes. Collaborating closely with stakeholders throughout the organization, you will understand their data requirements and produce engaging visualizations and dashboards that tell the story hidden within the data.

Your main responsibilities will include developing and maintaining Power BI dashboards and reports that offer insightful, actionable analytics across diverse business units. Working alongside business stakeholders, you will identify their data analysis needs and provide solutions that meet those requirements. You will also own ETL processes, ensuring the accuracy and reliability of data imported into Power BI from various sources. By applying data modeling, data cleansing, and enrichment techniques, you will enhance the quality and effectiveness of data analysis. Additionally, you will conduct ad-hoc analyses and present findings to non-technical stakeholders in a clear and understandable manner.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Data Science, Information Technology, or a related field, and have a proven track record as a Data Analyst, Business Intelligence Analyst, or in a similar role with a strong emphasis on Power BI. Proficiency in Power BI, encompassing data modeling, DAX, and custom visuals, is essential, as are a sound understanding of SQL and experience with database technologies. Familiarity with data preparation, data gateways, and data warehousing concepts is advantageous. Strong analytical and problem-solving skills are crucial, along with excellent communication and interpersonal abilities: you should be capable of translating complex data into actionable insights for people at all levels of the organization. You are expected to stay abreast of the latest trends and advancements in data analytics and Power BI capabilities to continually enhance data analysis processes and tools.

Posted 1 day ago

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we are a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth - bringing real positive changes in an increasingly virtual world - and it drives us beyond generational gaps and disruptions of the future.

We are looking to hire ETL (Extract, Transform, Load) professionals with the following requirements:

**Experience:** 8-10 years

**Job Description:**
- 8 to 10 years of experience in designing and developing reliable solutions.
- Ability to work with business partners and provide long-lasting solutions.
- Minimum 5 years of experience in Snowflake.
- Strong knowledge of any ETL tool, data modeling, and data warehousing.
- Minimum 2 years of work experience in Data Vault modeling.
- Strong knowledge of SQL, PL/SQL, and RDBMS.
- Domain knowledge in manufacturing, supply chain, sales, or finance areas.
- SnapLogic knowledge or project experience is good to have.
- Cloud platform knowledge (AWS or Azure) is good to have.
- Knowledge of Python/PySpark is good to have.
- Experience in data migration/modernization projects.
- Zeal to pick up new technologies and do POCs.
- Ability to lead a team to deliver the expected business results.
- Good analytical and strong troubleshooting skills.
- Excellent communication and strong interpersonal skills.

At YASH, you are empowered to create a career that will take you where you want to go, while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided by technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded in four principles:
- Flexible work arrangements, free spirit, and emotional positivity.
- Agile self-determination, trust, transparency, and open collaboration.
- All support needed for the realization of business goals.
- Stable employment with a great atmosphere and an ethical corporate culture.
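
Given the Snowflake emphasis above, a typical programmatic touchpoint is the snowflake-connector-python package. The sketch below shows a minimal query round-trip; the account, credentials, and `sales_fact` table are placeholders, not details from the posting.

```python
import snowflake.connector

# Placeholder connection details; real jobs read these from a secrets store.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="********",
    warehouse="ANALYTICS_WH",
    database="DWH",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    # An illustrative aggregate over an assumed fact table.
    cur.execute("SELECT region, SUM(net_sales) FROM sales_fact GROUP BY region")
    for region, total in cur:
        print(region, total)
finally:
    cur.close()
    conn.close()
```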

Posted 1 day ago

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Engineer at Bridgnext, you will work on internal and customer-facing projects. Your primary focus will be on ensuring code quality and providing optimal solutions that meet client requirements while anticipating their future needs based on market understanding. Your experience with Hadoop projects, including data processing and representation using various AWS services, will be valuable in this role.

You should have at least 4 years of experience in data engineering, with a specialization in big data technologies such as Spark and Kafka, and a minimum of 2 years of hands-on experience with Databricks. A strong understanding of data architecture, ETL processes, and data warehousing is necessary, along with proficiency in programming languages like Python or Java. Experience with cloud platforms such as AWS, Azure, and GCP, as well as familiarity with big data tools, will be beneficial.

Excellent communication, interpersonal, and leadership skills are required to collaborate effectively with team members and clients, and you should be able to work in a fast-paced environment, managing multiple priorities efficiently. In addition to technical skills, you should possess solid written, verbal, and presentation communication abilities, and be a strong team player who is also capable of working independently. Composure in a variety of situations, a collaborative nature, high standards of professionalism, and consistently high-quality results are expected, and your self-sufficiency and openness to creative solutions will be key in addressing any challenges that arise in the role.
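
The Spark and Kafka pairing named above is typically wired together with Spark Structured Streaming. The sketch below is a minimal example of that pattern; the broker address, topic, and lake paths are illustrative assumptions, and running it requires the Spark Kafka integration package on the classpath.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

# Subscribe to an assumed Kafka topic of order events.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka delivers key/value as binary; cast the value to string for parsing.
parsed = stream.select(F.col("value").cast("string").alias("payload"))

# Land the stream in the data lake with checkpointing for exactly-once writes.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3://lake/orders/")
    .option("checkpointLocation", "s3://lake/_checkpoints/orders/")
    .start()
)
query.awaitTermination()
```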

Posted 1 day ago

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Business Intelligence Specialist at Adobe, you will work closely with business analysts to understand design specifications and translate requirements into technical models, dashboards, reports, and applications. Your role will involve collaborating with business users to handle their ad-hoc requests and deliver scalable solutions on MSBI platforms. You will be responsible for system integration of data sources, creating technical documents, and ensuring data and code quality through standard methodologies and processes.

To succeed in this role, you should have at least 3 years of experience in SSIS, SSAS, data warehousing, data analysis, and business intelligence. You should possess advanced proficiency in data warehousing tools and technologies, including databases, SSIS, and SSAS, along with an in-depth understanding of data warehousing principles and dimensional modeling techniques. Hands-on experience in ETL processes, database optimization, and query tuning is essential. Familiarity with cloud platforms such as Azure and AWS, as well as Python or PySpark and Databricks, would be beneficial, and experience creating interactive dashboards using Power BI is an added advantage. In addition to technical skills, strong problem-solving and analytical abilities, quick learning capabilities, and excellent communication and presentation skills are important. A Bachelor's degree in Computer Science, Information Technology, or an equivalent technical discipline is required.

At Adobe, we value a free and open marketplace for all employees and provide internal growth opportunities for your career development. We encourage creativity, curiosity, and continuous learning as part of your career journey. To prepare for internal opportunities, update your Resume/CV and Workday profile, explore the Internal Mobility page on Inside Adobe, and check out tips to help you prepare for interviews. The Talent Team will reach out to you within 2 weeks of applying for a role via Workday; if you move forward in the interview process, inform your manager so they can support your career growth.

Join Adobe to work in an exceptional environment with colleagues committed to helping each other grow through ongoing feedback. If you are looking to make an impact and grow your career, Adobe is the place for you. Discover more about employee experiences on the Adobe Life blog and explore the meaningful benefits we offer. For any accommodation needs during the application process, please contact accommodations@adobe.com.

Posted 1 day ago

8.0 - 12.0 years

0 Lacs

Chandigarh

On-site

As a Solution Architect, your primary responsibility will be to design and implement scalable data integration solutions using Oracle Data Integrator (ODI), using Python for advanced data transformation, automation, and orchestration tasks. You will translate business requirements into end-to-end data solutions that prioritize performance, maintainability, and regulatory compliance. Collaboration with stakeholders across data engineering, analytics, compliance, and business teams will be essential to define architecture standards and ensure alignment.

In this role, you will lead technical design sessions, develop architecture documents, and mentor development teams on industry best practices. Ensuring that data governance, privacy, and security standards are integrated into the architecture will be a key focus. You will also drive the migration and modernization of legacy healthcare systems onto contemporary data platforms, whether on-premise or in the cloud, and troubleshoot and optimize complex data pipelines and integration workflows.

To excel in this position, you should have at least 8 years of experience in data architecture, data engineering, or related technical roles. A strong command of Oracle Data Integrator (ODI), particularly for enterprise-scale ETL/ELT workflows, is essential, as is proficiency in Python for scripting, data wrangling, and automation. You must have a solid understanding of data modeling, data warehousing, and healthcare data standards such as HL7, FHIR, ICD, and CPT, along with familiarity with HIPAA compliance and healthcare data privacy and security practices. Experience designing and implementing cloud-based data architectures on platforms such as OCI, AWS, and Azure is advantageous, and strong expertise in SQL and database optimization, with experience in Oracle, PostgreSQL, or similar databases, will also serve you well in this role.

Posted 1 day ago

15.0 - 19.0 years

0 Lacs

Hyderabad, Telangana

On-site

We are looking for a highly skilled and experienced Data Architect to join our team. With at least 15 years of experience in data engineering and analytics, the ideal candidate will have a proven track record of designing and implementing complex data solutions. As a senior principal data architect, you will play a key role in designing, creating, deploying, and managing Blackbaud's data architecture. This position holds significant technical influence within the Data Platform and Data Engineering teams and the Data Intelligence Center of Excellence at Blackbaud. You will act as an evangelist for proper data strategy across various teams within Blackbaud and provide technical guidance, particularly in the area of data, for other projects.

Responsibilities:
- Develop and direct the strategy for all aspects of Blackbaud's Data and Analytics platforms, products, and services.
- Set, communicate, and facilitate technical direction for the AI Center of Excellence and beyond, collaboratively.
- Design and develop innovative products, services, or technological advancements in the Data Intelligence space to drive business expansion.
- Collaborate with product management to create technical solutions that address customer business challenges.
- Take ownership of technical data governance practices to ensure data sovereignty, privacy, security, and regulatory compliance.
- Challenge existing practices and drive innovation in the data space.
- Create a data access strategy to securely democratize data and support research, modeling, machine learning, and artificial intelligence initiatives.
- Contribute to defining the tools and pipeline patterns used by engineers and data engineers for data transformation and analytics support.
- Work within a cross-functional team to translate business requirements into data architecture solutions.
- Ensure that data solutions prioritize performance, scalability, and reliability.
- Mentor junior data architects and team members.
- Stay updated on technology trends such as distributed computing, big data concepts, and architecture.
- Advocate internally for the transformative power of data at Blackbaud.

Required Qualifications:
- 15+ years of experience in data and advanced analytics.
- Minimum of 8 years of experience with data technologies in Azure/AWS.
- Proficiency in SQL and Python.
- Expertise in SQL Server, Azure Data Services, and other Microsoft data technologies.
- Familiarity with Databricks and Microsoft Fabric.
- Strong grasp of data modeling, data warehousing, data lakes, data mesh, and data products.
- Experience with machine learning.
- Excellent communication and leadership abilities.

Preferred Qualifications:
- Experience with .NET/Java and microservice architecture.

Posted 1 day ago

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

Be a part of a dynamic team and excel in an environment that values diversity and creativity. Continue to sharpen your skills and ambition while pushing the industry forward. As a Data Architect at JPMorgan Chase within Employee Platforms, you serve as a seasoned member of a team developing high-quality data architecture solutions for various software applications and platforms. By incorporating leading best practices and collaborating with teams of architects, you are an integral part of carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

In this role, you will be responsible for designing and implementing data models that support our organization's data strategy. You will work closely with Data Product Managers, engineering teams, and data governance teams to ensure the delivery of high-quality data products that meet business needs and adhere to best practices.

Job responsibilities include:
- Executing data architecture solutions and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions and break down problems.
- Collaborating with Data Product Managers to understand business requirements and translate them into data modeling specifications, and conducting interviews and workshops with stakeholders to gather detailed data requirements.
- Creating and maintaining data dictionaries, entity-relationship diagrams, and other documentation to support data models.
- Producing secure, high-quality production code and maintaining algorithms that run synchronously with appropriate systems.
- Evaluating data architecture designs and providing feedback on recommendations.
- Representing the team in architectural governance bodies.
- Leading the data architecture team in evaluating new technologies to modernize the architecture using existing data standards and frameworks.
- Gathering, analyzing, synthesizing, and developing visualizations and reporting from large, diverse data sets in service of continuous improvement of data frameworks, applications, and systems.
- Proactively identifying hidden problems and patterns in data, and using these insights to drive improvements to coding hygiene and system architecture.
- Contributing to data architecture communities of practice and events that explore new and emerging technologies.

Required qualifications, capabilities, and skills:
- Formal training or certification in data architecture and 3+ years of applied experience.
- Hands-on experience with data platforms, cloud services (e.g., AWS, Azure, or Google Cloud), and big data technologies.
- Strong understanding of database management systems, data warehousing, and ETL processes.
- Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams.
- Knowledge of data governance principles and best practices.
- Ability to evaluate current technologies and recommend ways to optimize data architecture.
- Hands-on practical experience in system design, application development, testing, and operational stability.
- Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming and database querying languages.
- Overall knowledge of the Software Development Life Cycle.
- Solid understanding of agile methodologies such as continuous integration and delivery, application resiliency, and security.
- Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile).

Preferred qualifications, capabilities, and skills:
- Experience with cloud-based data platforms (e.g., AWS, Azure, Google Cloud).
- Familiarity with big data technologies (e.g., Hadoop, Spark).
- Certification in data modeling or data architecture.

Posted 1 day ago

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

We are seeking a skilled Data Analyst with exceptional communication abilities and deep proficiency in SQL, Tableau, and contemporary data warehousing technologies. As a Data Analyst, you will be responsible for designing data models, creating insightful dashboards, ensuring data quality, and extracting valuable insights from extensive datasets to support strategic business decisions.

Your primary responsibilities will include writing advanced SQL queries to extract and manipulate data from cloud data warehouses like Snowflake, Redshift, or BigQuery. You will design and implement data models that serve analytical and reporting requirements, and develop dynamic, interactive dashboards and reports using tools such as Tableau, Looker, or Domo. Additionally, you will apply advanced analytics techniques like cohort analysis, time series analysis, scenario analysis, and predictive analytics. Ensuring data accuracy through thorough quality assurance checks, investigating data issues, and collaborating with BI or data engineering teams on root cause analysis will also be part of your role, along with communicating analytical insights effectively to stakeholders.

The ideal candidate has excellent communication skills, at least 5 years of experience in data analytics, BI analytics, or BI engineering roles, and expert-level proficiency in SQL. Proficiency in data visualization tools like Tableau, Looker, or Domo is essential, along with a strong grasp of data modeling principles and best practices. Hands-on experience with cloud data warehouses such as Snowflake, Redshift, BigQuery, SQL Server, or Oracle is required, as is intermediate-level proficiency in spreadsheet tools like Excel, Google Sheets, or Power BI, including functions, pivots, and lookups. A Bachelor's or advanced degree in a relevant field such as Data Science, Computer Science, Statistics, Mathematics, or Information Systems is preferred. The ability to collaborate with cross-functional teams, including BI engineers, to enhance reporting solutions is vital; experience managing large-scale enterprise data environments is advantageous, and familiarity with data governance, data cataloging, and metadata management tools is a plus.

Job Type: Full-time
Benefits:
- Health insurance
- Paid time off
- Provident Fund
Schedule: Monday to Friday
Education: Bachelor's (Required)
Experience:
- Data analytics: 5 years (Required)
- Tableau: 2 years (Required)
Work Location: In person
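
Cohort analysis, named among the techniques above, groups customers by first-purchase period and tracks how many return in later periods. A minimal pandas sketch of the idea, using synthetic data for illustration:

```python
import pandas as pd

orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "order_date": pd.to_datetime(
        ["2024-01-05", "2024-02-10", "2024-01-20", "2024-03-02", "2024-02-15"]
    ),
})

# Assign each order to a calendar month, and each customer to the month
# of their first purchase (their cohort).
orders["order_month"] = orders["order_date"].dt.to_period("M")
orders["cohort"] = orders.groupby("customer_id")["order_month"].transform("min")
orders["months_since"] = (orders["order_month"] - orders["cohort"]).apply(lambda p: p.n)

# Rows: cohorts; columns: months since first purchase; cells: active customers.
cohort_counts = (
    orders.groupby(["cohort", "months_since"])["customer_id"]
    .nunique()
    .unstack(fill_value=0)
)
print(cohort_counts)
```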

Posted 1 day ago

5.0 - 9.0 years

0 Lacs

Kolkata, West Bengal

On-site

As a Data Modeler specializing in hybrid data environments, you will play a crucial role in designing, developing, and optimizing data models that support enterprise-level analytics, insight generation, and operational reporting. You will collaborate with business analysts and stakeholders to understand business processes and translate them into effective data modeling solutions. Your expertise in traditional data stores such as SQL Server and Oracle DB, along with proficiency in Azure/Databricks cloud environments, will be essential for migrating and optimizing existing data models.

Your responsibilities will include designing logical and physical data models that capture the granularity of data required for analytical and reporting purposes. You will establish data modeling standards and best practices to maintain data architecture integrity, and collaborate with data engineers and BI developers to ensure data models align with analytical and operational reporting needs. Conducting data profiling and analysis to understand data sources, relationships, and quality will inform your data modeling process.

Your qualifications should include a Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field, along with a minimum of 5 years of experience in data modeling. Proficiency in SQL, familiarity with data modeling tools, and an understanding of Azure cloud services, Databricks, and big data technologies are essential. The ability to translate complex business requirements into effective data models, strong analytical skills, attention to detail, and excellent communication and collaboration abilities will be crucial in this role.

In summary, as a Data Modeler for hybrid data environments, you will drive the development and maintenance of data models that support analytical and reporting functions, contribute to the establishment of data governance policies and procedures, and continuously refine data models to meet evolving business needs and leverage new data modeling techniques and cloud capabilities.

Posted 1 day ago

6.0 - 10.0 years

0 Lacs

Haryana

On-site

As a Power BI Developer, you will be responsible for developing and maintaining scalable data pipelines using Python and PySpark. You will collaborate with data engineers and data scientists to fulfill data processing needs and optimize existing PySpark applications for performance. Writing clean, efficient, and well-documented code following best practices is a crucial part of your role. Additionally, you will participate in design and code reviews, develop and implement ETL processes, and ensure data integrity and quality throughout the data lifecycle, while staying current with the latest industry trends and technologies in big data and cloud computing.

The ideal candidate has a minimum of 6 years of experience designing and developing advanced Power BI reports and dashboards, including working experience in data modeling, DAX calculations, developing and maintaining data models, creating reports and dashboards, analyzing and visualizing data, ensuring data governance and compliance, and troubleshooting and optimizing Power BI solutions.

Preferred skills include strong proficiency in Power BI Desktop, DAX, Power Query, and data modeling. Experience in analyzing data, creating visualizations, building interactive dashboards, connecting to various data sources, and transforming data is highly valued. Excellent communication and collaboration skills are necessary to work effectively with stakeholders, and familiarity with SQL, data warehousing concepts, and UI/UX development would be beneficial.

Posted 1 day ago

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Pricing and Revenue Growth Consultant, your primary role will be to advise on building a pricing and promotion tool for a Consumer Packaged Goods (CPG) client, covering pricing strategies, trade promotions, and revenue growth initiatives. You will be responsible for developing analytics and machine learning models to analyze price elasticity, promotion effectiveness, and trade promotion optimization. Collaboration with the CPG business, marketing, data scientists, and other teams will be essential to the successful delivery of the project and tool.

Your business domain skills will be crucial in this role, including expertise in Trade Promotion Management (TPM), Trade Promotion Optimization (TPO), promotion depth and frequency forecasting, price pack architecture, competitive price tracking, revenue growth management, and financial modeling. You will also need proficiency in AI and machine learning for pricing, and in dynamic pricing implementation.

Key Responsibilities:
- Apply consulting skills for hypothesis-driven problem solving, go-to-market pricing, and revenue growth execution.
- Deliver advisory presentations and data storytelling.
- Provide project leadership and execution.

Technical requirements include proficiency in programming languages such as Python and R for data manipulation and analysis, expertise in machine learning algorithms and statistical modeling techniques, familiarity with data warehousing, data pipelines, and data visualization tools like Tableau or Power BI, and experience with cloud platforms such as ADF, Databricks, and Azure and their AI services.

Additional responsibilities include working collaboratively with cross-functional teams across sales, marketing, and product development; managing stakeholders and leading teams; thriving in a fast-paced environment focused on delivering timely insights to support business decisions; demonstrating excellent problem-solving skills in addressing complex technical challenges; communicating effectively with cross-functional teams and stakeholders; and managing multiple projects simultaneously while prioritizing tasks based on business impact.

Qualifications:
- A degree in Data Science or Computer Science with a specialization in data science.
- A Master's in Business Administration and Analytics is preferred.

Preferred Skills:
- Experience in technology, big data, and text analytics.
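
Price elasticity analysis, central to the role above, is often estimated with a log-log regression: the slope of log(units) on log(price) approximates own-price elasticity. A minimal sketch with synthetic data for illustration:

```python
import numpy as np

# Synthetic weekly observations: as price falls, unit sales rise.
price = np.array([10.0, 9.5, 9.0, 8.5, 8.0, 7.5])
units = np.array([100, 108, 118, 130, 143, 159])

# OLS fit in log-log space; the slope is the elasticity estimate.
slope, intercept = np.polyfit(np.log(price), np.log(units), 1)
print(f"estimated price elasticity: {slope:.2f}")  # about -1.6: elastic demand
```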

Posted 1 day ago

2.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Principal Data Engineer at Brillio, you will play a crucial role in leveraging your expertise in data modeling, particularly with tools like ER Studio and ERwin, to turn disruptive technologies into a competitive advantage for Fortune 1000 companies through innovative digital adoption. Brillio prides itself on being a rapidly growing digital technology service provider that excels in integrating cutting-edge digital skills with a client-centric approach. As a preferred employer, Brillio attracts top talent by offering opportunities to work on exclusive digital projects and engage with groundbreaking technologies.

To excel in this role, you should have over 10 years of IT experience, with a minimum of 2 years dedicated to Snowflake and hands-on modeling experience. Your proficiency in Data Lake/ODS, ETL concepts, and data warehousing principles will be critical in delivering effective solutions to clients, and collaborating with clients to drive both physical and logical model solutions will be a key aspect of your responsibilities.

Your technical skills should encompass advanced data modeling concepts, experience modeling large volumes of data, and proficiency in tools like ER Studio. A solid grasp of database concepts, including data warehouses, reporting, and analytics, is essential; familiarity with platforms like SQLDBM and expertise in entity-relationship modeling will further strengthen your profile. Moreover, your communication skills should be excellent, enabling you to lead teams effectively and collaborate seamlessly with clients. While exposure to AWS ecosystems is a plus, your ability to design and administer databases, develop SQL queries for analysis, and implement data models for star schema designs will be instrumental in your success as a Principal Data Engineer at Brillio.

Posted 1 day ago

8.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Senior Data Engineering Architect at Iris Software, you will lead enterprise-level data engineering projects on public cloud platforms like AWS, Azure, or GCP. Your responsibilities will include engaging with client managers to understand their business needs, conceptualizing solution options, and finalizing strategies with stakeholders. You will also be involved in team building, delivering Proofs of Concept (PoCs), and enhancing competencies within the organization. Your role will focus on building competencies in Data & Analytics, including data engineering, analytics, data science, AI/ML, and data governance. Staying updated with the latest tools, best practices, and trends in the data and analytics field will be essential to drive innovation and excellence in your work.

To excel in this position, you should hold a Bachelor's or Master's degree in a software discipline and have extensive experience in data architecture and in implementing large-scale Data Lake/Data Warehousing solutions. Your background in data engineering should demonstrate leadership in solutioning, architecture, and successful project delivery. Strong communication skills in English, both written and verbal, are essential for effective collaboration with clients and team members. Proficiency in tools such as AWS Glue, Redshift, Azure Data Lake, Databricks, and Snowflake, along with databases and programming skills in Spark, Spark SQL, PySpark, and Python, are mandatory competencies for this role.

Joining Iris Software offers a range of perks and benefits designed to support your financial, health, and overall well-being. From comprehensive health insurance and competitive salaries to flexible work arrangements and continuous learning opportunities, we are dedicated to providing a supportive and rewarding work environment where your success and happiness are valued. If you are inspired to grow your career in data engineering and thrive in a culture that values talent and personal growth, Iris Software is the place for you. Be part of a dynamic team where you are valued, inspired, and encouraged to be your best professional and personal self.

Posted 1 day ago

4.0 - 8.0 years

0 Lacs

Haryana

On-site

As an ETL Developer, you will play a key role in supporting the design, development, and maintenance of enterprise data integration solutions. Your main responsibilities will include designing, developing, and implementing ETL workflows using SSIS and/or Informatica PowerCenter. You will extract, transform, and load data from various sources such as SQL Server, Oracle, flat files, APIs, Excel, and cloud platforms, and optimize existing ETL processes for improved performance, reliability, and scalability. Unit testing, integration testing, and data validation will be crucial to ensuring data quality and consistency, and maintaining technical documentation for ETL processes, mappings, and workflows is an essential part of the role. You will collaborate with data architects, BI analysts, and business stakeholders to understand data requirements and deliver clean, structured data solutions; monitor daily data loads and resolve ETL failures promptly; ensure data security, integrity, and compliance; and take part in code reviews, peer testing, and production deployment activities.

Your technical skills should include strong hands-on experience in SSIS and/or Informatica PowerCenter development, proficient SQL programming, and familiarity with ETL performance tuning and error handling. Knowledge of data modeling concepts and data warehousing principles, including slowly changing dimensions (SCDs), is essential, and exposure to source control systems, job schedulers, and cloud-based data platforms, along with an understanding of data governance and compliance standards, will be advantageous.

To qualify for this role, you should hold a Bachelor's degree in Computer Science, Information Technology, or a related field, along with 3-5 years of relevant experience in ETL development using SSIS and/or Informatica. Strong problem-solving skills, analytical thinking, excellent communication abilities, and the capacity to work both independently and in a team-oriented environment are required. Certifications such as Microsoft Certified: Azure Data Engineer Associate, Informatica PowerCenter Developer Certification, or other SQL/BI/ETL-related certifications are preferred but optional.

Posted 1 day ago

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Teradata ETL Developer, you will be responsible for designing, developing, and implementing ETL processes using Teradata tools like BTEQ and the TPT utility. Your role will involve optimizing and enhancing existing ETL workflows to improve performance and reliability. Collaboration with cross-functional teams to gather data requirements and translate them into technical specifications will be a key aspect of your responsibilities, and data profiling, cleansing, and validation will also be part of your duties to ensure data quality and integrity. Monitoring ETL processes, troubleshooting issues in the data pipeline, participating in the technical design and architecture of data integration solutions, and documenting ETL processes, data mappings, and operational procedures for future reference and compliance are all critical tasks.

To excel in this role, you should have proven experience as a Teradata ETL Developer with a strong understanding of BTEQ and the TPT utility, a solid grasp of data warehousing concepts, ETL methodologies, and data modeling, and proficiency in SQL, including the ability to write complex queries for data extraction and manipulation. Familiarity with data integration tools and techniques, especially in a Teradata environment, will be beneficial. Strong analytical and problem-solving skills are necessary to diagnose and resolve ETL issues efficiently, and you should work well in a team while also demonstrating self-motivation and attention to detail. Excellent communication skills are a must for engaging with both technical and non-technical stakeholders.
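
BTEQ and TPT are Teradata's own batch tools; as a hedged Python-side illustration of the same kind of extract step, the teradatasql driver can run the query a BTEQ EXPORT script would. The host, credentials, and staging table below are placeholders, not details from the posting.

```python
import teradatasql  # Teradata's official Python DB API driver

# Placeholder connection details; a real job would read these from config.
with teradatasql.connect(host="tdprod", user="etl_user", password="********") as con:
    cur = con.cursor()
    # The sort of daily extract a BTEQ EXPORT script might perform.
    cur.execute(
        "SELECT order_id, amount FROM staging.orders WHERE load_date = CURRENT_DATE"
    )
    for order_id, amount in cur.fetchall():
        print(order_id, amount)
```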

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As a Staff Cloud Support Engineer at Snowflake, you will be a crucial part of the Snowflake Support team, dedicated to providing high-quality resolutions to help customers achieve data-driven business insights and results. You will work with a team of subject matter experts to ensure customer success by listening, learning, and building strong connections with customers.

Your responsibilities will include working on a variety of technical issues related to operating systems, database technologies, big data, data integration, connectors, and networking. Customers will rely on you for technical guidance and expert advice on the effective and optimal use of Snowflake Data Warehouse. You will also be the voice of the customer, providing valuable product feedback and suggestions for improvement to Snowflake's product and engineering teams.

In addition to providing exceptional service to customers, you will play a key role in building knowledge within the team and contributing to strategic initiatives for organizational and process improvements. Depending on business needs, you may work with Snowflake Priority Support customers, understanding their use cases and helping them achieve the highest levels of continuity and performance from their Snowflake implementation.

To excel in this role, you should have a Bachelor's or Master's degree in Computer Science or an equivalent discipline, along with at least 8 years of experience in a Technical Support environment or a similar customer-facing technical role. You should possess excellent writing and communication skills in English, attention to detail, and the ability to work collaboratively across global teams.

As a Staff Cloud Support Engineer, you will drive technical solutions to complex problems, adhere to response and resolution SLAs, and demonstrate strong problem-solving skills. You will utilize the Snowflake environment, connectors, and third-party partners to investigate issues, document solutions, and submit well-documented bugs and feature requests. Additionally, you will proactively identify recommendations for product quality improvement, customer experience enhancement, and team efficiencies.

It is essential for you to have a clear understanding of data warehousing fundamentals and concepts, the ability to debug and troubleshoot complex SQL queries, and strong knowledge of RDBMS, SQL data types, aggregations, and functions. Experience with database migration, ETL, scripting or coding in any programming language, and working knowledge of semi-structured data are also required. Proficiency in interpreting system performance metrics and understanding cloud service providers' ecosystems is beneficial.

If you have experience working with distributed databases, troubleshooting various operating systems, understanding networking fundamentals and cloud computing security concepts, and proficiency in scripting languages such as Python and JavaScript, it would be a plus. Snowflake is looking for individuals who share their values, challenge conventional thinking, and drive innovation while contributing to the company's growth and success.
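
A common first step in this kind of performance triage is pulling the slowest recent queries from QUERY_HISTORY. The sketch below uses the official snowflake-connector-python package; the connection parameters and database name are placeholders, and it assumes the connecting role has privileges on the chosen database so the INFORMATION_SCHEMA table function can resolve.

```python
import snowflake.connector

def slowest_recent_queries(user: str, password: str, account: str,
                           database: str, top_n: int = 10):
    """Return the slowest recent queries, longest elapsed time first."""
    con = snowflake.connector.connect(
        user=user, password=password, account=account, database=database
    )
    try:
        cur = con.cursor()
        cur.execute(
            """
            SELECT query_id,
                   total_elapsed_time / 1000 AS elapsed_seconds,
                   LEFT(query_text, 80)      AS query_preview
            FROM   TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(RESULT_LIMIT => 1000))
            ORDER  BY total_elapsed_time DESC
            LIMIT  %(n)s
            """,
            {"n": top_n},  # bound with the connector's default pyformat style
        )
        return cur.fetchall()
    finally:
        con.close()
```

From there, an EXPLAIN plan and the query profile for the offending query_id usually narrow the issue down to spilling, pruning, or join order.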

Posted 1 day ago

Apply

Exploring Data Warehousing Jobs in India

The data warehousing job market in India is thriving, with numerous opportunities for skilled professionals. Data warehousing is crucial for many organizations because it involves the storage, management, and analysis of large volumes of data to drive business decisions and insights.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Chennai

Average Salary Range

The salary range for data warehousing professionals in India varies based on experience and location. Entry-level professionals can expect salaries starting from INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 10-15 lakhs per annum.

Career Path

In the field of data warehousing, a typical career path may include roles such as:

  • Junior Data Engineer
  • Data Analyst
  • Data Warehouse Developer
  • Senior Data Warehouse Architect
  • Data Warehouse Manager
  • Chief Data Officer

Related Skills

In addition to expertise in data warehousing, professionals in this field are often expected to have knowledge of:

  • SQL
  • ETL Tools
  • Data Modeling
  • Business Intelligence Tools
  • Hadoop
  • Python or R programming

Interview Questions

  • What is data warehousing? (basic)
  • Explain the difference between OLTP and OLAP. (basic)
  • What is a star schema in data warehousing? (medium)
  • How do you handle data quality issues in a data warehouse? (medium)
  • What is ETL? Explain its process. (medium)
  • What are the advantages of using a data warehouse? (basic)
  • Explain the concept of data mining in data warehousing. (medium)
  • How do you optimize a data warehouse for performance? (advanced)
  • What is a slowly changing dimension and how do you handle it? (medium)
  • Describe the process of data extraction in a data warehouse. (medium)
  • What is a data mart? (basic)
  • How do you ensure data security in a data warehouse environment? (medium)
  • Explain the concept of data aggregation in data warehousing. (medium)
  • What are the different types of data warehouse architecture? (medium)
  • How do you handle data integration in a data warehouse? (medium)
  • What is a fact table and a dimension table in a data warehouse? (basic; illustrated in the sketch after this list)
  • How do you handle data partitioning in a data warehouse? (advanced)
  • Explain the concept of data cleansing in data warehousing. (medium)
  • What is the role of metadata in a data warehouse? (basic)
  • How do you design a data warehouse schema? (advanced)
  • What are the key components of a data warehouse system? (basic)
  • How do you handle data loads in a data warehouse? (medium)
  • Explain the process of data transformation in a data warehouse. (medium)
  • What are the challenges faced in data warehousing projects? (advanced)
  • How do you ensure data consistency in a data warehouse? (advanced)
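
For the fact table and dimension table question above, a toy example often makes the split click: the dimension holds descriptive attributes, the fact holds numeric measures plus foreign keys, and a star-schema query joins the two and aggregates. The sketch below uses pandas with invented data purely for illustration.

```python
import pandas as pd

# Dimension table: one row per product, descriptive attributes only.
dim_product = pd.DataFrame({
    "product_key": [1, 2],
    "product_name": ["Widget", "Gadget"],
    "category": ["Hardware", "Hardware"],
})

# Fact table: numeric measures plus foreign keys into the dimensions.
fact_sales = pd.DataFrame({
    "product_key": [1, 1, 2],
    "date_key": [20240101, 20240102, 20240101],
    "units_sold": [10, 4, 7],
    "revenue": [100.0, 40.0, 84.0],
})

# A typical star-schema query: join fact to dimension, aggregate a measure.
report = (
    fact_sales.merge(dim_product, on="product_key")
              .groupby("product_name", as_index=False)["revenue"]
              .sum()
)
print(report)
```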

Closing Remark

As you explore data warehousing jobs in India, remember to continuously update your skills and knowledge in this field. By preparing thoroughly and applying confidently, you can land a rewarding career in data warehousing. Good luck with your job search!
