
6 OLAP Systems Jobs

Set up a Job Alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

Requirements: You should have at least 5 years of IT experience with a good understanding of analytics tools for effective data analysis. You must be capable of leading teams and have prior experience working in production deployment and production support teams. Proficiency in Big Data tools such as Hadoop, Spark, Apache Beam, and Kafka, and in object-oriented/functional scripting languages such as Python, Java, C++, or Scala, is required. Experience with data warehousing tools such as BQ, Redshift, Synapse, or Snowflake, as well as ETL and data warehousing concepts, is essential. You should have a solid understanding of relational and non-relational databases such as MySQL, MS SQL Server, Postgres, MongoDB, and Cassandra, familiarity with cloud platforms such as AWS, GCP, and Azure, and experience with workflow management tools like Apache Airflow.

Responsibilities: You will develop high-performance, scalable solutions using GCP for extracting, transforming, and loading big data. You will design and build production-grade data solutions from ingestion to consumption using Java or Python, and design and optimize data models on GCP data stores such as BigQuery. Handling the deployment process, optimizing data pipelines for performance and cost in large-scale data lakes, and writing complex queries across extensive datasets are also key tasks. You will collaborate with Data Engineers to select the appropriate tools for delivering product features through POCs, engage with business stakeholders, BAs, and other Data/ML engineers, and explore new use cases for existing data.

Preferred qualifications: Familiarity with design best practices for OLTP and OLAP systems, participation in database and pipeline design, exposure to load testing methodologies, debugging pipelines, and delta load handling, as well as involvement in heterogeneous migration projects.
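To make the GCP ETL work described above concrete, here is a minimal illustrative sketch (not taken from the posting) of an Apache Beam pipeline that reads JSON events from Cloud Storage and loads them into BigQuery. The project, bucket, table, and field names are placeholder assumptions.

```python
# Hedged sketch: a Beam pipeline that reads newline-delimited JSON from Cloud Storage,
# applies a simple transform, and appends the result to a BigQuery table.
# All resource names below are placeholders, not from the job posting.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(line: str) -> dict:
    """Parse one JSON record and keep only the fields the warehouse needs."""
    record = json.loads(line)
    return {"user_id": record["user_id"], "amount": float(record["amount"])}


def run() -> None:
    # On GCP this would typically run on Dataflow; DirectRunner also works for local tests.
    options = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://my-bucket/events/*.json")
            | "Parse" >> beam.Map(parse_event)
            | "Write" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",
                schema="user_id:STRING,amount:FLOAT",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )


if __name__ == "__main__":
    run()
```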

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Role: Senior Database Engineer
Location: Hyderabad / Gurgaon / Noida
Start Date: As soon as possible

Key Responsibilities:
- Build data pipelines for optimal extraction, transformation, and loading from various data sources using SQL and cloud database technologies.
- Work with stakeholders (Executive, Product, Data, and Design teams) to support data-related technical issues.
- Collaborate with data and analytics experts to enhance data system functionality.
- Assemble large, complex data sets that meet business requirements.
- Analyze and improve existing SQL code for performance, security, and maintainability.
- Design and implement internal process improvements (automation, scalability).
- Unit test databases and perform bug fixes.
- Develop best practices for database design and development.
- Lead database projects across scrum teams.
- Support dashboard development through exploratory data analysis (desirable).

Key Requirements:
Experience: 8-12 years preferred.

Required Skills:
- Strong SQL experience, especially with PostgreSQL (cloud-hosted in AWS/Azure/GCP).
- Experience with cloud-based data warehouses such as Snowflake (preferred) or Azure Synapse.
- Proficiency in ETL/ELT tools such as IBM StreamSets, SnapLogic, or DBT.
- Knowledge of data modeling and OLAP systems.
- Deep understanding of databases, data marts, and enterprise systems.
- Expertise in data ingestion, cleaning, and de-duplication.
- Ability to fine-tune report queries and design indexes (see the sketch below).
- Familiarity with SQL security techniques (e.g., column-level encryption, TDE).
- Experience mapping source data into ER models (desirable).
- Adherence to database standards (naming conventions, architecture).
- Exposure to source control tools (Git, Azure DevOps).
- Understanding of Agile methodologies (Scrum, Kanban).
- Experience with NoSQL databases and real-time replication (desirable).
- Experience with CI/CD automation tools (desirable).
- Programming experience in Golang or Python and visualization tools (Power BI/Tableau) (desirable).

Personal Attributes:
- Strong communication skills.
- Ability to work in distributed teams.
- Capable of managing multiple timelines.
- Able to articulate data insights for business decisions.
- Comfortable with ambiguity and risk management.
- Able to explain complex concepts to non-data audiences.
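As a hedged illustration of the "fine-tune report queries and design indexes" requirement, the sketch below uses psycopg2 against a hypothetical PostgreSQL reporting table: it inspects the plan of a report query, then adds a partial index that matches the report's filter. Connection, table, and column names are assumptions, not part of the posting.

```python
# Illustrative only: tune a slow report query on an assumed "orders" table.
import psycopg2

REPORT_QUERY = """
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    WHERE status = 'completed' AND order_date >= %s
    GROUP BY customer_id
"""

conn = psycopg2.connect("dbname=reporting user=report_user")
conn.autocommit = True  # CREATE INDEX CONCURRENTLY cannot run inside a transaction
cur = conn.cursor()

# 1. Inspect the current plan to confirm the query is scanning the whole table.
cur.execute("EXPLAIN ANALYZE " + REPORT_QUERY, ("2024-01-01",))
for (plan_line,) in cur.fetchall():
    print(plan_line)

# 2. Add a partial index covering the report's filter and aggregated columns,
#    so the planner can satisfy the report without a full table scan.
cur.execute(
    """
    CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_completed_date
    ON orders (order_date, customer_id, amount)
    WHERE status = 'completed'
    """
)

cur.close()
conn.close()
```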

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

Requirements: You should have a minimum of 5 years of IT experience with a good understanding of analytics tools for effective data analysis, and prior experience working in production deployment and production support teams. Hands-on experience with Big Data tools such as Hadoop, Spark, Apache Beam, and Kafka is essential, along with proficiency in object-oriented/functional scripting languages such as Python, Java, C++, or Scala. Experience with data warehousing tools such as BQ, Redshift, Synapse, or Snowflake is preferred, together with expertise in ETL processes and data warehousing concepts. A strong understanding of both relational and non-relational databases such as MySQL, MS SQL Server, Postgres, MongoDB, and Cassandra is necessary; experience with cloud platforms like GCP and workflow management tools such as Apache Airflow is a plus.

Preferred qualifications: Knowledge of design best practices for OLTP and OLAP systems, participation in database and pipeline design, exposure to load testing methodologies, debugging pipelines, and handling delta loads, as well as experience in heterogeneous migration projects.

Roles and responsibilities: You will develop high-performance, scalable solutions using GCP for extracting, transforming, and loading big data, and design and build production-grade data solutions from ingestion to consumption using Java or Python. You will design and optimize data models on GCP data stores such as BigQuery, handle deployment processes efficiently, and optimize data pipelines for performance and cost in large-scale data lakes. Your tasks also include writing complex, optimized queries across large datasets, creating data processing layers, collaborating closely with Data Engineers to identify the right tools for delivering product features through POCs, interacting with business stakeholders, BAs, and other Data/ML engineers as a collaborative team player, and researching new use cases for existing data.
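Because the posting calls out Apache Airflow and BigQuery, here is a minimal sketch, under assumed project, bucket, and table names, of an Airflow DAG that lands files from Cloud Storage into a staging table and then builds a consumption-layer table with a SQL job. It is illustrative only, not the employer's actual pipeline, and assumes the Google provider package is installed.

```python
# Hedged sketch of a daily ingestion-to-consumption workflow; all names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_events_to_bigquery",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Land raw files from Cloud Storage into a staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw",
        bucket="my-landing-bucket",
        source_objects=["events/{{ ds }}/*.json"],
        destination_project_dataset_table="my-project.staging.events_raw",
        source_format="NEWLINE_DELIMITED_JSON",
        write_disposition="WRITE_TRUNCATE",
        autodetect=True,
    )

    # Transform the staging data into the consumption-layer table with a SQL job.
    build_consumption = BigQueryInsertJobOperator(
        task_id="build_consumption",
        configuration={
            "query": {
                "query": """
                    SELECT user_id, DATE(event_ts) AS event_date, SUM(amount) AS amount
                    FROM `my-project.staging.events_raw`
                    GROUP BY user_id, event_date
                """,
                "destinationTable": {
                    "projectId": "my-project",
                    "datasetId": "analytics",
                    "tableId": "daily_amounts",
                },
                "writeDisposition": "WRITE_APPEND",
                "useLegacySql": False,
            }
        },
    )

    load_raw >> build_consumption
```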

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

As an experienced IT professional with over 5 years of experience, you should have a good understanding of analytics tools to analyze data effectively. Your previous roles may have involved working in production deployment and production support teams. You must be familiar with Big Data tools such as Hadoop, Spark, Apache Beam, and Kafka, and your expertise should include object-oriented/functional scripting languages such as Python, Java, C++, and Scala. Experience with data warehousing tools such as BQ, Redshift, Synapse, or Snowflake is essential. You should also be well versed in ETL processes and have a strong understanding of relational and non-relational databases, including MySQL, MS SQL Server, Postgres, MongoDB, and Cassandra. Familiarity with cloud platforms such as AWS, GCP, and Azure is also required, along with experience in workflow management using tools like Apache Airflow.

In this role, you will develop high-performance, scalable solutions using GCP for extracting, transforming, and loading big data. You will design and build production-grade data solutions from ingestion to consumption using Java or Python, and optimize data models on GCP data stores such as BigQuery. You should be capable of handling the deployment process, optimizing data pipelines for performance and cost in large-scale data lakes, and writing complex queries across large data sets. Collaboration with Data Engineers to identify the right tools for delivering product features is essential, as is researching new use cases for existing data.

Preferred qualifications include awareness of design best practices for OLTP and OLAP systems, participation in designing the database and pipeline, exposure to load testing methodologies, debugging pipelines, handling delta loads, and experience in heterogeneous migration projects. Overall, you should be a collaborative team player who interacts effectively with business stakeholders, BAs, and other Data/ML engineers to drive innovation and deliver impactful solutions.
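One common way to handle the delta loads mentioned above is a MERGE from a staging table into the target table; the sketch below shows this with the google-cloud-bigquery client. It is a hedged illustration only; dataset, table, and column names are assumptions.

```python
# Illustrative delta load: apply a staging snapshot of changed rows to the target table.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

merge_sql = """
MERGE `my-project.analytics.customers` AS target
USING `my-project.staging.customers_delta` AS delta
ON target.customer_id = delta.customer_id
WHEN MATCHED AND delta.updated_at > target.updated_at THEN
  UPDATE SET name = delta.name, email = delta.email, updated_at = delta.updated_at
WHEN NOT MATCHED THEN
  INSERT (customer_id, name, email, updated_at)
  VALUES (delta.customer_id, delta.name, delta.email, delta.updated_at)
"""

job = client.query(merge_sql)  # start the MERGE job
job.result()                   # wait until the delta is applied
print(f"Rows affected: {job.num_dml_affected_rows}")
```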

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Wipro Limited is a leading technology services and consulting company dedicated to developing innovative solutions that address clients' most complex digital transformation needs. Our comprehensive range of consulting, design, engineering, and operational capabilities enables us to help clients achieve their most ambitious goals and build sustainable, future-ready businesses. With over 230,000 employees and business partners across 65 countries, we remain committed to supporting our customers, colleagues, and communities in an ever-evolving world.

We are currently seeking an individual with hands-on experience in data modeling for both OLTP and OLAP systems. The ideal candidate has a deep understanding of conceptual, logical, and physical data modeling, coupled with a strong, practically grounded grasp of indexing, partitioning, and data sharding. Experience identifying and mitigating factors that affect database performance for near-real-time reporting and application interaction is essential. Proficiency in at least one data modeling tool, preferably DBSchema, is required, and functional knowledge of the mutual fund industry would be beneficial. Familiarity with GCP databases such as AlloyDB, Cloud SQL, and BigQuery is preferred.

The role is based in our Chennai office, with mandatory on-site presence at the customer site five days per week. Cloud-PaaS-GCP-Google Cloud Platform is a mandatory skill set for this position, and the successful candidate should have 5-8 years of relevant experience.

Join us in reimagining Wipro as a modern digital transformation partner. We look for individuals inspired by reinvention - of themselves, their careers, and their skills - and we encourage continuous evolution, reflecting our commitment to adapt to the changing world around us. In a business driven by purpose, you have the freedom to shape your own reinvention. Realize your ambitions at Wipro. We welcome applications from individuals with disabilities. For more information, please visit www.wipro.com.
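As a small, hedged example of the physical-modeling concerns this role names (partitioning and clustering for near-real-time reporting on BigQuery), the following sketch creates a date-partitioned, clustered table with the BigQuery Python client. The mutual-fund-flavored table and column names are placeholders, not from the posting.

```python
# Illustrative physical model: partition by date, cluster by the most common filter column.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

schema = [
    bigquery.SchemaField("fund_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("nav_date", "DATE", mode="REQUIRED"),
    bigquery.SchemaField("nav", "NUMERIC"),
]

table = bigquery.Table("my-project.funds.daily_nav", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="nav_date",                      # partition pruning for date-bounded reports
)
table.clustering_fields = ["fund_id"]      # co-locate rows that reports filter on together

client.create_table(table, exists_ok=True)
```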

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Modeller specializing in GCP and cloud databases, you will play a crucial role in designing and optimizing data models for both OLTP and OLAP systems. Your expertise in cloud-based databases, data architecture, and modeling will be essential as you collaborate with engineering and analytics teams to support efficient operational systems and real-time reporting pipelines.

Responsibilities: You will design conceptual, logical, and physical data models tailored to OLTP and OLAP systems, develop and refine models that support performance-optimized cloud data pipelines, implement models in BigQuery, CloudSQL, and AlloyDB, and design schemas with indexing, partitioning, and data sharding strategies. Translating business requirements into scalable data architectures and schemas is a key aspect of the role, along with optimizing for near-real-time ingestion, transformation, and query performance. You will use tools like DBSchema for collaborative modeling and documentation, and create and maintain metadata and documentation around the models.

Required skills: Hands-on experience with GCP databases (BigQuery, CloudSQL, AlloyDB), a strong understanding of OLTP and OLAP systems, and proficiency in database performance tuning are essential. Familiarity with modeling tools such as DBSchema or ERwin, along with proficiency in SQL, schema definition, and normalization/denormalization techniques, is also expected.

Preferred skills: Functional knowledge of the mutual fund or BFSI domain, experience integrating with cloud-native ETL and data orchestration pipelines, and familiarity with schema version control and CI/CD in a data context.

Soft skills: Strong analytical and communication abilities, attention to detail, and a collaborative approach across engineering, product, and analytics teams are highly valued.

Joining this role will give you the opportunity to work on enterprise-scale cloud data architectures, drive performance-oriented data modeling for advanced analytics, and collaborate with high-performing cloud-native data teams.
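To illustrate one normalization/denormalization trade-off a data modeller in this role would weigh, the sketch below defines an OLAP fact table in BigQuery that embeds repeated child records as a STRUCT/ARRAY instead of joining a separate, normalized child table at query time. All names are assumptions made for illustration.

```python
# Hedged sketch: a denormalized fact table with nested, repeated order items.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

order_items = bigquery.SchemaField(
    "items", "RECORD", mode="REPEATED",  # child rows embedded directly in the fact row
    fields=[
        bigquery.SchemaField("sku", "STRING"),
        bigquery.SchemaField("quantity", "INT64"),
        bigquery.SchemaField("unit_price", "NUMERIC"),
    ],
)

schema = [
    bigquery.SchemaField("order_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("order_date", "DATE", mode="REQUIRED"),
    bigquery.SchemaField("customer_id", "STRING"),
    order_items,
]

table = bigquery.Table("my-project.analytics.orders_fact", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(field="order_date")
client.create_table(table, exists_ok=True)
```

Embedding the items avoids a join at query time, at the cost of rewriting the whole row when an item changes, which is the classic OLAP-versus-OLTP trade-off this role is asked to navigate.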

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
