
60 Partitioning Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Solaris Administrator, you will be responsible for the installation, implementation, customization, operation, recovery, and performance tuning of Solaris Operating Systems. Your role will involve installing and maintaining all Solaris server hardware and software systems, administering server performance and utilization, and ensuring availability. Additionally, you will be required to prepare program-level and user-level documentation as needed.

Your key responsibilities will include supporting infrastructure implementations, deployments, and technologies related to dynamic infrastructure platforms. You will participate in current state system analysis, requirements gathering, and documentation. Moreover, you will contribute to the creation of technical design/implementation documentation and assist in requirements understanding and issue resolution.

Furthermore, you will be involved in tasks such as maintaining and installing Oracle ZFS Storage, troubleshooting and maintaining Solaris Operating Systems (8, 9, 10, and 11), patching Solaris systems with Sun Cluster and VCS, configuring the Apache web server on Solaris and Linux, creating and extending volume groups and file systems, resolving sudo issues, working with VERITAS Volume Manager and Cluster, managing users and groups in NIS and LDAP servers, and installing, upgrading, and patching Solaris servers. You will also handle Solaris server decommissioning, VERITAS cluster monitoring, starting and stopping cluster services, moving resource groups across nodes, increasing file systems in cluster file systems, synchronizing cluster resources, and creating and deleting new cluster service groups and resources.

Your expertise should include Solaris server performance monitoring, kernel tuning, and troubleshooting. Additionally, you should have experience working with ticketing tools like Remedy and ManageNow, knowledge of OS clustering, partitioning, virtualization, and storage administration, integration with operating systems, and the ability to troubleshoot capacity and availability issues. You will collaborate with project teams to prepare components for production, provide support for ongoing platform infrastructure availability, and work on prioritized features for ongoing sprints.

In this role, you will be accountable for completing the work you lead and delivering quality work to the team. The position falls under the IT Support category with a salary as per market standards. The industry focus is on IT Services & Consulting within the functional area of IT & Information Security. This is a full-time contractual employment opportunity.

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

The job involves coordinating with all departments of the client to understand their requirements and functional specifications. You must have a strong knowledge of TSYS PRIME, SQL, and Oracle PL/SQL languages, as well as familiarity with APIs. Your responsibilities will include participating in various phases of the SDLC such as design, coding, code reviews, testing, and project documentation, while working closely with co-developers and other related departments.

Desired Skills and Qualifications:
- Strong knowledge of TSYS PRIME, the Oracle PL/SQL language, and APIs
- Good exposure to advanced Oracle database concepts like performance tuning, indexing, partitioning, and data modeling
- Responsibility for database-side development, implementation, and support
- Experience in solving daily service requests, incidents, and change requests
- Proficiency in code review, team management, effort estimation, and resource planning

This is a full-time position with a day shift schedule that requires proficiency in English. The work location is in person.

Posted 3 days ago

Apply

8.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Cadence is a pivotal leader in electronic design, leveraging over 30 years of computational software expertise. Our Intelligent System Design approach enables us to provide software, hardware, and IP solutions that bring design concepts to life. Our clientele comprises the most innovative companies globally, developing cutting-edge electronic products for diverse market applications such as consumer electronics, hyperscale computing, 5G communications, automotive, aerospace, industrial, and health sectors.

At Cadence, you will have the opportunity to work with the latest technology in a stimulating environment that fosters creativity, innovation, and meaningful contributions. Our employee-centric policies prioritize the well-being of our staff, career growth, continuous learning opportunities, and recognizing achievements tailored to individual needs. The "One Cadence One Team" culture encourages collaboration across teams to ensure customer success. We offer various avenues for learning and development based on your interests and requirements, alongside a diverse team of dedicated professionals committed to exceeding expectations daily.

We are currently seeking a Database Engineer with a minimum of 8 years of experience to join our team in Noida. The ideal candidate should possess expertise in both SQL and NoSQL databases, particularly PostgreSQL and Elasticsearch. A solid understanding of database architecture, performance optimization, and data modeling is essential. Proficiency in graph databases like JanusGraph and in-memory databases is advantageous. Strong skills in C++ and design patterns are required, with additional experience in Java and JS being desirable.

Key Responsibilities:
- Hands-on experience in PostgreSQL, including query tuning, indexing, partitioning, and replication.
- Proficiency in Elasticsearch, covering query DSL, indexing, and cluster management.
- Expertise in SQL and NoSQL databases, with the ability to determine the appropriate database type for specific requirements.
- Proven experience in database performance tuning, scaling, and troubleshooting.
- Familiarity with Object-Relational Mapping (ORM) is a plus.

If you are a proactive Database Engineer looking to contribute your skills to a dynamic team and work on challenging projects at the forefront of electronic design, we encourage you to apply and be part of our innovative journey at Cadence.
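
For candidates gauging the PostgreSQL depth this role mentions, the following is a minimal sketch (not taken from the posting) of declarative range partitioning plus a supporting index; the connection details, table, and column names are assumptions for illustration only.

```python
# Illustrative only: table, columns, and credentials are assumptions.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="appdb", user="app", password="secret")
conn.autocommit = True
cur = conn.cursor()

# Range-partition a large events table by month so time-bounded queries
# scan only the relevant partitions instead of the whole table.
cur.execute("""
    CREATE TABLE IF NOT EXISTS events (
        event_time  timestamptz NOT NULL,
        device_id   bigint NOT NULL,
        payload     jsonb
    ) PARTITION BY RANGE (event_time);
""")
cur.execute("""
    CREATE TABLE IF NOT EXISTS events_2024_01
        PARTITION OF events
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
""")

# A partitioned index on the parent cascades to every partition and keeps
# per-device, time-bounded lookups on an index scan.
cur.execute("CREATE INDEX IF NOT EXISTS idx_events_device_time ON events (device_id, event_time);")

cur.close()
conn.close()
```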

Posted 3 days ago

Apply

3.0 - 7.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Job Overview: We are seeking an exceptional Physical Verification Engineer to take a key role in our semiconductor design team. As a Block/Full-chip/Partition Physical Verification Engineer, you will be responsible for developing and implementing cutting-edge physical verification methodologies and flows for complex ASIC designs. You will collaborate closely with cross-functional teams to ensure the successful delivery of high-quality designs.

Responsibilities:
- Drive physical verification (DRC, Antenna, LVS, ERC) at cutting-edge FinFET technology nodes for various foundries.
- Perform physical verification of complex SoCs/cores/blocks: DRC, LVS, ERC, ESD, DFM, and tape-out.
- Work hands-on to solve critical design and execution issues related to physical verification and sign-off.
- Own physical verification and sign-off flows, methodologies, and execution for SoCs/cores.
- Good hands-on experience with Calibre, Virtuoso, etc.

Requirements:
- Bachelor's or Master's degree in Electrical Engineering or Electronics & Communications.
- Proficiency in industry-standard EDA tools from Cadence, Synopsys, and Mentor Graphics.
- Strong scripting skills in TCL, Python, or Perl for design automation and tool customization.
- Expertise in physical verification of block/partition/full-chip-level DRC, LVS, ERC, and DFM, and the tape-out process on cutting-edge nodes; experience and understanding of all phases of the IC design process from RTL to GDS2.
- Preferably worked on 3nm/5nm/7nm/12nm/14nm/16nm nodes at the major foundries.
- Experience debugging LVS issues at chip/block level with complex analog-mixed-signal IPs.
- Experience with low-power implementation (level-shifters, isolation cells, power domains/islands, substrate isolation, etc.).
- Experience in physical verification of I/O ring, corner cells, seal ring, RDL routing, bumps, and other full-chip components.
- Good understanding of CMOS/FinFET process and circuit design, base-layer-related DRCs, ERC rules, latch-up, etc.
- Experience with ERC rules and ESD rules is an added advantage.
- Outstanding communication and interpersonal skills, with the ability to collaborate effectively in a team environment.
- Proven ability to mentor junior engineers, fostering their professional growth and development.

Preferred qualifications:
- Experience with advanced process nodes (3nm, 5nm, 7nm, 10nm), including knowledge of FinFET technology.
- Proven track record with multiple successful final production tape-outs.
- Proven ability to independently deliver results, work hands-on, and guide/help peers deliver their tasks.
- Able to work under limited supervision and take complete accountability.
- Excellent written and verbal communication skills.
- Knowledge of handling various custom IPs such as PLL, Divider, SerDes, ADC, DAC, GPIO, and HSIO for PD integration and the associated physical verification challenges.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Job Description: As a Database Administrator in our Tech-Support department, you will play a crucial role in setting up, configuring, administering, and maintaining various production and development environments. These environments may consist of both relational databases such as SQL Server, PostgreSQL, and MySQL, as well as NoSQL databases like MongoDB or others. You will be based at our Noida office and should have 1-2 years of relevant experience in this field.

Your primary responsibility will be to collaborate closely with the tech team to design, build, and operate the database infrastructure. You will provide support to the tech team in identifying optimal solutions for data-related issues such as data modeling, reporting, and data retrieval. Additionally, you will work alongside deployment staff to address any challenges related to the database infrastructure effectively.

Requirements:
- Ideally, you should hold a BE/B.Tech degree from a reputed institute.
- Proficiency in SQL Server/PostgreSQL database administration, maintenance, and tuning is essential.
- Experience with database clustering, high availability, replication, backup, auto-recovery, and pooling (such as pgpool2) is required.
- Strong familiarity with logging and monitoring tools like Nagios, Prometheus, pgBadger, POWA, Datadog, etc., is preferred.
- Expertise in analyzing complex execution plans and optimizing queries is a must.
- Good understanding of various database concepts including indices, views, partitioning, aggregation, window functions, and caching.
- Up-to-date knowledge of the latest features in PostgreSQL versions 11/12 and above.
- Experience working with AWS and its database-related services like RDS is a necessity.
- Previous exposure to other databases like Oracle and MySQL will be advantageous.
- Familiarity with NoSQL technologies is a plus.

If you meet these requirements and are interested in this role, please share your updated profile with us at hr@techybex.com.
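
As a rough illustration of the execution-plan analysis this role emphasizes, the sketch below runs EXPLAIN ANALYZE on a hypothetical slow query and adds a composite index when the plan shows a sequential scan; the query, table, and connection details are assumptions, not part of the posting.

```python
# Illustrative only: the orders table, its columns, and credentials are assumptions.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="reporting", user="dba", password="secret")
conn.autocommit = True  # required for CREATE INDEX CONCURRENTLY
cur = conn.cursor()

slow_query = "SELECT order_id, total FROM orders WHERE customer_id = %s AND status = 'OPEN'"

# EXPLAIN (ANALYZE, BUFFERS) executes the query and returns the plan as rows of text.
cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + slow_query, (42,))
plan = [row[0] for row in cur.fetchall()]
print("\n".join(plan))

# If the plan reports a sequential scan on the table, a composite index on the
# filter columns usually turns the lookup into an index scan.
if any("Seq Scan on orders" in line for line in plan):
    cur.execute(
        "CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_customer_status "
        "ON orders (customer_id, status);"
    )

cur.close()
conn.close()
```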

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 12 Lacs

Jaipur

Work from Office

Job Summary: As a QA Engineer, you will help contribute to the overall company strategy by validating that the application meets design specifications and requirements. This includes iOS applications, Android applications, web applications, and any other digital-to-human interfaces. The products should be tested for functionality, performance, reliability, stability, and compatibility with the supported devices, browsers, and user interfaces.

Responsibilities:
- Take ownership of QA tasks for a number of projects
- Create clear, concise, and comprehensive test plans, test cases, and other QA documentation
- Create and maintain non-functional tests (ADA, security, performance, etc.)
- Work collaboratively within high-performance teams (e.g. product managers, developers, UI and UX designers) to champion product quality for our clients
- Perform QA technical testing tasks for cross-browser and device UI testing, API testing, and database testing
- Escalate critical issues, blockers, and risks in a timely manner to minimize the impact on project timelines
- Assess the quality health of a project to determine whether or not it is ready for release
- Document and track defects according to the company's standards in a bug tracking system
- Debug and troubleshoot defects to help our developers fix issues efficiently
- Participate in reviews of requirements, UX/UI designs, and other deliverables for testability
- Mentor and provide guidance to junior team members on Quality Assurance best practices, methodologies, and processes
- Requirement gathering: ability to understand and analyze requirements from business and technical perspectives
- Traceability matrix: create and maintain traceability matrices to ensure all requirements are covered by test cases
- Proficiency in techniques like equivalence partitioning, boundary value analysis, decision table testing, and state transition testing
- Proficiency in black box testing, white box testing, and exploratory testing

Qualifications:
- 4-6 years of cross-browser, UI/UX, and iOS/Android/desktop manual testing
- Proficiency with functional testing and experience with non-functional testing
- Proficient in API and database testing
- Deep understanding of software QA methodologies, tools, and processes
- Solid understanding of Agile/Scrum development principles
- Experience providing mentorship to other team members on QA best practices and methodologies
- Self-driven; you take ownership of tasks from beginning to completion
- Able to multitask and work on multiple projects simultaneously within a dynamic, fast-paced environment
- Detail-oriented and meticulous; you enjoy the fine attention to detail required to spot, prevent, and troubleshoot issues before our clients find them
- Experience with project and bug tracking tools such as Jira and Mantis
- Expertise in creating and maintaining QA documentation (test plans, test cases, status reports) and collecting metrics
- Receptive to learning and working with new technologies
- Ability to be assertive in communication

Nice to have:
- Experience creating and maintaining test automation scripts (preferably with JavaScript, Selenium WebDriver, JMeter, Cypress, Cucumber, and/or Appium)
- Java language knowledge
- Exposure to security testing
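
To picture the equivalence partitioning and boundary value analysis techniques listed above, here is a small hedged sketch using pytest; the discount rule, its thresholds, and the function name are hypothetical, chosen only to show how classes and boundaries map to test cases.

```python
# Illustrative only: discount_rate and its thresholds are hypothetical.
import pytest

def discount_rate(order_value: float) -> float:
    """Hypothetical rule: 0% below 1000, 5% from 1000 to 4999.99, 10% at 5000+."""
    if order_value < 0:
        raise ValueError("order value cannot be negative")
    if order_value < 1000:
        return 0.0
    if order_value < 5000:
        return 0.05
    return 0.10

# One representative value per equivalence class, plus a value on each boundary.
@pytest.mark.parametrize("order_value, expected", [
    (500, 0.0),        # class: below the first threshold
    (999.99, 0.0),     # boundary: just under 1000
    (1000, 0.05),      # boundary: exactly 1000
    (4999.99, 0.05),   # boundary: just under 5000
    (5000, 0.10),      # boundary: exactly 5000
    (12000, 0.10),     # class: well above 5000
])
def test_discount_partitions(order_value, expected):
    assert discount_rate(order_value) == expected

def test_negative_value_rejected():
    # Invalid-input partition: negative order values must be rejected.
    with pytest.raises(ValueError):
        discount_rate(-1)
```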

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As an Oracle Data Integrator (ODI) Professional at YASH Technologies, you will play a key role in providing prompt and effective support, maintenance, and development on an OBIA-based analytics data warehouse using ODI as the underlying ETL tool. Your responsibilities will include implementation, development, and maintenance of the ODI environment, data warehouse design, dimensional modeling, ETL development and support, as well as ETL performance tuning.

You will be responsible for solution design, implementation, migration, and support in the Oracle BI tool stack, particularly ODI and SQL. Your tasks will involve ODI development in an OBIA environment, enhancements, support, and performance tuning of SQL programs. Additionally, you will be involved in data warehouse design, development, and maintenance using a Star Schema (dimensional modeling). Your role will also encompass production support of daily running ETL loads, monitoring, troubleshooting failures, bug fixing across environments, and working with various data sources including Oracle, CRM, Cloud, flat files, SharePoint, and other non-Oracle systems. Experience in performance tuning of mappings in ODI and SQL query tuning will be essential.

To succeed in this role, you should have 5-7+ years of relevant experience working in OBIA with ODI as the ETL tool in a BIAPPS environment. Strong written and oral communication skills, the ability to work in a demanding user environment, and knowledge of tools like Serena Business Manager and ServiceNow are crucial. A B.Tech/MCA qualification is required, and competencies such as being tech-savvy, effective communication, optimizing work processes, and cultivating innovation are essential.

At YASH, you will have the opportunity to create a career path in an inclusive team environment that supports continuous learning and development. Our Hyperlearning workplace is grounded on principles such as flexible work arrangements, agile self-determination, trust, transparency, and stable employment with an ethical corporate culture. Join us at YASH Technologies and be part of a team that fosters positive changes in an ever-evolving virtual world.

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

Maharashtra

On-site

The Oracle PL/SQL Developer - TSYS Prime position in Mumbai requires 10 to 15 years of experience in the banking domain with TSYS PRIME experience. You must possess sound knowledge of TSYS PRIME and the Oracle PL/SQL language, along with knowledge of APIs. Your responsibilities will include participating in all phases of the SDLC, such as design, coding, code reviews, testing, and project documentation. You will also be required to coordinate with co-developers and other related departments.

Desired skills and qualifications for this role include a strong understanding of TSYS PRIME, Oracle PL/SQL, and APIs. You should have good exposure to advanced Oracle database concepts like performance tuning, indexing, partitioning, and data modeling. Additionally, you will be responsible for database-side development, implementation, and support, including solving daily service requests, incidents, and change requests. Experience in code review, team management, effort estimation, and resource planning will be beneficial for this role.

If you are interested in this position, please apply by sending your resume to hr@techplusinfotech.com.

Posted 1 week ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Bengaluru, Karnataka, India

On-site

Job Description

Job Title: Data Modeller
Experience: 6+ Years
Location: Bangalore
Work Mode: Onsite

Job Role: We are seeking a skilled Data Modeller with expertise in designing data models for both OLTP and OLAP systems. The ideal candidate will have deep knowledge of data modelling principles and a strong understanding of database performance optimization, especially in near-real-time reporting environments. Prior experience with GCP databases and data modelling tools is essential.

Responsibilities:
- Design and implement data models (Conceptual, Logical, and Physical) for complex business requirements
- Develop scalable OLTP and OLAP models to support enterprise data needs
- Optimize database performance through effective indexing, partitioning, and data sharding techniques
- Work closely with development and analytics teams to ensure alignment of models with application and reporting needs
- Use data modelling tools like Erwin, DBSchema, or similar to create and maintain models
- Implement best practices for data quality, governance, and consistency across systems
- Leverage GCP database solutions such as AlloyDB, CloudSQL, and BigQuery
- Collaborate with business stakeholders, especially within the mutual fund domain (preferred), to understand data requirements

Requirements:
- 6+ years of hands-on experience in data modelling for OLTP and OLAP systems
- Strong command over data modelling fundamentals (Conceptual, Logical, Physical)
- In-depth knowledge of indexing, partitioning, and data sharding strategies
- Experience with real-time and near-real-time reporting systems
- Proficiency in data modelling tools, preferably DBSchema or Erwin
- Familiarity with GCP databases like AlloyDB, CloudSQL, and BigQuery
- Functional understanding of the mutual fund industry is a plus
- Must be willing to work from the Chennai office; presence is mandatory

Technical Skills: Data Modelling (Conceptual, Logical, Physical), OLTP, OLAP, Indexing, Partitioning, Data Sharding, Database Performance Tuning, Real-Time/Near-Real-Time Reporting, DBSchema, Erwin, AlloyDB, CloudSQL, BigQuery.
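
As a minimal sketch of the BigQuery partitioning and clustering work this role describes (the project, dataset, table, and column names are assumptions, and the dataset is assumed to exist), a physical model might be created with the google-cloud-bigquery client like this:

```python
# Illustrative only: project, dataset, table, and columns are assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

schema = [
    bigquery.SchemaField("txn_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("fund_code", "STRING"),
    bigquery.SchemaField("txn_ts", "TIMESTAMP", mode="REQUIRED"),
    bigquery.SchemaField("amount", "NUMERIC"),
]

table = bigquery.Table("my-analytics-project.olap.fund_transactions", schema=schema)

# Partition by day on the event timestamp and cluster by fund_code so
# time-bounded, per-fund queries prune most of the table.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="txn_ts",
)
table.clustering_fields = ["fund_code"]

table = client.create_table(table, exists_ok=True)
print(f"Created {table.full_table_id}, partitioned on {table.time_partitioning.field}")
```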

Posted 1 week ago

Apply

3.0 - 8.0 years

6 - 10 Lacs

Gurugram

Work from Office

Understands the process flow and its impact on the project module outcome. Works on coding assignments for specific technologies based on the project requirements and available documentation. Debugs basic software components and identifies code defects. Focuses on building depth in project-specific technologies and is expected to develop domain knowledge along with technical skills. Communicates effectively with team members, project managers, and clients as required. A proven high performer and team player, with the ability to take the lead on projects.

Responsibilities:
- Design and create S3 buckets and folder structures (raw, cleansed_data, output, script, temp-dir, spark-ui)
- Develop AWS Lambda functions (Python/Boto3) to download Bhav Copy via REST API and ingest it into S3
- Author and maintain AWS Glue Spark jobs to partition data by scrip, year, and month, and to convert CSV to Parquet with Snappy compression
- Configure and run AWS Glue Crawlers to populate the Glue Data Catalog
- Write and optimize AWS Athena SQL queries to generate business-ready datasets
- Monitor, troubleshoot, and tune data workflows for cost and performance
- Document architecture, code, and operational runbooks
- Collaborate with analytics and downstream teams to understand requirements and deliver SLAs

Technical Skills:
- 3+ years of hands-on experience with AWS data services (S3, Lambda, Glue, Athena)
- PostgreSQL basics
- Proficient in SQL and data partitioning strategies
- Experience with Parquet file formats and compression techniques (Snappy)
- Ability to configure Glue Crawlers and manage the AWS Glue Data Catalog
- Understanding of serverless architecture and best practices in security, encryption, and cost control
- Good documentation, communication, and problem-solving skills

Qualifications:
- 3-5 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
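
The Glue step above (partition by scrip, year, and month; CSV to Parquet with Snappy) can be pictured with the minimal PySpark sketch below; the S3 paths and the scrip/trade_date column names are assumptions rather than the client's actual schema.

```python
# Illustrative only: bucket names, paths, and column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bhavcopy-csv-to-parquet").getOrCreate()

# Read the raw Bhav Copy CSV files ingested into the raw zone.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("s3://my-data-lake/raw/bhavcopy/"))

# Derive year and month from the trade date so they can act as partition columns.
cleansed = (raw
            .withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd"))
            .withColumn("year", F.year("trade_date"))
            .withColumn("month", F.month("trade_date")))

# Write Parquet with Snappy compression, partitioned by scrip, year, and month.
(cleansed.write
    .mode("overwrite")
    .option("compression", "snappy")
    .partitionBy("scrip", "year", "month")
    .parquet("s3://my-data-lake/cleansed_data/bhavcopy/"))
```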

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Wipro Limited is a leading technology services and consulting company dedicated to developing innovative solutions that cater to the most complex digital transformation needs of clients. Our comprehensive range of consulting, design, engineering, and operational capabilities enables us to assist clients in achieving their most ambitious goals and establishing sustainable, future-ready businesses. With a global presence of over 230,000 employees and business partners spanning 65 countries, we remain committed to supporting our customers, colleagues, and communities in navigating an ever-evolving world.

We are currently seeking an individual with hands-on experience in data modeling for both OLTP and OLAP systems. The ideal candidate should possess a deep understanding of conceptual, logical, and physical data modeling, coupled with a robust grasp of indexing, partitioning, and data sharding, supported by practical experience. Experience in identifying and mitigating factors impacting database performance for near-real-time reporting and application interaction is essential. Proficiency in at least one data modeling tool, preferably DBSchema, is required. Additionally, functional knowledge of the mutual fund industry would be beneficial. Familiarity with GCP databases such as AlloyDB, CloudSQL, and BigQuery is preferred. The role demands willingness to work from our Chennai office, with a mandatory on-site presence at the customer site five days per week. Cloud-PaaS-GCP-Google Cloud Platform is a mandatory skill set for this position. The successful candidate should have 5-8 years of relevant experience and should be prepared to contribute to the reimagining of Wipro as a modern digital transformation partner.

We are looking for individuals who are inspired by reinvention - of themselves, their careers, and their skills. At Wipro, we encourage continuous evolution, reflecting our commitment to adapt to the changing world around us. Join us in a business driven by purpose, where you have the freedom to shape your own reinvention. Realize your ambitions at Wipro. We welcome applications from individuals with disabilities. For more information, please visit www.wipro.com.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

Job Description: We are looking for a skilled PySpark Developer with 4-5 or 2-3 years of experience to join our team. As a PySpark Developer, you will be responsible for developing and maintaining data processing pipelines using PySpark, Apache Spark's Python API. You will work closely with data engineers, data scientists, and other stakeholders to design and implement scalable and efficient data processing solutions. A Bachelor's or Master's degree in Computer Science, Data Science, or a related field is required. The ideal candidate should have strong expertise in the Big Data ecosystem, including Spark, Hive, Sqoop, HDFS, MapReduce, Oozie, YARN, HBase, and NiFi. The candidate should be below 35 years of age.

You should have experience in designing, developing, and maintaining PySpark data processing pipelines to process large volumes of structured and unstructured data, and should collaborate with data engineers and data scientists to understand data requirements and design efficient data models and transformations. Optimizing and tuning PySpark jobs for performance, scalability, and reliability is a key responsibility. Implementing data quality checks, error handling, and monitoring mechanisms to ensure data accuracy and pipeline robustness is crucial. The candidate should also develop and maintain documentation for PySpark code, data pipelines, and data workflows.

Experience in developing production-ready Spark applications using Spark RDD APIs, DataFrames, Datasets, Spark SQL, and Spark Streaming is required. Strong experience with Hive bucketing and partitioning, as well as writing complex Hive queries using analytical functions, is essential. Knowledge of writing custom UDFs in Hive to support custom business requirements is a plus.

If you meet the above qualifications and are interested in this position, please email your resume, mentioning the position applied for in the subject line, to: careers@cdslindia.com.
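
To make the Hive partitioning and bucketing requirement concrete, here is a minimal sketch of a Hive-enabled Spark session writing a partitioned, bucketed table; the database, table, paths, and column names are assumptions for illustration.

```python
# Illustrative only: database, table, paths, and columns are assumptions.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-partitioning-demo")
         .enableHiveSupport()
         .getOrCreate())

spark.sql("CREATE DATABASE IF NOT EXISTS analytics")

trades = spark.read.parquet("/data/cleansed/trades")

# Partition by trade_date so queries on a single day read one directory, and
# bucket by account_id so joins/aggregations on account_id avoid a full shuffle.
(trades.write
    .mode("overwrite")
    .partitionBy("trade_date")
    .bucketBy(16, "account_id")
    .sortBy("account_id")
    .saveAsTable("analytics.trades_bucketed"))

# Partition pruning: a filter on the partition column reads only that partition.
spark.sql("""
    SELECT account_id, SUM(amount) AS total_amount
    FROM analytics.trades_bucketed
    WHERE trade_date = '2024-06-28'
    GROUP BY account_id
""").show()
```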

Posted 1 week ago

Apply

7.0 - 12.0 years

12 - 18 Lacs

Pune, Bengaluru

Hybrid

> Strong programming expertise in PySpark and Python.
> Solid understanding of Spark internals, DAG optimization, partitioning, broadcast joins, etc.
> Hands-on experience with one or more cloud platforms.
> Experience with API integrations.

Required Candidate profile: The ideal candidate has strong expertise in PySpark optimization, API integration, and big data ingestion using AWS, GCP, or Azure, along with a solid foundation in SQL.
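
As a hedged sketch of the Spark tuning points listed above (the data paths and column names are assumptions), repartitioning the large side of a join and broadcasting the small side looks like this:

```python
# Illustrative only: paths and column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-tuning-demo").getOrCreate()

events = spark.read.parquet("s3://bucket/events/")        # large fact data
dim_users = spark.read.parquet("s3://bucket/dim_users/")  # small lookup table

# Repartition the large side on the join key so downstream stages stay balanced.
events = events.repartition(200, "user_id")

# broadcast() hints Spark to ship the small table to every executor,
# replacing a shuffle join with a broadcast hash join.
enriched = events.join(broadcast(dim_users), on="user_id", how="left")

# Persist the result partitioned by date for cheap time-bounded reads later.
enriched.write.mode("overwrite").partitionBy("event_date").parquet("s3://bucket/enriched/")
```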

Posted 1 week ago

Apply

3.0 - 8.0 years

6 - 10 Lacs

Gurugram

Work from Office

Role Description

Understands the process flow and its impact on the project module outcome. Works on coding assignments for specific technologies based on the project requirements and available documentation. Debugs basic software components and identifies code defects. Focuses on building depth in project-specific technologies and is expected to develop domain knowledge along with technical skills. Communicates effectively with team members, project managers, and clients as required. A proven high performer and team player, with the ability to take the lead on projects.

Responsibilities:
- Design and create S3 buckets and folder structures (raw, cleansed_data, output, script, temp-dir, spark-ui)
- Develop AWS Lambda functions (Python/Boto3) to download Bhav Copy via REST API and ingest it into S3
- Author and maintain AWS Glue Spark jobs to partition data by scrip, year, and month, and to convert CSV to Parquet with Snappy compression
- Configure and run AWS Glue Crawlers to populate the Glue Data Catalog
- Write and optimize AWS Athena SQL queries to generate business-ready datasets
- Monitor, troubleshoot, and tune data workflows for cost and performance
- Document architecture, code, and operational runbooks
- Collaborate with analytics and downstream teams to understand requirements and deliver SLAs

Technical Skills:
- 3+ years of hands-on experience with AWS data services (S3, Lambda, Glue, Athena)
- PostgreSQL basics
- Proficient in SQL and data partitioning strategies
- Experience with Parquet file formats and compression techniques (Snappy)
- Ability to configure Glue Crawlers and manage the AWS Glue Data Catalog
- Understanding of serverless architecture and best practices in security, encryption, and cost control
- Good documentation, communication, and problem-solving skills

Qualifications:
- 3-5 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
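
To complement the Spark sketch shown under the earlier posting of this role, here is a hedged illustration of the Glue Crawler and Athena steps using boto3; the crawler name, database, table, and S3 locations are placeholders, not the client's actual resources.

```python
# Illustrative only: crawler name, database, table, and S3 paths are placeholders.
import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")
athena = boto3.client("athena", region_name="ap-south-1")

# Run the crawler that registers the partitioned Parquet data in the Glue Data Catalog.
glue.start_crawler(Name="bhavcopy-cleansed-crawler")

# Query the catalogued table with Athena; filtering on partition columns keeps the scan small.
query = """
    SELECT scrip, year, month, AVG(close_price) AS avg_close
    FROM bhavcopy_cleansed
    WHERE year = 2024 AND month = 6
    GROUP BY scrip, year, month
"""
run = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "market_data"},
    ResultConfiguration={"OutputLocation": "s3://my-data-lake/output/athena/"},
)

# Poll until the query finishes, then print the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=run["QueryExecutionId"])
    status = state["QueryExecution"]["Status"]["State"]
    if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if status == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=run["QueryExecutionId"])
    for row in rows["ResultSet"]["Rows"][1:]:  # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```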

Posted 2 weeks ago

Apply

2.0 - 4.0 years

3 - 6 Lacs

Pune

Work from Office

Key Purpose of this Role: BiofuelCircle's products and services are used by businesses and individuals in the bioenergy supply chain: from large industries to rural enterprises, transporters, service providers, and farmers. The Database Developer will be responsible for designing, developing, managing, and enhancing the database as per feature requirements, to ensure stability, reliability, and performance.

What will you do every day:
- Collaborate with the software development team to write, debug, optimise, and fine-tune SQL queries, stored procedures (SPs), functions, and views
- Analyse slow-performing queries and improve execution time using indexing, partitioning, and query refactoring
- Monitor and troubleshoot deadlocks, blocking, and performance bottlenecks in the database
- Implement query execution plan analysis and similar tooling to enhance database performance
- Advise the software development team on best practices in database design, indexing strategies, and data normalisation
- Conduct database profiling and performance audits to proactively improve system efficiency
- Maintain data integrity, security, and backup strategies

The ideal candidate profile:
- You have at least 3 years of experience in SQL development and performance tuning
- You have hands-on experience with query optimisation techniques (including execution plans, indexing, caching, partitioning, etc.)
- You are analytical, with demonstrable problem-solving skills, and can troubleshoot performance issues arising from locks, deadlocks, and long-running queries
- You have strong knowledge of MS SQL and database management concepts
- You are comfortable writing complex SQL queries, stored procedures, triggers, and functions
- You are familiar with SQL Server Profiler, DMVs, or similar performance monitoring tools
- You do not get overwhelmed by large datasets and high-traffic database environments
- You can coordinate, follow up, follow through, and drive matters to closure proactively
- You take ownership and accountability and don't need a manager reminding you of tasks or deadlines
- You can work with cross-functional and remote teams
- Certification in or knowledge of managing cloud databases (Azure SQL) will give you an advantage over other applicants
- Knowledge of ETL processes and data warehousing will also give you an advantage over other applicants

To Apply: https://app.dover.com/apply/BiofuelCircle/0bcfe83e-40b6-4549-85a0-b41d26ebb945?rs=56176124
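
As a rough sketch of the DMV-based tuning work this role describes (not taken from the posting), the snippet below lists the most expensive statements by average elapsed time from SQL Server's cached query statistics; the connection string values are placeholders.

```python
# Illustrative only: server, database, and credentials are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=mydb;UID=dbuser;PWD=placeholder"
)
cursor = conn.cursor()

# Top statements by average elapsed time, extracted from the plan cache DMVs.
cursor.execute("""
    SELECT TOP 10
        qs.execution_count,
        qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                  ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1
        ) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY avg_elapsed_us DESC;
""")

for execution_count, avg_elapsed_us, statement_text in cursor.fetchall():
    print(f"{avg_elapsed_us:>12} us avg | {execution_count:>8} runs | {statement_text[:80]}")

cursor.close()
conn.close()
```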

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You are a Database Performance & Data Modeling Specialist with a primary focus on optimizing schema structures, tuning SQL queries, and ensuring that data models are well-prepared for high-volume, real-time systems. Your responsibilities include designing data models that balance performance, flexibility, and scalability, conducting performance benchmarking to identify bottlenecks and propose improvements, analyzing slow queries to recommend indexing, denormalization, or schema revisions, monitoring query plans, memory usage, and caching strategies for cloud databases, and collaborating with developers and analysts to optimize application-to-database workflows.

You must possess strong experience in database performance tuning, especially in GCP platforms like BigQuery, CloudSQL, and AlloyDB. Proficiency in schema refactoring, partitioning, clustering, and sharding techniques is essential. Familiarity with profiling tools, slow query logs, and GCP monitoring solutions is required, along with SQL optimization skills including query rewriting and execution plan analysis. Preferred skills include a background in mutual fund or high-frequency financial data modeling, and hands-on experience with relational databases like PostgreSQL and MySQL, distributed caching, materialized views, and hybrid model structures.

Soft skills that are crucial for this role include being precision-driven with an analytical mindset, being a clear communicator with attention to detail, and possessing strong problem-solving and troubleshooting abilities. By joining this role, you will have the opportunity to shape high-performance data systems from the ground up, play a critical role in system scalability and responsiveness, and work with high-volume data in a cloud-native enterprise setting.
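
One way to picture the query-plan and partition-pruning checks this role mentions: a BigQuery dry run reports the bytes a query would scan without executing it, which quickly shows whether a filter on the partition column is actually pruning. The project, table, and columns below are assumptions.

```python
# Illustrative only: project, table, and columns are assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

query = """
    SELECT fund_code, SUM(amount) AS total
    FROM `my-analytics-project.olap.fund_transactions`
    WHERE txn_ts >= TIMESTAMP('2024-06-01')   -- filter on the partition column
    GROUP BY fund_code
"""

# dry_run estimates cost without running the query or using cached results.
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(query, job_config=job_config)

scanned_gb = job.total_bytes_processed / 1024 ** 3
print(f"Dry run: {scanned_gb:.2f} GB would be scanned")
# If this is close to the full table size, the predicate is not pruning partitions
# and the schema or the filter needs revisiting.
```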

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Architect specializing in OLTP & OLAP Systems, you will play a crucial role in designing, optimizing, and governing data models for both OLTP and OLAP environments. Your responsibilities will include architecting end-to-end data models across different layers, defining conceptual, logical, and physical data models, and collaborating closely with stakeholders to capture functional and performance requirements. You will need to optimize database structures for real-time and analytical workloads, enforce data governance, security, and compliance best practices, and enable schema versioning, lineage tracking, and change control. Additionally, you will review query plans and indexing strategies to enhance performance.

To excel in this role, you must possess a deep understanding of OLTP and OLAP systems architecture, along with proven experience in GCP databases such as BigQuery, CloudSQL, and AlloyDB. Your expertise in database tuning, indexing, sharding, and normalization/denormalization will be critical, as well as proficiency in data modeling tools like DBSchema, ERWin, or equivalent. Familiarity with schema evolution, partitioning, and metadata management is also required.

Experience in the BFSI or mutual fund domain, knowledge of near real-time reporting and streaming analytics architectures, and familiarity with CI/CD for database model deployments are preferred skills that will set you apart. Strong communication, stakeholder management, strategic thinking, and the ability to mentor data modelers and engineers are essential soft skills for success in this position. By joining our team, you will have the opportunity to own the core data architecture for a cloud-first enterprise, bridge business goals with robust data design, and work with modern data platforms and tools. If you are looking to make a significant impact in the field of data architecture, this role is perfect for you.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are looking for a Data Modelling Consultant with 6 to 9 years of experience to work in our Chennai office. As a Data Modelling Consultant, your role will involve providing end-to-end modeling support for OLTP and OLAP systems hosted on Google Cloud. Your responsibilities will include designing and validating conceptual, logical, and physical models for cloud databases, translating requirements into efficient schema designs, and supporting data model reviews, tuning, and implementation. You will also guide teams on best practices for schema evolution, indexing, and governance to enable usage of models in real-time applications and analytics platforms.

To succeed in this role, you must have strong experience in modeling across OLTP and OLAP systems, hands-on experience with GCP tools like BigQuery, CloudSQL, and AlloyDB, and the ability to understand business rules and translate them into scalable structures. Additionally, familiarity with partitioning, sharding, materialized views, and query optimization is essential. Preferred skills for this role include experience with BFSI or financial domain data schemas and familiarity with modeling methodologies and standards such as 3NF and star schema. Soft skills like excellent stakeholder communication, collaboration, strategic thinking, and attention to scalability are also important.

Joining this role will allow you to deliver advisory value across critical data initiatives, influence the modeling direction for a data-driven organization, and be at the forefront of GCP-based enterprise data transformation.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Cloud Data Architect specializing in BigQuery and CloudSQL at our Chennai office, you will play a crucial role in leading the design and implementation of scalable, secure, and high-performing data architectures using Google Cloud technologies. Your expertise will be essential in shaping architectural direction and ensuring that data solutions meet enterprise-grade standards.

Your responsibilities will include designing data architectures that align with performance, cost-efficiency, and scalability needs, implementing data models, security controls, and access policies across GCP platforms, leading cloud database selection, schema design, and tuning for analytical and transactional workloads, collaborating with DevOps and DataOps teams to deploy and manage data environments, ensuring best practices for data governance, cataloging, and versioning, and enabling real-time and batch integrations using GCP-native tools.

To excel in this role, you must possess deep knowledge of BigQuery, CloudSQL, and the GCP data ecosystem, along with strong experience in schema design, partitioning, clustering, and materialized views. Hands-on experience implementing data encryption, IAM policies, and VPC configurations is crucial, as well as an understanding of hybrid and multi-cloud data architecture strategies and data lifecycle management. Proficiency in GCP cost optimization is also required. Preferred skills for this role include experience with AlloyDB, Firebase, or Spanner, familiarity with LookML, dbt, or DAG-based orchestration tools, and exposure to the BFSI domain or financial services architecture.

In addition to technical skills, soft skills such as visionary thinking with practical implementation ability, strong communication, and cross-functional leadership are highly valued. Previous experience guiding data strategy in enterprise settings will be advantageous. Joining our team will give you the opportunity to own data architecture initiatives in a cloud-native ecosystem, drive innovation through scalable and secure GCP designs, and collaborate with forward-thinking data and engineering teams.

Skills required for this role include IAM policies, Spanner, schema design, data architecture, the GCP data ecosystem, dbt, GCP cost optimization, AlloyDB, data encryption, data lifecycle management, BigQuery, LookML, VPC configurations, partitioning, clustering, materialized views, DAG-based orchestration tools, Firebase, and CloudSQL.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

10 - 20 Lacs

Gurugram

Work from Office

About Intellismith: Intellismith is a dynamic HR service and technology startup. Our mission is to tackle India's employability challenges head-on. Currently, we operate two key lines of business: recruiting and outsourcing. With teams based in Noida, Chennai, Mumbai, and Bangalore, we collaborate with top brands in the BFSI and IT sectors. As a leading outsourcing partner, we are hiring a Data Analyst to work on a project for our client, which is the largest provider of telecoms and mobile money services in 14 countries spanning Sub-Saharan, Central, and Western Africa.

Job Details:
- Experience Required: 4+ years in SQL, Big Data, Hive, and databases
- Qualification: BE/B.Tech/Graduation in a computer-related field
- Location: Gurugram (WFO)
- Notice Period: Immediate to 15 days (candidates with a notice period of less than 30 days are preferred)
- Primary Skills: RDBMS, Structured Query Language (SQL), Hive tables and queries, Big Data ecosystem

SQL Developer Responsibilities:
- Designing, developing, and maintaining databases
- Writing complex SQL queries for data retrieval and manipulation on RDBMS and the Big Data ecosystem
- Optimizing database performance and ensuring data integrity
- Building appropriate and useful reporting deliverables
- Analyzing existing SQL queries for performance improvements
- Troubleshooting and resolving database-related issues
- Collaborating with cross-functional teams to gather requirements and implement solutions
- Creating and maintaining database documentation
- Implementing and maintaining database security measures

SQL Developer Qualifications:
- Strong proficiency in SQL and database concepts
- Good experience working with Hive tables, Trino queries, and the Big Data ecosystem for data retrieval
- Experience with database development tools and technologies like Oracle, PostgreSQL, etc.
- Familiarity with performance tuning and query optimization
- Knowledge of data modeling and database design principles

Posted 2 weeks ago

Apply

3.0 - 5.0 years

2 - 5 Lacs

Mumbai, Maharashtra, India

On-site

Key Responsibilities:

IBM Data Management Administration: Administer and support IBM Db2 and other IBM data management systems (e.g., InfoSphere, IBM Netezza). Ensure database performance and reliability by optimizing database design, data models, queries, and indexes. Perform installation, configuration, and patching of IBM data management tools and databases. Monitor database health and implement proactive solutions for backup, disaster recovery, and high availability.

Database Performance Optimization: Use IBM tools and technologies to tune database performance, focusing on response time, query execution time, and overall system throughput. Identify and resolve database performance bottlenecks through the use of indexing, query optimization, and partitioning strategies. Conduct regular performance assessments and implement improvements based on system load and usage trends.

Data Modeling and Data Management: Work closely with business and data teams to design and implement logical and physical data models to meet business requirements. Ensure the integrity and consistency of data across all systems through proper data governance and data quality management practices. Implement and maintain data transformation, integration, and ETL (Extract, Transform, Load) processes using IBM tools like InfoSphere DataStage.

Backup and Disaster Recovery: Design and manage backup strategies for IBM data systems, ensuring business continuity. Configure and validate disaster recovery and business continuity processes, minimizing the risk of data loss. Regularly test recovery procedures to ensure quick and efficient restoration of services.

Security and Compliance: Implement security best practices within IBM databases, including access control, user permissions, encryption, and audit logging. Ensure compliance with industry-specific data security and privacy regulations (e.g., GDPR, HIPAA). Perform data masking and data anonymization as part of sensitive data management processes.

Automation and Scripting: Develop and maintain scripts for automating routine database management tasks using Shell, Python, PowerShell, or IBM-specific tools. Integrate automation solutions for backup, monitoring, patching, and reporting tasks to improve operational efficiency.

Troubleshooting and Support: Provide technical support for issues related to IBM Data Management systems, including performance issues, outages, and system upgrades. Troubleshoot complex database issues related to data access, integrity, and application compatibility. Assist development teams with resolving SQL performance and database-related issues.

Documentation and Reporting: Maintain detailed and up-to-date documentation for database configurations, procedures, and IBM Data Management systems. Generate periodic reports on database performance, health, and security compliance. Create and maintain operational guides for end-users, developers, and other stakeholders.

Collaboration and Knowledge Sharing: Collaborate with cross-functional teams, including development, data analytics, and operations, to ensure smooth integration and data flow across platforms. Mentor and provide training for junior database administrators and team members on best practices for IBM data management. Share insights and knowledge about new features, best practices, and tools within the IBM data ecosystem.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Data Engineering, or a related field
- 3-5 years of experience in IBM Data Management, including IBM Db2, InfoSphere, Netezza, or similar IBM data technologies
- Strong understanding of database management, including backup/recovery, performance tuning, and high availability concepts
- Proficiency in SQL for querying, scripting, and troubleshooting
- Experience in data integration and ETL processes using IBM tools (e.g., InfoSphere DataStage)
- Familiarity with data governance, data security, and compliance standards in database management
- Experience in Linux/Unix and Windows environments for administering databases
- Solid understanding of database indexing, query optimization, and partitioning

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Modeller specializing in GCP and Cloud Databases, you will play a crucial role in designing and optimizing data models for both OLTP and OLAP systems. Your expertise in cloud-based databases, data architecture, and modeling will be essential in collaborating with engineering and analytics teams to ensure efficient operational systems and real-time reporting pipelines.

You will be responsible for designing conceptual, logical, and physical data models tailored for OLTP and OLAP systems. Your focus will be on developing and refining models that support performance-optimized cloud data pipelines, implementing models in BigQuery, CloudSQL, and AlloyDB, as well as designing schemas with indexing, partitioning, and data sharding strategies. Translating business requirements into scalable data architecture and schemas will be a key aspect of your role, along with optimizing for near real-time ingestion, transformation, and query performance. You will utilize tools like DBSchema for collaborative modeling and documentation while creating and maintaining metadata and documentation around models.

In terms of required skills, hands-on experience with GCP databases (BigQuery, CloudSQL, AlloyDB), a strong understanding of OLTP and OLAP systems, and proficiency in database performance tuning are essential. Additionally, familiarity with modeling tools such as DBSchema or ERWin, as well as proficiency in SQL, schema definition, and normalization/denormalization techniques, will be beneficial. Preferred skills include functional knowledge of the Mutual Fund or BFSI domain, experience integrating with cloud-native ETL and data orchestration pipelines, and familiarity with schema version control and CI/CD in a data context.

In addition to technical skills, soft skills such as strong analytical and communication abilities, attention to detail, and a collaborative approach across engineering, product, and analytics teams are highly valued. Joining this role will provide you with the opportunity to work on enterprise-scale cloud data architectures, drive performance-oriented data modeling for advanced analytics, and collaborate with high-performing cloud-native data teams.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

You should possess a Bachelor's degree in Computer Science, Engineering, or a related field, along with at least 8 years of work experience in data-first systems. Additionally, you should have a minimum of 4 years of experience working on Data Lake/Data Platform projects, specifically on AWS/Azure. It is crucial to have extensive knowledge and hands-on experience with data warehousing tools such as Snowflake, BigQuery, or Redshift. Proficiency in SQL for managing and querying data is a must-have skill for this role. You are expected to have experience with relational databases like Azure SQL and AWS RDS, as well as an understanding of NoSQL databases like MongoDB for handling various data formats and structures. Familiarity with orchestration tools like Airflow and dbt would be advantageous. Experience in building stream-processing systems using solutions such as Kafka or Azure Event Hub is desirable.

Your responsibilities will include designing and implementing ETL/ELT processes using tools like Azure Data Factory to ingest and transform data into the data lake. You should also have expertise in data migration and processing with AWS (S3, Glue, Lambda, Athena, RDS Aurora) or Azure (ADF, ADLS, Azure Synapse, Databricks). Data cleansing and enrichment skills are crucial to ensure data quality for downstream processing and analytics. Furthermore, you must be capable of managing schema evolution and metadata for the data lake, with experience in tools like Azure Purview for data discovery and cataloging. Proficiency in creating and managing APIs for data access, preferably with experience in JDBC/ODBC, is required. Knowledge of data governance practices, data privacy laws like GDPR, and implementing security measures in the data lake are essential aspects of this role.

Strong programming skills in languages like Python, Scala, or SQL are necessary for data engineering tasks. Additionally, experience with automation and orchestration tools, familiarity with CI/CD practices, and the ability to optimize data storage and retrieval for analytical queries are key requirements. Collaboration with the Principal Data Architect and other team members to align data solutions with architectural and business goals is crucial. As a lead, you will be responsible for critical system design changes and software projects, and for ensuring timely project deliverables. Collaboration with stakeholders to translate business needs into efficient data infrastructure systems is a key aspect of this role. Your ability to review design proposals, conduct code review sessions, and promote best practices is essential. Experience in an Agile model, delivering quality deliverables on time, and translating complex requirements into technical solutions are also part of your responsibilities.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

2 - 5 Lacs

Chennai, Bengaluru

Work from Office

Job Title: Data Engineer
Experience: 5-10 Years
Location: Chennai, Bangalore

Requirements:
- Minimum 5+ years of development and design experience in Informatica Big Data Management
- Extensive knowledge of Oozie scheduling, HQL, Hive, HDFS (including usage of storage controllers), and data partitioning
- Extensive experience working with SQL and NoSQL databases
- Linux OS configuration and use, including shell scripting
- Good hands-on experience with design patterns and their implementation
- Well versed in Agile, DevOps, and CI/CD principles (GitHub, Jenkins, etc.), and actively involved in solving and troubleshooting issues in a distributed services ecosystem
- Familiar with distributed services resiliency and monitoring in a production environment
- Experience in designing, building, testing, and implementing security systems, including identifying security design gaps in existing and proposed architectures and recommending changes or enhancements
- Responsible for adhering to established policies, following best practices, and developing an in-depth understanding of exploits and vulnerabilities, resolving issues by taking the appropriate corrective action
- Knowledge of security controls design for source and data transfers, including CRON, ETLs, and JDBC-ODBC scripts
- Understanding of networking basics including DNS, proxy, ACLs, policy, and troubleshooting
- High-level knowledge of compliance and regulatory requirements for data, including but not limited to encryption, anonymization, data integrity, and policy control features in large-scale infrastructures
- Understanding of data sensitivity in terms of logging, events, and in-memory data storage, such as keeping card numbers and personally identifiable data out of logs
- Implements wrapper solutions for new/existing components with no or minimal security controls to ensure compliance with bank standards

Posted 2 weeks ago

Apply

3.0 - 8.0 years

6 - 16 Lacs

Bengaluru

Remote

We are hiring for a large USA-based MNC and are looking for a detail-oriented and experienced SQL Developer to join the team. The ideal candidate will be responsible for developing, maintaining, and optimizing SQL databases and writing complex queries to ensure data accessibility and integrity. You will work closely with data analysts, software engineers, and business teams to support various data-driven projects.

Responsibilities:
- Design, create, and maintain scalable databases
- Write complex SQL queries, stored procedures, triggers, functions, and views
- Optimize existing queries for performance and maintainability
- Perform data extraction, transformation, and loading (ETL)
- Monitor database performance, implement changes, and apply new patches and versions when required
- Ensure database security, integrity, stability, and system availability
- Work with application developers to integrate database logic with applications
- Troubleshoot and resolve data issues in a timely manner
- Generate reports and data visualizations for stakeholders as needed

Requirements:
- Strong proficiency in SQL and experience with relational database systems such as MySQL, SQL Server, PostgreSQL, or Oracle
- Experience in writing and debugging stored procedures, functions, and complex queries
- Understanding of data warehousing concepts and ETL processes
- Familiarity with database design, normalization, and indexing
- Knowledge of performance tuning and query optimization
- Experience with reporting tools such as SSRS, Power BI, or Tableau is a plus
- Good understanding of data governance, security, and compliance
- Excellent analytical and problem-solving skills
- Strong communication and teamwork abilities

Posted 2 weeks ago

Apply
Page 1 of 3