
106 OLTP Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

7 - 12 Lacs

Noida

Work from Office


Hello! You've landed on this page, which means you're interested in working with us. Let's take a sneak peek at what it's like to work at Innovaccer.

Engineering at Innovaccer: With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point we gather into valuable insights for our customers. Join us and be part of a team that's turning dreams of better healthcare into reality, one line of code at a time. Together, we're shaping the future and making a meaningful impact on the world.

About the Role: The technology that once promised to simplify patient care has brought more issues than anyone ever anticipated. At Innovaccer, we defeat this beast by making full use of all the data healthcare has worked so hard to collect, and replacing long-standing problems with ideal solutions. Data is our bread and butter for innovation. We are looking for a Software Development Engineer - II within the analytics team who can help us build the next generation of dashboards, reports, and other analytics for our customers in the provider/payer market.

A Day in the Life:
- Begin the day by reviewing overnight alerts and health dashboards for MongoDB, Elasticsearch, and Redis clusters, ensuring system uptime and performance SLAs are met.
- Attend a morning sync with the platform engineering and AI enablement teams to align on current incidents, infrastructure changes, and in-flight automation projects.
- Develop or refine APIs that expose key database metrics and status information to internal AI agents, enabling proactive issue detection and self-healing workflows.
- Work on infrastructure-as-code scripts (e.g., Terraform, Helm, Ansible) to provision or update HA database environments across Kubernetes and cloud-native platforms.
- Collaborate with AI/ML engineers to build lightweight agents that can auto-scale resources, detect slow queries, or predict cache evictions using logs and telemetry from Redis or Elasticsearch.
- Troubleshoot real-time issues like replication lag in MongoDB or index bloat in Elasticsearch, using monitoring tools and custom-built internal dashboards (see the sketch after this listing).
- Document operational runbooks and reliability patterns, while contributing to a shared knowledge base for SREs, developers, and future AI agents to use.
- Wrap up by running a simulation or test scenario for a self-recovery agent (e.g., an automated failover handler), validating that it performs as expected under load or failure conditions.

What You Need (Functional):
- Reliability mindset: passion for building resilient, self-healing database systems with minimal manual intervention.
- Problem solving: strong analytical skills to diagnose and resolve issues across distributed data systems in real time.
- Collaboration: proven ability to work cross-functionally with engineering, SRE, and AI/ML teams to deliver scalable infrastructure solutions.
- Documentation and runbooks: experience creating operational playbooks, troubleshooting guides, and automation documentation.
- Incident response: familiarity with on-call rotations, root cause analysis (RCA), and post-incident reviews in a high-availability environment.
- Process improvement: ability to identify and improve inefficiencies in database and cache workflows through tooling or automation.
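For illustration, a minimal sketch of the MongoDB replication-lag check mentioned in the day-in-the-life above, assuming a pymongo client against a replica set; the URI is a placeholder:

```python
# Hedged sketch: report how far each MongoDB secondary lags the primary.
# Assumes pymongo and a replica-set deployment; the URI is a placeholder.
from pymongo import MongoClient

def replication_lag_seconds(uri="mongodb://localhost:27017"):
    client = MongoClient(uri)
    status = client.admin.command("replSetGetStatus")
    primary = next(m for m in status["members"] if m["stateStr"] == "PRIMARY")
    lags = {}
    for member in status["members"]:
        if member["stateStr"] == "SECONDARY":
            # optimeDate is the timestamp of the last applied operation
            delta = primary["optimeDate"] - member["optimeDate"]
            lags[member["name"]] = delta.total_seconds()
    return lags

if __name__ == "__main__":
    print(replication_lag_seconds())
```

A lag that grows steadily is the usual cue to look at oplog sizing or secondary load before it turns into an incident.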
Technical Skill Sets:
- Strong database knowledge and work experience dealing with multi-terabyte, highly concurrent workloads.
- 5+ years of experience with MPP and/or columnar database platforms like Redshift, Azure, Snowflake, etc.
- 4 years of experience with traditional OLTP database platforms like Postgres, MS SQL Server, etc.
- Database expertise: hands-on experience managing, tuning, and scaling MongoDB, Redis, and Elasticsearch in production environments.
- API development: proficiency in developing and consuming RESTful APIs, especially for telemetry, alerting, or agent control functions.
- Automation and IaC: working knowledge of tools like Terraform, Ansible, and Helm, and scripting in Python, Bash, or Go.
- Monitoring and observability: familiarity with tools like Prometheus, Grafana, the ELK stack, Datadog, or OpenTelemetry for database observability.
- AI/agent integration: exposure to building or supporting AI-powered automation agents for predictive alerting, anomaly detection, or auto-scaling behaviors.
- Security and compliance: understanding of secure key management, database encryption, audit logging, and role-based access control (RBAC).
- Cloud and Kubernetes: experience deploying and operating database workloads in AWS, Azure, or GCP with Kubernetes orchestration.

Here's What We Offer:
- Generous leave benefits: enjoy leave benefits of up to 40 days.
- Parental leave: experience one of the industry's best parental leave policies to spend time with your new addition.
- Sabbatical leave policy: want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
- Health insurance: we offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury.
- Pet-friendly office*: spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. (*Noida office only)
- Creche facility for children*: say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. (*India offices)

Where and how we work: Our Noida office is situated in a posh tech space, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team.

Innovaccer is an equal opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment, please report them. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.

About Innovaccer: Innovaccer Inc. is the data platform that accelerates innovation. The Innovaccer platform unifies patient data across systems and care settings and empowers healthcare organizations with scalable, modern applications that improve clinical, financial, operational, and experiential outcomes.
Innovaccer's EHR-agnostic solutions have been deployed across more than 1,600 hospitals and clinics in the US, enabling care delivery transformation for more than 96,000 clinicians, and helping providers work collaboratively with payers and life sciences companies. Innovaccer has helped its customers unify health records for more than 54 million people and generate over $1.5 billion in cumulative cost savings. The Innovaccer platform is the #1 rated Best-in-KLAS data and analytics platform by KLAS, and the #1 rated population health technology platform by Black Book. For more information, please visit innovaccer.com. Check us out on YouTube, Glassdoor, LinkedIn, and innovaccer.com.

Posted 3 weeks ago

Apply

4.0 - 9.0 years

5 - 9 Lacs

Hyderabad

Work from Office


- Manage SQL Server databases through multiple product lifecycle environments, from development to mission-critical production systems.
- Install, upgrade, and migrate SQL Server instances; apply patches and service packs.
- Configure high availability/disaster recovery solutions.
- Configure automated alerts that email the team when issues arise.
- Work on incident, change, and problem tickets.
- Set up maintenance plans for backups, database integrity checks, index rebuilds, and statistics updates, and ensure all jobs run without issues.
- Troubleshoot and fix locking issues, including deadlocks, blocking, and lock timeouts (see the sketch after this listing).
- Join bridge calls to troubleshoot and fix P1 (SEV1) and P2 (SEV2) incidents.
- Work closely with application developers to test, configure, and optimize servers based on application demands.
- Handle database object deployment to Integration, QA, UAT, and Production.
- Provide day-to-day support of high-OLTP, highly available SQL Server databases; identify root causes of production problems and work with developers to resolve them.
- Work on performance tuning, such as creating missing indexes.
- Configure and maintain database servers and processes, including monitoring of system health and performance, to ensure high levels of performance, availability, and security.
- Apply data modeling techniques to ensure development and implementation support efforts meet integration and performance expectations.
- Independently analyze, solve, and correct issues in real time, providing end-to-end problem resolution.
- Refine and automate regular processes, track issues, and document changes.
- Assist developers with complex query tuning and schema refinement.
- Provide 24x7 support for critical production systems.
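As an illustration of the blocking/deadlock troubleshooting above, a minimal sketch that surfaces blocked sessions through SQL Server's dynamic management views; it assumes pyodbc, and the server and driver names are placeholders:

```python
# Hedged sketch: list sessions currently blocked by another session.
# Assumes pyodbc and an ODBC driver; connection details are placeholders.
import pyodbc

BLOCKING_QUERY = """
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=master;Trusted_Connection=yes;"
)
for row in conn.cursor().execute(BLOCKING_QUERY):
    print(f"session {row.session_id} blocked by {row.blocking_session_id} "
          f"({row.wait_type}, {row.wait_time} ms)")
```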

Posted 3 weeks ago

Apply

10.0 - 15.0 years

2 - 6 Lacs

Bengaluru

Work from Office


Job Title: Senior SQL Developer
Experience: 10-15 years
Location: Bangalore

- Experience: minimum of 10+ years in database development and management roles.
- SQL mastery: advanced expertise in crafting and optimizing complex SQL queries and scripts.
- AWS Redshift: proven experience in managing, tuning, and optimizing large-scale Redshift clusters.
- PostgreSQL: deep understanding of PostgreSQL, including query planning, indexing strategies, and advanced tuning techniques (see the sketch after this listing).
- Data pipelines: extensive experience in ETL development and integrating data from multiple sources into cloud environments.
- Cloud proficiency: strong experience with AWS services like ECS, S3, KMS, Lambda, Glue, and IAM.
- Data modeling: comprehensive knowledge of data modeling techniques for both OLAP and OLTP systems.
- Scripting: proficiency in Python, C#, or other scripting languages for automation and data manipulation.

Preferred Qualifications:
- Leadership: prior experience in leading database or data engineering teams.
- Data visualization: familiarity with reporting and visualization tools like Tableau, Power BI, or Looker.
- DevOps: knowledge of CI/CD pipelines, infrastructure as code (e.g., Terraform), and version control (Git).
- Certifications: any relevant certifications (e.g., AWS Certified Solutions Architect, AWS Certified Database - Specialty, PostgreSQL Certified Professional) will be a plus.
- Azure Databricks: familiarity with Azure Databricks for data engineering and analytics workflows will be a significant advantage.

Soft Skills:
- Strong problem-solving and analytical capabilities.
- Exceptional communication skills for collaboration with technical and non-technical stakeholders.
- A results-driven mindset with the ability to work independently or lead within a team.

Qualification: Bachelor's or master's degree in Computer Science, Information Systems, Engineering, or equivalent; 10+ years of experience.
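To make the PostgreSQL tuning requirement concrete, a minimal sketch of checking whether a filter actually uses an index, assuming psycopg2; the database, table, and column names are hypothetical:

```python
# Hedged sketch: print the executed query plan; a Seq Scan on a large
# table here usually points to a missing or unusable index.
# Assumes psycopg2; connection string and schema are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=sales user=postgres")
cur = conn.cursor()
cur.execute(
    "EXPLAIN (ANALYZE, BUFFERS) "
    "SELECT * FROM orders WHERE customer_id = %s;",
    (42,),
)
for (line,) in cur.fetchall():
    print(line)
```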

Posted 3 weeks ago

Apply

8.0 - 12.0 years

7 - 11 Lacs

Bengaluru

Work from Office


Job Position: Python Lead
Total Experience Required: 6+ years (around 5 years relevant)
Mandatory skills: strong Python coding and development
Good-to-have skills: cloud, SQL, data analysis
Location: Pune - Kharadi - WFO - 3 days/week

About The Role: We are seeking a highly skilled and experienced Python Lead to join our team. The ideal candidate will have strong expertise in Python coding and development, along with good-to-have skills in cloud technologies, SQL, and data analysis.

Key Responsibilities:
- Lead the development of high-quality, scalable, and robust Python applications.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the performance, quality, and responsiveness of applications.
- Develop RESTful applications using frameworks like Flask, Django, or FastAPI (see the sketch after this listing).
- Utilize Databricks, PySpark SQL, and strong data analysis skills to drive data solutions.
- Implement and manage modern data solutions using Azure Data Factory, Data Lake, and Databricks.

Mandatory Skills:
- Proven experience with cloud platforms (e.g., AWS).
- Strong proficiency in Python, PySpark, R, and familiarity with additional programming languages such as C++, Rust, or Java.
- Expertise in designing ETL architectures for batch and streaming processes, database technologies (OLTP/OLAP), and SQL.
- Experience with Apache Spark and multi-cloud platforms (AWS, GCP, Azure).
- Knowledge of data governance and GxP data contexts; familiarity with the pharma value chain is a plus.

Good-to-Have Skills:
- Experience with modern data solutions via Azure.
- Knowledge of principles summarized in the Microsoft Cloud Adoption Framework.
- Additional expertise in SQL and data analysis.

Educational Qualifications: Bachelor's/Master's degree or equivalent with a focus on software engineering.

If you are a passionate Python developer with a knack for cloud technologies and data analysis, we would love to hear from you. Join us in driving innovation and building cutting-edge solutions!
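For the RESTful-application requirement above, a minimal FastAPI sketch (one of the frameworks the posting names); the item resource is hypothetical:

```python
# Hedged sketch: a tiny REST resource with FastAPI.
# Run with: uvicorn main:app --reload
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

items: dict[int, Item] = {}  # in-memory store, stands in for a database

@app.post("/items/{item_id}", status_code=201)
def create_item(item_id: int, item: Item) -> Item:
    if item_id in items:
        raise HTTPException(status_code=409, detail="Item already exists")
    items[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in items:
        raise HTTPException(status_code=404, detail="Item not found")
    return items[item_id]
```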

Posted 3 weeks ago

Apply

10.0 - 20.0 years

20 - 35 Lacs

Pune, Chennai, Bangalore/Bengaluru

Work from Office


Job Title: Microsoft Technologies - Project Manager/Service Delivery Manager (x2)
Offshore Job Location: Mumbai / Bangalore / Hyderabad / Chennai / Pune / Delhi
Onsite Project Locations: Dubai (UAE), Riyadh (Saudi Arabia), Doha (Qatar)
Please note: you will need to travel onsite as needed.
Type of job: in office only, no remote.
Salary: INR 20 - 40 Lakhs per annum, depending on experience.
Experience Level Needed: 10 years or above.

You:
- Must have delivery/implementation experience with Microsoft products and services.
- Must have at least 10 years of IT experience delivering IT projects with Microsoft technologies in the Banking and Financial Services (BFSI) vertical, or managing BFSI projects/IT services.
- Must have at least 3 years as a Microsoft Service Delivery Manager / Delivery Manager.
- Must have a delivery history of projects in the USD 1M - 10M range.

Job Responsibilities:
- Make sure all Microsoft product-related services and project deliverables are in place for service delivery.
- Make sure all Microsoft projects are running as per the client's project plan(s).
- Identify IT services opportunities in the Microsoft services vertical in IT consulting/IT services.
- Recruit the right candidates for Microsoft projects, both onsite and offshore.
- Able to write bidding documents for BFSI projects: new RFPs, contracts, resourcing, delivery.
- Willing to travel to client offices and other delivery centers as needed.
- Sound business and functional knowledge of Microsoft Business Intelligence products.

Nice to have: certification in one or more of the following: PMP / PRINCE2 / MSP / P3M3 / Agile / APM PMQ / Lean Six Sigma.

Business Verticals / Functional Domains:
- Capital Markets / IT Services
- Banking and Financial Services [retail banking / loans and mortgages] and others
- Capital Markets / Stock Markets / Forex Trading
- Insurance
- Credit Cards Authorization / Clearing and Settlement
- Oil and Gas
- Telecom
- Supply Chain / Logistics
- Travel and Hospitality
- Healthcare

No. of positions: 02
Email: spectrumconsulting1985@gmail.com
Job ref code: MS_SDM_0525
If you are interested, please email your CV as an ATTACHMENT with the job ref. code [MS_SDM_0525] as the subject, and mention your availability for an interview.

Posted 3 weeks ago

Apply

6.0 - 11.0 years

0 - 0 Lacs

Bengaluru

Work from Office


- Database development environment, developing the OLTP system.
- Data migration and data integration activities.
- Data modeling using the ERwin tool is preferred.
- Experience in Oracle and PostgreSQL databases, including writing stored procedures, functions, and triggers (see the sketch after this listing).

Required candidate profile: experience in an AWS cloud environment and basic database administration activities; DWH/BI environment with dimensional modeling skills; knowledge of Snowflake is a big plus.
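As a concrete example of the stored procedure/function/trigger work above, a minimal PL/pgSQL sketch issued through psycopg2; the accounts table and updated_at column are hypothetical:

```python
# Hedged sketch: a trigger function that stamps every row update.
# Assumes psycopg2 and PostgreSQL 11+ (EXECUTE FUNCTION syntax);
# table and column names are hypothetical.
import psycopg2

DDL = """
CREATE OR REPLACE FUNCTION touch_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS trg_touch_updated_at ON accounts;
CREATE TRIGGER trg_touch_updated_at
BEFORE UPDATE ON accounts
FOR EACH ROW EXECUTE FUNCTION touch_updated_at();
"""

with psycopg2.connect("dbname=app user=postgres") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
# The with-block commits on success, so the trigger is live afterwards.
```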

Posted 3 weeks ago

Apply

5.0 - 10.0 years

12 - 22 Lacs

Hyderabad, Pune, Bengaluru

Work from Office


Job Title: Senior MS BI Developer
Onsite Location: Dubai (UAE), Doha (Qatar), Riyadh (Saudi Arabia)
Onsite Monthly Salary: 10k - 15k AED, fully tax-free, depending on experience; the Gulf work permit will be sponsored by our client.
Project Duration: 2 years, extendable
Desired Experience Level: 5 - 10 years
Qualification: B.Tech / M.Tech / MCA / M.Sc or equivalent
Experience Needed: 5 or more years of total IT experience, including a solid 3+ years as an MS BI developer on the Microsoft stack / MS DWH engineer.

Job Responsibilities:
- Design and develop DWH data flows.
- Build SCD-1 / SCD-2 / SCD-3 dimensions (see the sketch after this listing).
- Build cubes and maintain SSAS / DWH data.
- Design the Microsoft DWH and its ETL packages.
- Code T-SQL.
- Create orchestrations and design batch job / orchestration runs.
- Familiarity with data models.
- Develop MDM (Master Data Management).

Experience:
- Experience as a DWH developer with Microsoft DWH data flows and cubes.
- Exposure and experience with Azure services, including Azure Data Factory.
- Sound knowledge of BI practices and visualization tools such as Power BI / SSRS / QlikView.
- Collecting and gathering data from multiple source systems.
- Creating automated data pipelines.
- Configuring Azure resources and services.

Skills: Microsoft SSIS / SSAS / SSRS, Informatica, Azure Data Factory, Spark, SQL

Nice to have: any onsite experience and Microsoft certifications are an added advantage, but not mandatory.

Business Vertical: Banking / Investment Banking, Capital Markets, Securities / Stock Market Trading, Bonds / Forex Trading, Credit Risk, Payments Cards Industry (VISA / MasterCard / Amex)

Job Code: MSBI_DEVP_0525
No. of positions: 05
Email: spectrumconsulting1977@gmail.com
If you are interested, please email your CV as an ATTACHMENT with the job ref. code [MSBI_DEVP_0525] as the subject.

Posted 3 weeks ago

Apply

9 - 11 years

37 - 40 Lacs

Ahmedabad, Bengaluru, Mumbai (All Areas)

Work from Office


Dear Candidate,

We are hiring a Data Engineer to build scalable data pipelines and infrastructure to power analytics and machine learning. Ideal for those passionate about data integrity, automation, and performance.

Key Responsibilities:
- Design ETL/ELT pipelines using tools like Airflow or dbt (see the sketch after this listing)
- Build data lakes and warehouses (BigQuery, Redshift, Snowflake)
- Automate data quality checks and monitoring
- Collaborate with analysts, data scientists, and backend teams
- Optimize data flows for performance and cost

Required Skills & Qualifications:
- Proficiency in SQL, Python, and distributed systems (e.g., Spark)
- Experience with cloud data platforms (AWS, GCP, or Azure)
- Strong understanding of data modeling and warehousing principles
- Bonus: experience with Kafka, Parquet/Avro, or real-time streaming

Soft Skills:
- Strong troubleshooting and problem-solving skills
- Ability to work independently and in a team
- Excellent communication and documentation skills

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa, Delivery Manager, Integra Technologies
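To illustrate the Airflow option above, a minimal DAG sketch; the task bodies are stubs, and the `schedule` argument assumes Airflow 2.4+:

```python
# Hedged sketch: a three-step ETL DAG. The callables are stubs standing
# in for real extract/transform/load logic.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")

def transform():
    print("clean and reshape")

def load():
    print("write to warehouse")

with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```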

Posted 4 weeks ago

Apply

8 - 12 years

35 - 60 Lacs

Bengaluru

Work from Office


Job Summary
NetApp is a cloud-led, data-centric software company that helps organizations put data to work in applications that elevate their business. We help organizations unlock the best of cloud technology. As a member of Solutions Integration Engineering, you work cross-functionally to define and create engineered solutions/products that accelerate field adoption. We work closely with ISVs and with the startup ecosystem in the virtualization, cloud, and AI/ML domains to build solutions that matter for customers. You will work closely with the product owner and product lead on the company's current and future strategies related to these domains.

Job Requirements
• Lead the delivery of features, including participating in the full software development lifecycle.
• Deliver reliable, innovative solutions and products.
• Participate in product design, development, verification, troubleshooting, and delivery of a system or major subsystems, including authoring project specifications.
• Work closely with cross-functional teams, including business stakeholders, to innovate and unlock new use cases for our customers.
• Write unit and automated integration tests and project documentation.
• Mentor junior members of the team.

Technical Skills
• Understanding of the software development lifecycle.
• Proficiency in full stack development: Python, the container ecosystem, cloud, and modern ML frameworks.
• Knowledge of data storage and artificial intelligence concepts, including server/storage architecture, batch/stream processing, data warehousing, data lakes, distributed filesystems, OLTP/OLAP databases and data pipelining tools, model inferencing, and RAG workflows.
• Exposure to data pipelines, integrations, and Unix-based operating system kernels and development environments, e.g., Linux or FreeBSD.
• A strong understanding of basic to complex concepts related to computer architecture, data structures, and new programming paradigms.
• Demonstrated creative and systematic approach to problem solving.
• Excellent written and verbal communication skills.

Education
• Minimum 8 years of experience; must be hands-on with coding.
• B.E/B.Tech or M.S in Computer Science or a related technical field.

Posted 1 month ago

Apply

8 - 13 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


Warm greetings from SP Staffing Services Private Limited!
We have an urgent opening with our CMMI Level 5 client for the position below. Please send your updated profile if you are interested.
Relevant Experience: 8 - 15 yrs
Location: Pan India
Job Description: minimum two years of experience in Boomi data modeling.
If interested, please share your resume with sankarspstaffings@gmail.com along with the details below:
Overall Exp:
Relevant Exp:
Current CTC:
Expected CTC:
Notice Period:

Posted 1 month ago

Apply

7 - 11 years

15 - 19 Lacs

Hyderabad

Work from Office


ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

What you will do
Role Description: We are seeking a Data Solutions Architect to design, implement, and optimize scalable and high-performance data solutions that support enterprise analytics, AI-driven insights, and digital transformation initiatives. This role will focus on data strategy, architecture, governance, security, and operational efficiency, ensuring seamless data integration across modern cloud platforms. The ideal candidate will work closely with engineering teams, business stakeholders, and leadership to establish a future-ready data ecosystem, balancing performance, cost-efficiency, security, and usability. This position requires expertise in modern cloud-based data architectures, data engineering best practices, and Scaled Agile methodologies.

Roles & Responsibilities:
- Design and implement scalable, modular, and future-proof data architectures that support enterprise data lakes, data warehouses, and real-time analytics.
- Develop enterprise-wide data frameworks that enable governed, secure, and accessible data across various business domains.
- Define data modeling strategies to support structured and unstructured data, ensuring efficiency, consistency, and usability across analytical platforms.
- Lead the development of high-performance data pipelines for batch and real-time data processing, integrating APIs, streaming sources, transactional systems, and external data platforms (see the sketch after this listing).
- Optimize query performance, indexing, caching, and storage strategies to enhance scalability, cost efficiency, and analytical capabilities.
- Establish data interoperability frameworks that enable seamless integration across multiple data sources and platforms.
- Drive data governance strategies, ensuring security, compliance, access controls, and lineage tracking are embedded into enterprise data solutions.
- Implement DataOps best practices, including CI/CD for data pipelines, automated monitoring, and proactive issue resolution, to improve operational efficiency.
- Lead Scaled Agile (SAFe) practices, facilitating Program Increment (PI) Planning, Sprint Planning, and Agile ceremonies, ensuring iterative delivery of enterprise data capabilities.
- Collaborate with business stakeholders, product teams, and technology leaders to align data architecture strategies with organizational goals.
- Act as a trusted advisor on emerging data technologies and trends, ensuring that the enterprise adopts cutting-edge data solutions that provide competitive advantage and long-term scalability.

What we expect of you
Must-Have Skills:
- Experience in data architecture, enterprise data management, and cloud-based analytics solutions.
- Expertise in Databricks, cloud-native data platforms, and distributed computing frameworks.
- Strong proficiency in modern data modeling techniques, including dimensional modeling, NoSQL, and data virtualization.
- Experience designing high-performance ETL/ELT pipelines and real-time data processing solutions.
- Deep understanding of data governance, security, metadata management, and access control frameworks.
- Hands-on experience with CI/CD for data solutions, DataOps automation, and infrastructure as code (IaC).
- Proven ability to collaborate with cross-functional teams, including business executives, data engineers, and analytics teams, to drive successful data initiatives.
- Strong problem-solving, strategic thinking, and technical leadership skills.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with Apache Spark.
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries.
- Experience with Data Mesh architectures and federated data governance models.
- Certification in cloud data platforms or enterprise architecture frameworks.
- Knowledge of AI/ML pipeline integration within enterprise data architectures.
- Familiarity with BI and analytics platforms for enabling self-service analytics and enterprise reporting.

Education and Professional Certifications:
- Doctorate degree with 6-8+ years of experience in Computer Science, IT, or a related field; OR Master's degree with 8-10+ years; OR Bachelor's degree with 10-12+ years.
- AWS Certified Data Engineer preferred; Databricks certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly; organized and detail-oriented.
- Strong presentation and public speaking skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
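As one concrete instance of the pipeline and partitioning skills listed above, a minimal PySpark sketch that lands a partitioned Delta table; paths and column names are hypothetical, and Delta Lake is assumed to be available (as on Databricks):

```python
# Hedged sketch: read raw files, derive a partition column, and write
# a partitioned Delta table. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims_etl").getOrCreate()

claims = (
    spark.read.parquet("/raw/claims")
    .withColumn("service_date", F.to_date("service_ts"))
)

(
    claims.write.format("delta")
    .mode("overwrite")
    .partitionBy("service_date")  # enables partition pruning downstream
    .save("/curated/claims")
)
```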

Posted 1 month ago

Apply

7 - 11 years

10 - 14 Lacs

Hyderabad

Work from Office


What you will do
Let's do this. Let's change the world. In this vital role you will drive the development and implementation of our data strategy. The ideal candidate possesses a strong blend of technical expertise and data-driven problem-solving skills. As a Senior Data Engineer, you will play a crucial role in designing, building, and optimizing our data pipelines and platforms while mentoring junior engineers.

Roles & Responsibilities:
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks.
- Ensure data quality and integrity through rigorous testing and monitoring.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Work closely with data analysts, data scientists, and business collaborators to understand data requirements.
- Identify and resolve complex data-related challenges.
- Adhere to data engineering best practices and standards.
- Experience developing in an Agile development environment, comfortable with Agile terminology and ceremonies.
- Familiarity with code versioning using Git, Jenkins, and code migration tools; exposure to Jira or Rally.
- Identify and implement opportunities for automation and CI/CD.
- Stay up to date with the latest data technologies and trends.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Doctorate degree and 2 years of Computer Science, IT, or related field experience; OR Master's degree and 8 to 10 years; OR Bachelor's degree and 10 to 14 years; OR Diploma and 14 to 18 years.

Preferred Qualifications:
Must-Have Skills (not more than 3 to 4):
- Demonstrated hands-on experience with cloud platforms (AWS, Azure, GCP) and the ability to architect cost-effective and scalable data solutions.
- Proficiency in Python, PySpark, and SQL.
- Hands-on experience with big data ETL performance tuning (see the sketch after this listing).
- Strong development knowledge in Databricks.
- Strong analytical and problem-solving skills to address complex data challenges.

Good-to-Have Skills:
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience working with Apache Spark and Apache Airflow.
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with prompt engineering and model fine-tuning.
- Experience with DevOps/MLOps CI/CD build and deployment pipelines.

Professional Certifications (preferred): AWS Certified Data Engineer, Databricks Certification, any SAFe Agile certification.

Soft Skills:
- Initiative to explore alternate technologies and approaches to solving problems.
- Skilled in breaking down problems, documenting problem statements, and estimating efforts.
- Effective communication and interpersonal skills to collaborate with multi-functional teams.
- Excellent analytical and problem-solving skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com
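For the ETL performance-tuning skill above, a minimal sketch of two common Spark moves: broadcasting the small side of a join and repartitioning before a wide write. Paths, columns, and the partition count are hypothetical:

```python
# Hedged sketch: broadcast join plus explicit repartitioning.
# Table paths, columns, and the partition count are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning_demo").getOrCreate()

facts = spark.read.parquet("/curated/claims")      # large fact table
dims = spark.read.parquet("/curated/providers")    # small dimension

# Broadcasting the small side avoids shuffling the large fact table.
joined = facts.join(broadcast(dims), "provider_id")

# Repartitioning before the write keeps output file sizes predictable.
(
    joined.repartition(200, "service_date")
    .write.mode("overwrite")
    .parquet("/marts/claims_by_provider")
)
```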

Posted 1 month ago

Apply

0 - 2 years

3 - 5 Lacs

Hyderabad

Work from Office


ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

Role Description: We are looking for an Associate Data Engineer with deep expertise in writing data pipelines to build scalable, high-performance data solutions. The ideal candidate will be responsible for developing, optimizing and maintaining complex data pipelines, integration frameworks, and metadata-driven architectures that enable seamless access and analytics. This role prefers a deep understanding of big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Own development of complex ETL/ELT data pipelines to process large-scale datasets.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
- Explore and implement new tools and technologies to enhance the ETL platform and the performance of the pipelines.
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
- Understand the biotech/pharma domains and build highly efficient data pipelines to migrate and deploy complex data across systems.
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions.

What we expect of you
Must-Have Skills:
- Experience in data engineering with a focus on Databricks, AWS, Python, SQL, and Scaled Agile methodologies.
- Strong understanding of data processing and transformation with big data frameworks (Databricks, Apache Spark, Delta Lake, and distributed computing concepts).
- Strong understanding of AWS services, with the ability to demonstrate the same.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.
- Experience with the Scaled Agile Framework (SAFe), Agile delivery, and DevOps practices.

Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry.
- Exposure to APIs and full stack development.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications:
- Bachelor's degree and 2 to 5+ years of Computer Science, IT, or related field experience; OR Master's degree and 1 to 4+ years.
- AWS Certified Data Engineer preferred; Databricks certificate preferred; Scaled Agile SAFe certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly; organized and detail-oriented.
- Strong presentation and public speaking skills.

What you can expect of us
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

Posted 1 month ago

Apply

2 - 5 years

3 - 5 Lacs

Hyderabad

Work from Office


ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Role Description: We are looking for an Associate Data Engineer with deep expertise in writing data pipelines to build scalable, high-performance data solutions. The ideal candidate will be responsible for developing, optimizing and maintaining complex data pipelines, integration frameworks, and metadata-driven architectures that enable seamless access and analytics. This role prefers a deep understanding of big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Own development of complex ETL/ELT data pipelines to process large-scale datasets.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
- Explore and implement new tools and technologies to enhance the ETL platform and the performance of the pipelines.
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
- Understand the biotech/pharma domains and build highly efficient data pipelines to migrate and deploy complex data across systems.
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions.

Must-Have Skills:
- Experience in data engineering with a focus on Databricks, AWS, Python, SQL, and Scaled Agile methodologies.
- Strong understanding of data processing and transformation with big data frameworks (Databricks, Apache Spark, Delta Lake, and distributed computing concepts).
- Strong understanding of AWS services, with the ability to demonstrate the same.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.
- Experience with the Scaled Agile Framework (SAFe), Agile delivery, and DevOps practices.

Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry.
- Exposure to APIs and full stack development.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications:
- Any degree and 2-5 years of experience.
- AWS Certified Data Engineer preferred; Databricks certificate preferred; Scaled Agile SAFe certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly; organized and detail-oriented.
- Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 1 month ago

Apply

6 - 10 years

12 - 22 Lacs

Coimbatore

Work from Office


Looking for a Database Developer.

Posted 1 month ago

Apply

10 - 15 years

30 - 35 Lacs

Noida

Remote


SR. DATA MODELER, FULL-TIME ROLE, REMOTE OR ONSITE

Job Summary: We are seeking an experienced Data Modeler to support the Enterprise Data Platform (EDP) initiative, focusing on building and optimizing curated data assets on Google BigQuery. This role requires expertise in data modeling, strong knowledge of retail data, and an ability to collaborate with data engineers, business analysts, and architects to create scalable and high-performing data structures.

Required Qualifications:
- 5+ years of experience in data modeling and architecture in cloud data platforms (BigQuery preferred).
- Expertise in dimensional modeling (Kimball), data vault, and normalization/denormalization techniques.
- Strong SQL skills, with hands-on experience in BigQuery performance tuning (partitioning, clustering, query optimization).
- Understanding of retail data models (e.g., sales, inventory, pricing, supply chain, customer analytics).
- Experience working with data engineering teams to implement models in ETL/ELT pipelines.
- Familiarity with data governance, metadata management, and data cataloging.
- Excellent communication skills and ability to translate business needs into structured data models.

Key Responsibilities:
1. Data Modeling & Curated Layer Design
- Design logical, conceptual, and physical data models for the EDP's curated layer in BigQuery.
- Develop fact and dimension tables, ensuring adherence to dimensional modeling best practices (Kimball methodology).
- Optimize data models for performance, scalability, and query efficiency in a cloud-native environment.
- Work closely with data engineers to translate models into efficient BigQuery implementations (partitioning, clustering, materialized views); see the sketch after this listing.
2. Data Standardization & Governance
- Define and maintain data definitions, relationships, and business rules for curated assets.
- Ensure data integrity, consistency, and governance across datasets.
- Work with data governance teams to align models with enterprise data standards and metadata management policies.
3. Collaboration with Business & Technical Teams
- Engage with business analysts and product teams to understand data needs, ensuring models align with business requirements.
- Partner with data engineers and architects to implement best practices for data ingestion and transformation.
- Support BI & analytics teams by ensuring curated models are optimized for downstream consumption (e.g., Looker, Tableau, Power BI, AI/ML models, APIs).

If you are interested in the opportunity, please share the following details along with your most recent resume to geeta.negi@compunnel.com: total experience, relevant experience, current CTC, expected CTC, notice period (last working day if you are serving notice), current location, and a self-rating out of 5 for each of your top three skills (mention the skill).
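To make the partitioning/clustering point concrete, a minimal sketch of a curated fact table in BigQuery issued through the google-cloud-bigquery client; the dataset, table, and columns are hypothetical:

```python
# Hedged sketch: partitioned, clustered fact table DDL in BigQuery.
# Assumes google-cloud-bigquery and default credentials; all names
# are hypothetical.
from google.cloud import bigquery

DDL = """
CREATE TABLE IF NOT EXISTS edp_curated.fact_sales (
  sale_id    STRING,
  store_id   STRING,
  product_id STRING,
  sale_date  DATE,
  net_amount NUMERIC
)
PARTITION BY sale_date           -- prunes scans to the dates queried
CLUSTER BY store_id, product_id  -- co-locates rows for common filters
"""

client = bigquery.Client()
client.query(DDL).result()  # blocks until the DDL job completes
```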

Posted 1 month ago

Apply

5 - 10 years

7 - 12 Lacs

Hyderabad

Work from Office


Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Snowflake Data Warehouse
Good-to-have skills: Snowflake Schema
A minimum of 5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your day will involve creating innovative solutions to address business needs and ensuring applications are tailored to meet specific requirements.

Roles & Responsibilities:
- Implement the Snowflake cloud data warehouse and cloud-related architecture, migrating from various sources to Snowflake.
- Work on Snowflake capabilities such as Snowpipe, stages, SnowSQL, streams, and tasks (see the sketch after this listing).
- Implement advanced Snowflake concepts such as setting up resource monitors, RBAC controls, virtual warehouse sizing, and zero-copy clone.
- In-depth knowledge of and experience in data migration from RDBMS to the Snowflake cloud data warehouse.
- Deploy Snowflake features such as data sharing, events, and lakehouse patterns.
- Implement incremental extraction loads, both batched and streaming.
- Must have: Snowflake certification.

Professional & Technical Skills:
- Must-have: proficiency in Snowflake Data Warehouse.
- Good-to-have: experience with Snowflake Schema.
- Strong understanding of data warehousing concepts.
- Experience in ETL processes and data modeling.
- Knowledge of SQL and database management.
- Ability to troubleshoot and debug applications.

Additional Information: The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse. This position is based at our Hyderabad office. 15 years of full-time education is required.
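A minimal sketch of three of the Snowflake features named above (zero-copy clone, a stream, and a task), issued through the Python connector; the account credentials, object names, and warehouse are placeholders:

```python
# Hedged sketch: clone a database, track changes with a stream, and
# fold them forward with a scheduled task. All names are placeholders.
import snowflake.connector

STATEMENTS = [
    # Zero-copy clone: an instant, storage-efficient copy for dev/test
    "CREATE DATABASE analytics_dev CLONE analytics",
    # Stream: captures inserts/updates/deletes on the source table
    "CREATE OR REPLACE STREAM orders_stream "
    "ON TABLE analytics.public.orders",
    # Task: periodically consumes the stream into a history table
    """
    CREATE OR REPLACE TASK merge_orders
      WAREHOUSE = etl_wh
      SCHEDULE = '5 MINUTE'
    AS
      INSERT INTO analytics.public.orders_hist
      SELECT * FROM orders_stream
    """,
    "ALTER TASK merge_orders RESUME",  # tasks start suspended
]

conn = snowflake.connector.connect(account="...", user="...", password="...")
cur = conn.cursor()
for stmt in STATEMENTS:
    cur.execute(stmt)
```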

Posted 1 month ago

Apply

2 - 6 years

18 - 25 Lacs

Pune

Work from Office


Senior Associate, Full Stack Engineer

At BNY, our culture empowers you to grow and succeed. As a leading global financial services company at the center of the world's financial system, we touch nearly 20% of the world's investible assets. Every day around the globe, our 50,000+ employees bring the power of their perspective to the table to create solutions with our clients that benefit businesses, communities and people everywhere. We continue to be a leader in the industry, awarded as a top home for innovators and for creating an inclusive workplace. Through our unique ideas and talents, together we help make money work for the world. This is what #LifeAtBNY is all about.

We're seeking a future team member for the role of Senior Associate, Full Stack Engineer to join our Compliance Engineering team. This role is in Pune, MH (hybrid).

In this role, you'll make an impact in the following ways:
- Overall 2-6 years of experience with ETL, databases, data warehouses, etc.
- In-depth technical knowledge as a Pentaho ETL developer; comfortable working within large internal and external data sets.
- Experience in OLAP and OLTP, data warehousing, and data model concepts.
- Good experience with Vertica, Oracle, Denodo, and similar databases.
- Experienced in the design, development, and implementation of large-scale projects in financial industries using data warehousing ETL tools (Pentaho).
- Experience creating and scheduling ETL transformations and jobs using the Pentaho Kettle Spoon designer and Pentaho Data Integration designer.
- Proficient in writing SQL statements, complex stored procedures, dynamic SQL queries, batches, scripts, functions, triggers, views, cursors, and query optimization.
- Excellent data analysis skills.
- Working knowledge of source control tools such as GitLab.
- Good analytical skills and a good understanding of PDI architecture.
- Good experience with Splunk is a plus.

To be successful in this role, we're seeking the following:
- Graduates of bachelor's degree programs in business, a related discipline, or equivalent work experience.
- Relevant domain expertise in the alternative investment services domain or the capital markets and financial services domain is required.

At BNY, our culture speaks for itself. Here's a few of our awards:
- America's Most Innovative Companies, Fortune, 2024
- World's Most Admired Companies, Fortune, 2024
- Human Rights Campaign Foundation, Corporate Equality Index, 100% score, 2023-2024
- Best Places to Work for Disability Inclusion, Disability:IN, 100% score, 2023-2024
- Most Just Companies, Just Capital and CNBC, 2024
- Dow Jones Sustainability Indices, top-performing company for sustainability, 2024
- Bloomberg's Gender Equality Index (GEI), 2023

Our Benefits and Rewards: BNY offers highly competitive compensation, benefits, and wellbeing programs rooted in a strong culture of excellence and our pay-for-performance philosophy. We provide access to flexible global resources and tools for your life's journey. Focus on your health, foster your personal resilience, and reach your financial goals as a valued member of our team, along with generous paid leaves, including paid volunteer time, that can support you and your family through moments that matter. BNY is an Equal Employment Opportunity/Affirmative Action Employer - Underrepresented racial and ethnic groups/Females/Individuals with Disabilities/Protected Veterans.

Posted 1 month ago

Apply

6 - 9 years

20 - 25 Lacs

Bengaluru

Hybrid


Company Description
Epsilon is the leader in outcome-based marketing. We enable marketing that's built on proof, not promises. Through Epsilon PeopleCloud, the marketing platform for personalizing consumer journeys with performance transparency, Epsilon helps marketers anticipate, activate and prove measurable business outcomes. Powered by CORE ID, the most accurate and stable identity management platform representing 200+ million people, Epsilon's award-winning data and technology is rooted in privacy by design and underpinned by powerful AI. With more than 50 years of experience in personalization and performance working with the world's top brands, agencies and publishers, Epsilon is a trusted partner leading CRM, digital media, loyalty and email programs. Positioned at the core of Publicis Groupe, Epsilon is a global company with over 8,000 employees in over 40 offices around the world. For more information, visit https://www.epsilon.com/apac (APAC). Follow us on Twitter at @EpsilonMktg. See how Epsilon transforms marketing with 1 View, 1 Vision and 1 Voice: https://www.epsilon.com/apac/youniverse

Job Description
The Product team forms the crux of our powerful platforms and connects millions of customers to the product magic. This team of innovative thinkers develops and builds products that help Epsilon be a market differentiator. They map the future and set new standards for our products, empowered with industry best practices, ML and AI capabilities. The team passionately delivers intelligent end-to-end solutions and plays a key role in Epsilon's success story. The candidate will be the Senior Software Engineer for the Business Intelligence team in the Product Engineering group. The Business Intelligence team partners with internal and external clients and technology providers to develop, implement, and manage state-of-the-art data analytics, business intelligence and data visualization solutions for our marketing products. The Sr. Software Engineer will be an individual with strong technical expertise in business intelligence and analytics solutions/tools, working on the BI strategy in terms of toolset selection, report and visualization best practices, team training, and environment efficiency.

Why we are looking for you:
- You are an individual with a combination of technical leadership and architectural design skills.
- You have a solid foundation in business intelligence and analytics solutions/tools.
- You have experience in product engineering and software development using Tableau, SAP BusinessObjects, and Kibana dashboard development.
- You have experience with data integration tools like Databricks.
- You excel at collaborating with different stakeholders (ERP, CRM, Data Hub and business stakeholders).
- You have strong experience building reusable database components using SQL queries.
- You enjoy new challenges and are solution oriented.
- You like mentoring people and enabling collaboration of the highest order.

What you will enjoy in this role:
- As part of the Epsilon Product Engineering team, the pace of the work matches the fast-evolving demands of Fortune 500 clients across the globe.
- As part of an innovative team that's not afraid to take risks, your ideas will come to life in digital marketing products that support more than 50% of automotive dealers in the US.
- An open and transparent environment that values innovation and efficiency.
- Exposure to all the different Epsilon products, where reporting plays a key role in efficient decision-making for end users.

What you will do:
- Work on our BI strategy in terms of toolset selection, report and visualization best practices, team training, and environment efficiency.
- Analyze requirements and design data analytics and enterprise reporting solutions in various frameworks (such as Tableau, SAP BusinessObjects, and others) as part of enterprise, multi-tier, customer-facing applications.
- Develop data analytics and enterprise reporting solutions hands-on in frameworks such as Tableau, SAP BusinessObjects, and Kibana; scripting skills in Python are good to have.
- Build data integration and aggregation pipelines using Databricks.
- Provide estimates for BI solutions to be developed and deployed.
- Develop and support cloud infrastructure for BI solutions, including automation, process definition and support documentation as required.
- Work in an agile environment and align with agile/scrum methodology for development work.
- Follow data management processes and procedures and provide input to the creation of data definitions, business rules and data access methods.
- Collaborate with database administrators and data warehouse architects on data access patterns to optimize data visualization and processing.
- Assess and propose infrastructure designs for BI solutions catering to system availability and fault tolerance needs.
- Establish best practices for workloads on multi-tenant deployments.
- Document solutions and train implementation and operational support teams.
- Assess gaps in solutions and make recommendations on how to solve the problem.
- Understand the priorities of various projects and help steer organizational tradeoffs to focus on the most important initiatives.
- Show initiative and take responsibility for decisions that impact project and team goals.

Qualifications:
- BE/B.Tech/MCA only; no correspondence courses.
- 7+ years of overall hands-on technical experience; supervisory experience is good to have.
- Experience developing BI solutions in enterprise reporting frameworks.
- Experience designing the semantic layer in reporting frameworks and developing reporting models on an OLTP or OLAP environment.
- Experience working with large data sets, both structured and unstructured, data warehouses and data lakes.
- Strong knowledge of multitenancy concepts; object, folder and user group templates; and user access models in BI reporting tool frameworks, including single sign-on integrations with identity and access management systems such as Okta.
- Experience in performing periodic sizing and establishing monitoring, backup and restore procedures catering to MTTR and MTBF expectations.
- Working knowledge of OLTP and relational database concepts, data warehouse concepts/best practices and data modeling.
- Experience documenting technical designs, procedures and reusable artifacts, and providing technical guidance as needed.
- Familiarity with cloud stacks (AWS, Azure), cloud deployments and tools.
- Ability to work on multiple assignments concurrently.

Posted 1 month ago

Apply

8 - 10 years

25 - 30 Lacs

Bengaluru

Work from Office

Number of Openings*: 1
ECMS Request no in sourcing stage*: 525266
Duration of contract*: 12 Months
Total Yrs. of Experience*: 8-10 Yrs.
Detailed JD (Roles and Responsibilities)*:
Manage and maintain NoSQL database systems to ensure optimal performance.
Monitor database health and troubleshoot performance issues.
Implement and maintain database security measures to protect sensitive data.
Collaborate with development teams to design efficient data models.
Perform database backups and develop disaster recovery plans.
Design, manage, and optimize relational databases; configure, deploy, and support SQL Server databases; ensure data security and integrity while managing SQL databases.
Analyze and translate business needs into data models; develop conceptual, logical, and physical data models; create and enforce database development standards; validate and reconcile data models to ensure accuracy; maintain and update existing data models.
Mandatory skills*: Knowledge of OLTP and OLAP data modeling and NoSQL databases; MongoDB preferred (see the pymongo sketch after this listing).
Desired skills*: Should be good at SQL and PL/SQL; experience in MySQL is a bonus. Must have the interpersonal skills to work with the client and understand the data model of insurance systems.
Domain*: Insurance
Approx. vendor billing rate excluding service tax*: 7588 INR/Day
Precise Work Location* (e.g. Bangalore Infosys SEZ or STP): No constraint; Mumbai, Bengaluru or Pune preferred
BG Check (Before OR After onboarding)*: Pre-Onboarding
Any client prerequisite BGV Agency*: NA
Shift working outside standard daylight hours (to avoid confusion post onboarding)*: IST only
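
As a rough illustration of the MongoDB modeling this posting asks for, here is a minimal pymongo sketch of an insurance policy document with embedded coverages plus a unique index on the natural key. The connection string, database and collection names, and all fields are hypothetical:

from datetime import datetime
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
db = client["insurance_db"]                        # hypothetical database name

# Coverages are embedded in the policy: they are always read with it,
# so one OLTP point read returns the whole business object.
policy = {
    "policy_no": "POL-2024-0001",
    "holder": {"name": "A. Kumar", "city": "Mumbai"},
    "effective_from": datetime(2024, 1, 1),
    "coverages": [
        {"type": "fire", "sum_insured": 500000},
        {"type": "theft", "sum_insured": 200000},
    ],
}
db.policies.insert_one(policy)

# Unique index on the natural key keeps lookups from scanning the collection.
db.policies.create_index([("policy_no", ASCENDING)], unique=True)

Embedding suits the OLTP access pattern where a policy and its coverages are read together; referencing a separate collection would fit better if coverages were shared or updated independently.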

Posted 1 month ago

Apply

10 - 15 years

10 - 14 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

The position is part of the Solutions Integration practice, which focuses on the integration of information, process and people through the application of multiple technologies. The candidate is expected to handle small to medium scale consulting projects and should possess skills in the design, development, integration, and deployment of data extraction/load programs. Previous experience within Banking and Financial Services is preferred. To be considered, a candidate should be available for travel (5% or more) and possess the required skills mentioned below. The position will be based in the C&R Software office in Bangalore, India. We offer a hybrid model of working.

Position description - Solution Integration - Lead ETL Consultant - Band D

Role/responsibilities:
Design, develop, deploy, and support modules of our world-class enterprise-level solution for our international client base.
Drive the technical architecture and design process in line with client requirements.
Evaluate new design specifications, raise quality standards, and address architectural concerns.
Evaluate the stability, compatibility, scalability, interoperability, and performance of the solution.
Own design aspects, performance, restartability, logging, error handling, and security for both on-premises and cloud customers (a restartable-load sketch follows this listing).
Continually learn new technologies in related areas.
Act as the single point of contact (SPOC) for the technical implementation; work with the Project Manager to plan and deliver projects from requirements through Go-Live.
Be responsible for successfully delivering projects with accountability and ownership.
Understand the broader picture of the project and contribute accordingly, including the business, technical and architectural aspects of project implementations.
Build reusable artefacts and use them to reduce development, testing, deployment and maintenance effort.
Work with multiple customers at the same time.
Adapt to SDLC, iterative and agile methodologies.
Interact with clients/onsite team members to understand project requirements and goals.
Lead client workshops (face to face or over the phone) for consulting; drive solutioning and issue resolution with the client.
Follow up on and escalate gaps, issues and enhancements identified throughout the project and drive them to closure.
Display a high level of knowledge and consistent service in all interactions with the client.
Establish positive client relationships to facilitate our implementations.
Support client activities throughout the project life cycle, including testing phases.
Support and review test strategy/planning of the end-to-end solution.
Lead the development of detailed business and technical specifications based on project requirements and turn them into data extraction/load programs.
Program the ETL tool with business rules to be applied to data from source input to target data repository.
Develop and help automate data extraction/load programs to run on a regular schedule.
Assist in managing daily, weekly, and monthly data operations and scheduled processes.
Perform data conversion, quality, and integrity checks for all programs/processes.
Mentor junior members of the team and be responsible for their deliverables.
Engage in pre-sales demonstrations, providing solutions and estimates.
In addition to these skills, the individual needs to be skilled in business analysis and knowledge acquisition. An integration consultant interacts with clients (both business and technical personnel) on a constant basis, so an integration consultant must have very good communication skills, listen carefully to clients, and facilitate information-gathering sessions.

Skills/Experience requirements:
Overall 10+ years of IT industry experience.
Undergraduate/graduate degree in Computer Science or Computer Applications, such as B.Sc./B.C.A./B.Tech./B.E./M.Sc./M.Tech./M.E./M.C.A.
Strong experience in understanding business requirements and converting them into detailed functional and technical specifications.
7 years' experience with an ETL tool, preferably Kettle, with knowledge of Metadata Injection, Kettle DB logging, and Carte.
7 years' experience writing PL/SQL or T-SQL programs and queries on Oracle/SQL Server.
Strong knowledge of RDBMS concepts and OLTP system architecture.
Minimum 5 years' experience writing shell scripts on UNIX (Sun Solaris).
Competent with SQL/databases, SQL Server/Postgres, SSRS and other analytical programs, with the desire and ability to understand new software applications.
Experience reviewing query performance and optimizing/developing more efficient code.
Experience creating table indexes to improve database performance.
Experience writing complex operations, views, stored procedures, triggers and functions to support business needs in a high-availability environment.
Strong knowledge of source code control in any tool; knowledge of Git/Bitbucket is an added advantage.
Strong knowledge of XML and JSON structures, and of Jenkins.
Experience with job scheduling and working knowledge of at least one third-party scheduler.
Hands-on experience with AWS services such as PostgreSQL, Aurora, and Lambda is preferred.
Ability to perform data research and root cause analysis on data issues/discrepancies.
Experience using SOAP and REST to access web services.
Experience with JavaScript, HTML, and CSS.
Excellent written and verbal communication skills.
Excellent interpersonal skills; comfortable establishing professional relationships, especially remotely (electronic, phone, written).
Proven ability to plan and execute effectively to meet critical, time-sensitive objectives.
Ability to work alone and independently.
Experience in either the Banking or Financial industry is preferred.
Experience in SSRS report development.
Working knowledge of Python scripting is preferred.
Good mentorship skills.
Ability to deliver effectively in high-pressure situations.
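
The restartability, logging and error-handling responsibilities above can be illustrated with a small watermark-based loader. A minimal Python sketch using sqlite3 as a stand-in for the real source and target databases; the job, table and column names (etl_watermark, src_orders, stg_orders) are hypothetical, and a production load would typically live in the ETL tool (e.g., Kettle) rather than hand-rolled Python:

import logging
import sqlite3

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

conn = sqlite3.connect("demo.db")  # stand-in for the real source/target connections
conn.execute("CREATE TABLE IF NOT EXISTS etl_watermark (job TEXT PRIMARY KEY, last_id INTEGER)")
conn.execute("CREATE TABLE IF NOT EXISTS src_orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.execute("CREATE TABLE IF NOT EXISTS stg_orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.commit()

def load_orders():
    # Read the high-water mark so a rerun resumes where the last run stopped.
    row = conn.execute("SELECT last_id FROM etl_watermark WHERE job = 'orders'").fetchone()
    last_id = row[0] if row else 0
    rows = conn.execute("SELECT id, amount FROM src_orders WHERE id > ?", (last_id,)).fetchall()
    try:
        # Idempotent insert: a replayed batch overwrites rather than duplicates.
        conn.executemany("INSERT OR REPLACE INTO stg_orders (id, amount) VALUES (?, ?)", rows)
        if rows:
            last_id = max(r[0] for r in rows)
        conn.execute("INSERT OR REPLACE INTO etl_watermark (job, last_id) VALUES ('orders', ?)", (last_id,))
        conn.commit()  # data and watermark commit together: safe to restart
        log.info("loaded %d rows, watermark now %d", len(rows), last_id)
    except Exception:
        conn.rollback()  # nothing partial persists; the next run retries the batch
        raise

load_orders()

Because the watermark update and the data load commit in one transaction, a failed run leaves nothing behind and a rerun simply resumes from the last committed id.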

Posted 1 month ago

Apply

2 - 6 years

5 - 9 Lacs

Hyderabad

Work from Office

AWS Data Engineer
As an AWS Data Engineer, you will contribute to our client's work and will have the responsibilities below:
Work with the technical development team and team lead to understand desired application capabilities.
Develop applications following standard development lifecycles and continuous integration/deployment practices.
Integrate open-source components into data-analytic solutions.
Continuously learn and share learnings with others.

Required:
5+ years of directly applicable experience with a key focus on Glue, Python, AWS, and data pipeline creation.
Develop code using Python, such as:
developing data pipelines from various external data sources to internal data stores;
using Glue to extract data from the design database (a minimal Glue skeleton follows this listing);
developing Python APIs as needed.
Minimum 3 years of hands-on experience with Amazon Web Services, including EC2, VPC, S3, EBS, ELB, CloudFront, IAM, RDS, and CloudWatch.
Able to interpret business requirements and analyze, design and develop applications on AWS Cloud and ETL technologies.
Able to design and architect serverless applications using AWS Lambda, EMR, and DynamoDB.
Ability to leverage AWS data migration tools and technologies, including Storage Gateway, Database Migration Service and Import/Export services.
Understands relational database design, stored procedures, triggers, user-defined functions, and SQL jobs.
Familiar with CI/CD tools (e.g., Jenkins, UCD) for automated application deployments.
Understanding of OLAP, OLTP, star schema, snowflake schema, and logical/physical/dimensional data modeling.
Ability to extract data from multiple operational sources and load it into staging, data warehouses, data marts, etc., using SCD (Type 1/Type 2/Type 3/hybrid) loads.
Familiar with Software Development Life Cycle (SDLC) stages in Waterfall and Agile environments.

Nice to have:
Familiar with source control management tools for branching, merging, labeling/tagging and integration, such as Git and SVN.
Experience working with UNIX/Linux environments.
Hands-on experience with IDEs such as Jupyter Notebook.

Education & Certification
University degree or diploma and applicable years of experience.

Job Segment: Developer, Open Source, Data Warehouse, Cloud, Database, Technology
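
A minimal AWS Glue job skeleton in Python, of the kind this role would develop. It uses the standard awsglue boilerplate; the catalog database (sales_db), table (orders) and S3 path are hypothetical, and the sketch assumes they have already been registered (e.g., by a Glue crawler):

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue bootstrap: the runtime passes JOB_NAME as a job argument.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical catalog table, e.g. registered by a Glue crawler.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
)

# Light transform in Spark, then land curated Parquet in S3.
df = orders.toDF().filter("amount > 0")
df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()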

Posted 1 month ago

Apply

12 - 22 years

35 - 60 Lacs

Chennai

Hybrid

Warm greetings from SP Staffing Services Private Limited!
We have an urgent opening with our CMMI Level 5 client for the position below. Please send your updated profile if you are interested.
Relevant Experience: 8 - 24 Yrs
Location: Pan India

Job Description:
The Data Modeler will be responsible for the design, development, and maintenance of data models and standards for enterprise data platforms.
Build dimensional data models applying best practices and providing business insights.
Build data warehouses and data marts (on cloud) while performing data profiling and quality analysis.
Identify business needs and translate business requirements into conceptual, logical, physical and semantic, multi-dimensional (star, snowflake), normalized/denormalized, and Data Vault 2.0 models for the project. Knowledge of Snowflake and dbt is an added advantage.
Create and maintain the Source to Target Data Mapping document for the project, including documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, business rules, glossary terms, etc.
Develop best practices for standard naming conventions and coding practices to ensure consistency of data models.
Gather and publish data dictionaries: maintain data models, capture data models from existing databases, and record descriptive information.
Work with the development team to implement data strategies, build data flows and develop conceptual data models.
Create logical and physical data models using best practices to ensure high data quality and reduced redundancy (a minimal star-schema sketch follows this listing).
Optimize and update logical and physical data models to support new and existing projects.
Data profiling, business domain modeling, logical data modeling, and physical dimensional data modeling and design.
Data design and performance optimization for large data warehouse solutions.
Understand data through profiling and analysis: metadata (formats, definitions, valid values, boundaries) and relationships/usage.
Create relational and dimensional structures for large (multi-terabyte) operational, analytical, warehouse and BI systems.
Strong verbal and written communication skills.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
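
To make the dimensional-modeling expectations concrete, here is a minimal star-schema sketch using Python's built-in sqlite3: a customer dimension that keeps history Type 2 style, and a fact table keyed by the dimension's surrogate key. All table and column names are illustrative:

import sqlite3

conn = sqlite3.connect(":memory:")

# Type 2 dimension: each change closes the old row and opens a new one,
# so the surrogate key preserves history.
conn.execute("""
CREATE TABLE dim_customer (
    customer_sk  INTEGER PRIMARY KEY,   -- surrogate key
    customer_id  TEXT NOT NULL,         -- natural/business key
    segment      TEXT,
    valid_from   TEXT NOT NULL,
    valid_to     TEXT,                  -- NULL marks the current row
    is_current   INTEGER NOT NULL
)
""")

# Fact rows reference the surrogate key, not the natural key, so each fact
# stays attached to the version of the customer valid when it loaded.
conn.execute("""
CREATE TABLE fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    customer_sk  INTEGER NOT NULL REFERENCES dim_customer(customer_sk),
    sale_date    TEXT NOT NULL,
    amount       REAL NOT NULL
)
""")

# Two versions of the same customer: the old row is closed, the new one is current.
conn.executemany(
    "INSERT INTO dim_customer VALUES (?, ?, ?, ?, ?, ?)",
    [
        (1, "C-100", "retail",    "2023-01-01", "2024-06-30", 0),
        (2, "C-100", "corporate", "2024-07-01", None,         1),
    ],
)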

Posted 1 month ago

Apply

12 - 22 years

35 - 60 Lacs

Kolkata

Hybrid

Warm greetings from SP Staffing Services Private Limited!
We have an urgent opening with our CMMI Level 5 client for the position below. Please send your updated profile if you are interested.
Relevant Experience: 8 - 24 Yrs
Location: Pan India

Job Description:
The Data Modeler will be responsible for the design, development, and maintenance of data models and standards for enterprise data platforms.
Build dimensional data models applying best practices and providing business insights.
Build data warehouses and data marts (on cloud) while performing data profiling and quality analysis.
Identify business needs and translate business requirements into conceptual, logical, physical and semantic, multi-dimensional (star, snowflake), normalized/denormalized, and Data Vault 2.0 models for the project. Knowledge of Snowflake and dbt is an added advantage.
Create and maintain the Source to Target Data Mapping document for the project, including documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, business rules, glossary terms, etc.
Develop best practices for standard naming conventions and coding practices to ensure consistency of data models.
Gather and publish data dictionaries: maintain data models, capture data models from existing databases, and record descriptive information.
Work with the development team to implement data strategies, build data flows and develop conceptual data models.
Create logical and physical data models using best practices to ensure high data quality and reduced redundancy.
Optimize and update logical and physical data models to support new and existing projects.
Data profiling, business domain modeling, logical data modeling, and physical dimensional data modeling and design.
Data design and performance optimization for large data warehouse solutions.
Understand data through profiling and analysis: metadata (formats, definitions, valid values, boundaries) and relationships/usage.
Create relational and dimensional structures for large (multi-terabyte) operational, analytical, warehouse and BI systems.
Strong verbal and written communication skills.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment

Posted 1 month ago

Apply

12 - 22 years

35 - 60 Lacs

Noida

Hybrid

Warm greetings from SP Staffing Services Private Limited!
We have an urgent opening with our CMMI Level 5 client for the position below. Please send your updated profile if you are interested.
Relevant Experience: 8 - 24 Yrs
Location: Pan India

Job Description:
The Data Modeler will be responsible for the design, development, and maintenance of data models and standards for enterprise data platforms.
Build dimensional data models applying best practices and providing business insights.
Build data warehouses and data marts (on cloud) while performing data profiling and quality analysis.
Identify business needs and translate business requirements into conceptual, logical, physical and semantic, multi-dimensional (star, snowflake), normalized/denormalized, and Data Vault 2.0 models for the project. Knowledge of Snowflake and dbt is an added advantage.
Create and maintain the Source to Target Data Mapping document for the project, including documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, business rules, glossary terms, etc.
Develop best practices for standard naming conventions and coding practices to ensure consistency of data models.
Gather and publish data dictionaries: maintain data models, capture data models from existing databases, and record descriptive information.
Work with the development team to implement data strategies, build data flows and develop conceptual data models.
Create logical and physical data models using best practices to ensure high data quality and reduced redundancy.
Optimize and update logical and physical data models to support new and existing projects.
Data profiling, business domain modeling, logical data modeling, and physical dimensional data modeling and design.
Data design and performance optimization for large data warehouse solutions.
Understand data through profiling and analysis: metadata (formats, definitions, valid values, boundaries) and relationships/usage.
Create relational and dimensional structures for large (multi-terabyte) operational, analytical, warehouse and BI systems.
Strong verbal and written communication skills.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment

Posted 1 month ago

Apply