Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
8 - 10 years
0 Lacs
Ranchi, Jharkhand, India
Remote
Experience: 8.00+ years
Salary: USD 60000.00/year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT-05:00) America/Atikokan (EST)
Opportunity Type: Remote
Placement Type: Full-Time Contract for 12 Months (40 hrs a week / 160 hrs a month)
(Note: This is a requirement for one of Uplers' clients - Andela)

What do you need for this opportunity?
Must-have skills required: AWS (Amazon Web Services), MariaDB, Azure (Microsoft Azure), Azure Certified, Computer & Network Security, Computer Hardware, Cybersecurity, Oracle Database, MongoDB, MySQL, PostgreSQL

Andela is looking for:
Contract Duration: 12 months

Required skills:
MUST BE AZURE CERTIFIED.
8-10 years of senior Postgres DBA experience.
Must have successfully migrated on-prem databases into Azure.
Strong Postgres knowledge along with infrastructure knowledge within Azure.
Past experience with database migrations and toolsets into Azure.
Strong clustering and HA experience.
Troubleshoot complex database issues in an accurate and timely manner.
Maintain database disaster recovery procedures to ensure continuous availability and speedy recovery.
Ensure databases are deployed according to GBS standards and business requirements.
Identify and resolve database issues related to performance and capacity.
Ensure database management and maintenance tasks are performed effectively.
Ensure ticket SLA expectations are met.
Stay updated with new database technologies and analyse them for adoption into the existing infrastructure.
Able to switch between OLTP and OLAP environments.
Bachelor's degree in engineering and/or 8-10+ years of related DBA experience, with expert-level experience in at least one database technology platform (multi-platform preferred): MySQL, SQL Server, Oracle, Postgres, Cosmos DB, MongoDB, MariaDB. Multiple-platform experience is a plus.
The applicant will need a deep understanding of integrated security and the ability to participate in troubleshooting activities.
The ability to discuss database-related topics with both technical and business audiences.
Troubleshoot, investigate, and offer and execute resolutions to database issues.
Monitor and report on database storage utilization.
Experience writing and interpreting code in Postgres systems, with the ability to understand what others have developed.
Monitor, tune, and manage scheduled tasks, backup jobs, recovery processes, alerts, and database storage needs in line with firm change-control procedures.
Perform fault diagnosis, troubleshoot, and correct problems at the database and application performance level.
Work well in a team environment, within the database administration team as well as with other Technical Service Group teams and other departments within Wolters Kluwer.
Provide regular reports on the performance and stability of the database environment, proactively identifying upcoming needs to ensure continued reliable service.
Document, develop, test, and implement updates to database systems.
Enjoy constantly learning new technologies and contributing to the knowledge of others.
Work outside of regular business hours as required for project or operational work.

Experience:
Experienced database professional with at least 8-10+ years of experience in database administration, covering all aspects of setup, maintenance, troubleshooting, monitoring, and security.
Self-motivated, with the proven ability to work independently.
Takes ownership of and proactively seeks to improve existing systems.

Technical Skillsets:
Solid database administration.
Building cloud model servers end to end is a plus.
On-premises to cloud DB migrations.
Data security, backup & recovery tools.
Experience working with Windows Server, including Active Directory.
Excellent written and verbal communication.
Flexible, team player, "get-it-done" personality.
Ability to organize and plan work independently.
Ability to work in a rapidly changing environment.
Management of database environments in cloud solutions.

Soft Skills:
Past experience supervising staff preferred but not required.
Ability to work independently.
Team oriented, placing the success of the team over their own.
Mentors and guides other DBAs when there are improvement opportunities.
Drives their own development.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
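For a concrete flavor of the monitoring and capacity-reporting duties in this posting, here is a minimal sketch of the kind of health check a Postgres DBA might script. It assumes psycopg2 and a reachable instance; the DSN and the five-minute threshold are illustrative, not from the posting.

```python
import psycopg2

# Hypothetical DSN; in practice this comes from config or a secret store.
conn = psycopg2.connect("host=db.example.internal dbname=postgres user=monitor")
cur = conn.cursor()

# Queries running longer than 5 minutes -- candidates for investigation.
cur.execute("""
    SELECT pid, now() - query_start AS runtime, state, left(query, 80)
    FROM pg_stat_activity
    WHERE state <> 'idle' AND now() - query_start > interval '5 minutes'
    ORDER BY runtime DESC
""")
for pid, runtime, state, query in cur.fetchall():
    print(f"pid={pid} runtime={runtime} state={state} query={query}")

# Rough database-level storage utilization, for capacity reporting.
cur.execute("""
    SELECT datname, pg_size_pretty(pg_database_size(datname))
    FROM pg_database WHERE NOT datistemplate
""")
for name, size in cur.fetchall():
    print(f"{name}: {size}")
conn.close()
```

A production version would feed these numbers into the alerting stack rather than printing them.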
Posted 1 month ago
5 - 8 years
0 Lacs
Pune, Maharashtra, India
Hybrid
Infosys Consulting, the management consulting business of Infosys, is looking to hire a Data Engineer. You should have between 3 and 10 years of data engineering experience. Successful candidates will be part of a specialized customer-facing Advanced Analytics team in our rapidly expanding Automation & Artificial Intelligence Consulting Practice. They will work closely with our AI Consultants and SMEs to define and deliver best-in-class solutions. You will be entrepreneurial, inspired by execution excellence, possess strong design skills for creating client deliverables and bid documents, be highly analytical, emotionally intelligent, and a team player, and only be satisfied when your work has a meaningful impact. You will help our clients identify and qualify opportunities to leverage automation to unlock business value and maintain a competitive edge. Infosys Consulting offers an exceptional opportunity for personal growth, and, alongside client engagements, you will be recognized for your personal contribution to leading and developing new opportunities, thought leadership, and service offerings in the domains of artificial intelligence and automation.

Responsibilities:
You will need to be well spoken and have an easy time establishing productive, long-lasting working relationships with a large variety of stakeholders.
Take the lead on data pipeline design with strong analytical skills and a keen eye for detail to really understand and tackle the challenges businesses are facing.
You will be confronted with a large variety of data engineering tools and other new technologies, as well as a wide variety of IT, compliance, and security related issues.
Design and develop world-class technology solutions to solve business problems across multiple client engagements.
Collaborate with other teams to understand business requirements, client infrastructure, platforms, and overall strategy to ensure seamless transitions.
Work closely with the AI and Automation team to build world-class solutions and to define AI strategy.
You will possess strong logical structuring and problem-solving skills with an expert-level understanding of databases and an inherent desire to turn data into actions.
Strong verbal, written, and presentation skills.

Requirements:
3-10 years of strong Python or Java data engineering experience.
Experience in developing data pipelines that process large volumes of data using Python, PySpark, Pandas, etc., on AWS.
Experience in developing ETL, OLAP-based, and analytical applications.
Experience in ingesting batch and streaming data from various data sources.
Strong experience in writing complex SQL using any RDBMS (Oracle, PostgreSQL, SQL Server, etc.).
Ability to quickly learn and develop expertise in existing highly complex applications and architectures.
Exposure to the AWS platform's data services (AWS Lambda, Glue, Athena, Redshift, Kinesis, etc.).
Experience with Airflow DAGs, AWS EMR, S3, IAM, and other services.
Experience writing test cases using pytest, unittest, or any other framework.
Snowflake or Redshift data warehouses.
Experience with DevOps and CI/CD tools.
Familiarity with REST APIs.
Clear and precise communication skills.
Experience with CI/CD pipelines, branching strategies, and Git for code management.
Comfortable working in Agile projects.
Bachelor's degree in computer science, information technology, or a similar field.

Locations Offered: Bengaluru, Hyderabad, Chennai, Noida, Gurgaon, Chandigarh, Pune, Navi Mumbai, Indore, Kolkata, Bhubaneswar, Mysuru, Trivandrum
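For flavor, here is a minimal sketch of the kind of PySpark batch pipeline this role describes: read raw data from S3, clean it, and write partitioned Parquet for downstream analytics. The bucket paths and columns are hypothetical, and a real job would add schema enforcement and error handling.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-batch").getOrCreate()

# Hypothetical raw zone: one CSV drop per day from an upstream source.
orders = spark.read.option("header", True).csv("s3://raw-zone/orders/2024-01-01/")

# Typical cleanup: cast types, drop obvious bad rows, derive a partition column.
cleaned = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
)

# Write to the curated zone as Parquet, partitioned for downstream OLAP queries.
(cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://curated-zone/orders/"))
```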
Posted 1 month ago
4 - 6 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description: Database Administrator (DBA)

We are a fast-growing healthtech company building an end-to-end hospital management platform serving public and private hospitals across India. Our cloud-native platform is built on Spring Boot (Java), Angular, and GCP (Google Cloud Platform), with a strong focus on performance, scalability, and compliance (HIPAA, NDHM). We are looking for an experienced and hands-on Database Administrator (DBA) who can work closely with our engineering team to design, optimize, and scale database infrastructure on GCP/AWS and related services.

Location: Ahmedabad (On-site)
Experience: 4-6 Years

Key Responsibilities:
Database Optimization & Performance Tuning: Analyze query performance; optimize indexes, partitions, and schema structure.
Database Design: Design scalable, normalized, and efficient relational schemas.
Multitenancy & Scalability: Architect and implement robust multi-tenant database strategies.
Google Cloud Platform (GCP): Work with Google Cloud SQL and Cloud Spanner, and understand IAM, backups, and high availability.
Data Security & Compliance: Ensure encryption, RBAC, and support for HIPAA compliance.
Collaboration & Development Support: Support Java developers in optimized DB interactions using JPA/Hibernate.

Required Skills & Qualifications:
5+ years of experience as a production DBA (MySQL/PostgreSQL).
Strong expertise in query optimization, indexing, and partitioning.
Deep understanding of multi-tenant DB design patterns.
Hands-on experience with GCP/AWS managed database services.
Experience in the healthcare domain or transaction-heavy enterprise systems.
Understanding of schema migration tools (e.g., Liquibase).
Basic knowledge of Java backend systems and how ORMs work.
Excellent problem-solving and communication skills.
Experience with data archiving strategies and OLAP vs. OLTP.
Exposure to Power BI or other BI reporting tools.

Nice to Have:
Familiarity with Spring Boot/JPA/Hibernate interactions with an RDBMS.
Familiarity with ElasticSearch or NoSQL for read-heavy use cases.
Familiarity with FHIR/HL7 healthcare databases.

Why Join Us?
Build the infrastructure of a fast-scaling healthcare platform impacting millions of lives.
Work directly with the VP of Engineering and the core architecture team.
Opportunity to contribute to public health systems, government hospitals, and health-tech innovation.
Autonomy, ownership, and a learning-first culture.

If interested, please share your resume at aarohi.patel@artemhealthtech.com with the below-mentioned details:
Total Exp:
Rel. Exp:
Current Company:
Current Designation:
Current Location:
Current Salary:
Expected Salary:
Official Notice Period:
How early you can join:
Any Offer in Hand (Mention Package and Location):
Reason for Change:
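One common Postgres pattern for the multitenancy requirement above is row-level security keyed on a tenant column. A minimal illustrative sketch, with a hypothetical schema and connection string (not taken from the posting):

```python
import psycopg2

# Hypothetical connection; swap in real credentials.
conn = psycopg2.connect("dbname=hms user=app")
cur = conn.cursor()

# One shared table for all tenants, with a tenant_id discriminator column.
cur.execute("""
    CREATE TABLE IF NOT EXISTS patient_visits (
        id         bigserial PRIMARY KEY,
        tenant_id  text NOT NULL,
        visited_at timestamptz NOT NULL DEFAULT now()
    )
""")

# Enable row-level security; FORCE makes it apply even to the table owner.
cur.execute("ALTER TABLE patient_visits ENABLE ROW LEVEL SECURITY")
cur.execute("ALTER TABLE patient_visits FORCE ROW LEVEL SECURITY")
cur.execute("""
    CREATE POLICY tenant_isolation ON patient_visits
    USING (tenant_id = current_setting('app.tenant_id'))
""")
conn.commit()

# Application code sets the tenant for the session before querying;
# every subsequent query is transparently scoped to that tenant's rows.
cur.execute("SET app.tenant_id = 'hospital_42'")
cur.execute("SELECT count(*) FROM patient_visits")
print(cur.fetchone())
```

Schema-per-tenant and database-per-tenant are the usual alternatives; RLS trades stronger physical isolation for simpler operations.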
Posted 1 month ago
7 - 10 years
0 Lacs
Chennai
Work from Office
Job Title: Data Architect - Oracle Fusion Data Warehouse (DWH)
Location: Chennai
Job Type: Full-Time, Office Role
Department: Analytics
Reports To: BI Lead

We are seeking a highly experienced Data Architect with deep expertise in Oracle Fusion and enterprise data warehousing to lead the design, development, and implementation of our enterprise data architecture. The ideal candidate will have a strong understanding of Oracle Fusion Applications (ERP, HCM, SCM), Oracle Autonomous Data Warehouse (ADW), and best practices for building scalable and secure enterprise data platforms.

Key Responsibilities:
Lead architecture design and implementation of the enterprise data warehouse using Oracle Fusion data sources.
Design and maintain conceptual, logical, and physical data models.
Define data integration strategies for extracting data from Oracle Fusion SaaS applications into the enterprise DWH.
Collaborate with data engineers, business analysts, and application owners to understand business requirements and translate them into scalable data architecture solutions.
Guide the development of ETL/ELT pipelines using tools like Oracle Data Integrator (ODI), Oracle Integration Cloud (OIC), or custom solutions.
Ensure data quality, integrity, security, and governance across all data layers.
Establish and enforce standards and best practices for data modeling, metadata management, and master data management (MDM).
Optimize data warehouse performance, cost, and scalability (especially in ADW/OCI environments).
Provide architectural direction during Oracle Fusion implementation/migration projects.

Required Qualifications:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
8+ years of experience in enterprise data architecture, with 3+ years focused on Oracle Fusion SaaS and DWH integrations.
Proven expertise in Oracle Fusion Applications (ERP, HCM, SCM) and Oracle Autonomous Data Warehouse (ADW).
Experience with tools such as ODI, OIC, GoldenGate, or Talend.
Strong knowledge of data modeling (star/snowflake), data warehousing, and performance tuning.
Familiarity with REST/SOAP APIs and FBDI/AOR data extraction mechanisms in Fusion.
Solid understanding of data governance, security, and privacy practices.
Excellent problem-solving and communication skills.

Preferred Skills:
Experience with reporting tools such as Oracle Analytics Cloud (OAC), Power BI, or Tableau.
Knowledge of cloud architectures (OCI, AWS, Azure) and hybrid environments.
Certification in Oracle Fusion or related cloud technologies is a plus.
Experience with DevOps/CI-CD pipelines for data workloads.
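To make the star-schema requirement concrete: a central fact table of measures joined to descriptive dimension tables. A toy sketch follows, using SQLite purely so it runs anywhere; the tables are invented for illustration, and a real build would target ADW.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # illustration only; a real DWH would be ADW/OCI

conn.executescript("""
    -- Dimensions hold descriptive attributes users slice and filter by.
    CREATE TABLE dim_date (
        date_key INTEGER PRIMARY KEY,   -- e.g. 20240101
        full_date TEXT, fiscal_quarter TEXT
    );
    CREATE TABLE dim_supplier (
        supplier_key INTEGER PRIMARY KEY,
        supplier_name TEXT, region TEXT
    );
    -- The fact table holds measures plus foreign keys to each dimension.
    CREATE TABLE fact_purchase_orders (
        date_key INTEGER REFERENCES dim_date(date_key),
        supplier_key INTEGER REFERENCES dim_supplier(supplier_key),
        order_amount REAL,
        order_count INTEGER
    );
""")

# A typical analytic query joins the fact to its dimensions and aggregates.
rows = conn.execute("""
    SELECT d.fiscal_quarter, s.region, SUM(f.order_amount)
    FROM fact_purchase_orders f
    JOIN dim_date d ON d.date_key = f.date_key
    JOIN dim_supplier s ON s.supplier_key = f.supplier_key
    GROUP BY d.fiscal_quarter, s.region
""").fetchall()
print(rows)
```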
Posted 1 month ago
6 - 9 years
20 - 25 Lacs
Bengaluru
Hybrid
Company Description
Epsilon is the leader in outcome-based marketing. We enable marketing that's built on proof, not promises. Through Epsilon PeopleCloud, the marketing platform for personalizing consumer journeys with performance transparency, Epsilon helps marketers anticipate, activate and prove measurable business outcomes. Powered by CORE ID, the most accurate and stable identity management platform representing 200+ million people, Epsilon's award-winning data and technology is rooted in privacy by design and underpinned by powerful AI. With more than 50 years of experience in personalization and performance working with the world's top brands, agencies and publishers, Epsilon is a trusted partner leading CRM, digital media, loyalty and email programs. Positioned at the core of Publicis Groupe, Epsilon is a global company with over 8,000 employees in over 40 offices around the world. For more information, visit https://www.epsilon.com/apac (APAC). Follow us on Twitter at @EpsilonMktg. To view how Epsilon transforms marketing with 1 View, 1 Vision and 1 Voice, see https://www.epsilon.com/apac/youniverse. Wondering what it's like to work with Epsilon? Check out this video that captures the spirit of our resilient minds, our values and our great culture.

Job Description
The Product team forms the crux of our powerful platforms and connects millions of customers to the product magic. This team of innovative thinkers develops and builds products that help Epsilon be a market differentiator. They map the future and set new standards for our products, empowered with industry best practices, ML and AI capabilities. The team passionately delivers intelligent end-to-end solutions and plays a key role in Epsilon's success story. The candidate will be a Senior Software Engineer for the Business Intelligence team in the Product Engineering group. The Business Intelligence team partners with internal and external clients and technology providers to develop, implement, and manage state-of-the-art data analytics, business intelligence, and data visualization solutions for our marketing products. The Sr. Software Engineer will be an individual with strong technical expertise in business intelligence and analytics solutions/tools, and will work on the BI strategy in terms of toolset selection, report and visualization best practices, team training, and environment efficiency.

Why we are looking for you
You are an individual with a combination of technical leadership and architectural design skills.
You have a solid foundation in business intelligence and analytics solutions/tools.
You have experience in product engineering and software development using Tableau, SAP Business Objects, and Kibana dashboard development.
You have experience in data integration tools like Databricks.
You excel at collaborating with different stakeholders (ERP, CRM, Data Hub, and business stakeholders) to drive success.
You have strong experience building reusable database components using SQL queries.
You enjoy new challenges and are solution oriented.
You like mentoring people and enabling collaboration of the highest order.

What you will enjoy in this role
As part of the Epsilon Product Engineering team, the pace of the work matches the fast-evolving demands of Fortune 500 clients across the globe. As part of an innovative team that's not afraid to take risks, your ideas will come to life in digital marketing products that support more than 50% of automotive dealers in the US.
An open and transparent environment that values innovation and efficiency. Exposure to all the different Epsilon products, where reporting plays a key role in efficient decision-making for end users.

What you will do
Work on our BI strategy in terms of toolset selection, report and visualization best practices, team training, and environment efficiency.
Analyze requirements and design data analytics and enterprise reporting solutions in various frameworks (such as Tableau, SAP Business Objects, and others) as part of enterprise, multi-tier, customer-facing applications.
Develop data analytics solutions and enterprise reporting solutions hands-on in frameworks such as Tableau, SAP Business Objects, and Kibana; scripting skills in Python are good to have.
Build data integration and aggregate pipelines using Databricks.
Provide estimates for BI solutions to be developed and deployed.
Develop and support cloud infrastructure for BI solutions, including automation, process definition, and support documentation as required.
Work in an agile environment and align with agile/scrum methodology for development work.
Follow data management processes and procedures and provide input to the creation of data definitions, business rules, and data access methods.
Collaborate with database administrators and data warehouse architects on data access patterns to optimize data visualization and processing.
Assess and propose infrastructure designs for BI solutions catering to system availability and fault tolerance needs.
Establish best practices for workloads on multi-tenant deployments.
Document solutions and train implementation and operational support teams.
Assess gaps in solutions and make recommendations on how to solve the problem.
Understand the priorities of various projects and help steer organizational tradeoffs to focus on the most important initiatives.
Show initiative and take responsibility for decisions that impact project and team goals.

Qualifications
BE/B.Tech/MCA only; no correspondence course.
7+ years of overall technical hands-on experience; supervisory experience is good to have.
Experience in developing BI solutions in enterprise reporting frameworks.
Experience in designing the semantic layer in reporting frameworks and developing reporting models on OLTP or OLAP environments.
Experience working with large data sets, both structured and unstructured, data warehouses, and data lakes.
Strong knowledge of multitenancy concepts; object, folder, and user group templates; and user access models in BI reporting tool frameworks, including single sign-on integrations with identity and access management systems such as Okta.
Experience in performing periodic sizing and establishing monitoring, backup, and restore procedures catering to MTTR and MTBF expectations.
Working knowledge of OLTP and relational database concepts, plus data warehouse concepts/best practices and data modeling.
Experience in documenting technical designs, procedures, and reusable artifacts, and providing technical guidance as needed.
Familiarity with cloud stacks (AWS, Azure), cloud deployments, and tools.
Ability to work on multiple assignments concurrently.
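Much of the reporting-model work described above amounts to pre-aggregating facts at several grains. A small illustrative PySpark sketch using rollup, which produces per-dealer rows, per-country subtotals, and a grand total (the data and columns are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bi-aggregates").getOrCreate()

# Hypothetical fact data; in practice this would be read from the lakehouse.
sales = spark.createDataFrame(
    [("US", "dealer_a", 120.0), ("US", "dealer_b", 80.0), ("CA", "dealer_c", 50.0)],
    ["country", "dealer", "amount"],
)

# rollup() emits one row per (country, dealer), one subtotal per country,
# and a grand total -- the hierarchy a drill-down report expects.
agg = (
    sales.rollup("country", "dealer")
    .agg(F.sum("amount").alias("total_amount"))
    .orderBy("country", "dealer")
)
agg.show()
```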
Posted 1 month ago
8 - 10 years
25 - 30 Lacs
Bengaluru
Work from Office
Number of Openings*: 1
ECMS Request No. in sourcing stage*: 525266
Duration of contract*: 12 Months
Total Yrs. of Experience*: 8-10 Yrs.
Detailed JD* (Roles and Responsibilities):
Manage and maintain NoSQL database systems to ensure optimal performance; monitor database health and troubleshoot performance issues; implement and maintain database security measures to protect sensitive data; collaborate with development teams to design efficient data models; perform database backups and develop disaster recovery plans.
Design, manage, and optimize relational databases; configure, deploy, and support SQL Server databases; ensure data security and integrity while managing SQL databases.
Analyze and translate business needs into data models; develop conceptual, logical, and physical data models; create and enforce database development standards; validate and reconcile data models to ensure accuracy; maintain and update existing data models.
Mandatory skills*: Knowledge of OLTP and OLAP data modeling; NoSQL DB (MongoDB preferred).
Desired skills*: Should be good at SQL and PL/SQL; experience in MySQL is a bonus. Must have the interpersonal skills to work with the client and understand the data model of insurance systems.
Domain*: Insurance
Approx. vendor billing rate excluding service tax*: 7588 INR/Day
Precise Work Location* (e.g. Bangalore Infosys SEZ or STP): No constraint; Mumbai, Bengaluru, Pune preferred
BG Check (Before OR After onboarding): Pre-Onboarding
Any client prerequisite BGV Agency*: NA
Working in shifts outside standard daylight hours (to avoid confusion post onboarding)*: IST only
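As a small illustration of the MongoDB administration duties listed in this JD, here is a sketch using PyMongo; the connection string, collection, and fields are hypothetical.

```python
from pymongo import MongoClient, ASCENDING

# Hypothetical connection string; real deployments use auth and TLS.
client = MongoClient("mongodb://localhost:27017")
db = client["insurance"]

# Index the fields that policy-lookup queries filter on, to keep reads fast.
db.policies.create_index([("policy_no", ASCENDING)], unique=True)
db.policies.create_index([("holder_id", ASCENDING), ("status", ASCENDING)])

# Routine health checks a DBA might script: server status and collection stats.
status = db.command("serverStatus")
print("connections:", status["connections"]["current"])
print("policies size:", db.command("collStats", "policies")["size"], "bytes")
```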
Posted 1 month ago
2 - 6 years
5 - 9 Lacs
Hyderabad
Work from Office
AWS Data Engineer:
*****************
As an AWS Data Engineer, you will contribute to our client and will have the below responsibilities:
Work with the technical development team and team lead to understand desired application capabilities.
The candidate will need to develop using application development lifecycles and continuous integration/deployment practices.
Work to integrate open-source components into data-analytic solutions.
Willingness to continuously learn and share learnings with others.

Required:
5+ years of directly applicable experience with key focus on Glue, Python, AWS, and data pipeline creation.
Develop code using Python, such as:
Developing data pipelines from various external data sources to internal data.
Use of Glue for extracting data from the design database.
Developing Python APIs as needed.
Minimum 3 years of hands-on experience in Amazon Web Services including EC2, VPC, S3, EBS, ELB, CloudFront, IAM, RDS, CloudWatch.
Able to interpret business requirements, then analyze, design, and develop applications on AWS Cloud and ETL technologies.
Able to design and architect serverless applications using AWS Lambda, EMR, and DynamoDB.
Ability to leverage AWS data migration tools and technologies including Storage Gateway, Database Migration, and Import/Export services.
Understands relational database design, stored procedures, triggers, user-defined functions, and SQL jobs.
Familiar with CI/CD tools (e.g., Jenkins, UCD) for automated application deployments.
Understanding of OLAP, OLTP, Star Schema, Snowflake Schema, and Logical/Physical/Dimensional data modeling.
Ability to extract data from multiple operational sources and load into staging, data warehouses, data marts, etc. using SCD (Type 1/Type 2/Type 3/Hybrid) loads.
Familiar with Software Development Life Cycle (SDLC) stages in Waterfall and Agile environments.

Nice to have:
Familiar with the use of source control management tools for branching, merging, labeling/tagging, and integration, such as Git and SVN.
Experience working with UNIX/Linux environments.
Hands-on experience with IDEs such as Jupyter Notebook.

Education & Certification:
University degree or diploma and applicable years of experience.

Job Segment: Developer, Open Source, Data Warehouse, Cloud, Database, Technology
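Glue ETL scripts like the ones this role describes typically start from AWS's standard job scaffold. A minimal sketch (the catalog database, table, and output path are hypothetical):

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue boilerplate: resolve job args and build contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract from a (hypothetical) Data Catalog table backed by the source database.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="claims_db", table_name="raw_claims"
)

# Transform with plain Spark, then load as Parquet for downstream analytics.
df = dyf.toDF().filter("claim_amount > 0")
df.write.mode("append").parquet("s3://analytics-bucket/curated/claims/")

job.commit()
```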
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.

About The Role
Role Description: Let’s do this. Let’s change the world. We are looking for a highly motivated expert Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will be responsible for designing, developing, and maintaining data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role calls for deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets.
Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems.
Design and implement solutions to enable unified data access, governance, and interoperability across hybrid cloud environments.
Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms.
Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
Expert in data quality, data validation, and verification frameworks.
Innovate, explore, and implement new tools and technologies to enhance efficient data processing.
Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
Collaborate and communicate effectively with the product teams and with cross-functional teams to understand business requirements and translate them into technical solutions.

Must-Have Skills:
Hands-on experience in data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
Proficiency in workflow orchestration and performance tuning on big data processing.
Strong understanding of AWS services.
Ability to quickly learn, adapt, and apply new technologies.
Strong problem-solving and analytical skills.
Excellent communication and teamwork skills.
Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.
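The responsibilities above stress data quality and validation frameworks; here is a minimal sketch of a rule-based check such a pipeline might run before publishing a dataset. The path, columns, and rules are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://curated/lab_results/")  # hypothetical path

# Each rule is a name plus a boolean condition; rows failing a rule are "bad".
rules = {
    "non_null_patient": F.col("patient_id").isNotNull(),
    "valid_result": F.col("result_value").between(0, 10_000),
}

failed = {name: df.filter(~cond).count() for name, cond in rules.items()}
total = df.count()
for name, n in failed.items():
    print(f"{name}: {n}/{total} rows failed")

# Fail the run if any rule is violated, rather than publishing bad data.
assert all(n == 0 for n in failed.values()), f"data quality violations: {failed}"
```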
Good-to-Have Skills:
Data engineering experience in the biotechnology or pharma industry.
Experience in writing APIs to make data available to consumers.
Experience with SQL/NoSQL databases and vector databases for large language models.
Experience with data modeling and performance tuning for both OLAP and OLTP databases.
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications
Minimum 5 to 8 years of Computer Science, IT, or related field experience.
AWS Certified Data Engineer preferred.
Databricks certification preferred.
Scaled Agile SAFe certification preferred.

Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Ability to learn quickly, be organized and detail oriented.
Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 1 month ago
0.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Job Information
Date Opened: 05/08/2025
Job Type: Full time
Industry: Software Product
City: Chennai
State/Province: Tamil Nadu
Country: India
Zip/Postal Code: 600017

Job Description
Pando is a global leader in supply chain technology, building the world's quickest time-to-value Fulfillment Cloud platform. Pando’s Fulfillment Cloud provides manufacturers, retailers, and 3PLs with a single pane of glass to streamline end-to-end purchase order fulfillment and customer order fulfillment to improve service levels, reduce carbon footprint, and bring down costs. As a partner of choice for Fortune 500 enterprises globally, with a presence across APAC, the Middle East, and the US, Pando is recognized as a Technology Pioneer by the World Economic Forum (WEF), and as one of the fastest growing technology companies by Deloitte.

Role
As the Senior Lead for AI and Data Warehouse at Pando, you will be responsible for building and scaling the data and AI services team. You will drive the design and implementation of highly scalable, modular, and reusable data pipelines, leveraging big data technologies and low-code implementations. This is a senior leadership position where you will work closely with cross-functional teams to deliver solutions that power advanced analytics, dashboards, and AI-based insights.

Key Responsibilities
Lead the development of scalable, high-performance data pipelines using PySpark or Big Data ETL pipeline technologies.
Drive data modeling efforts for analytics, dashboards, and knowledge graphs.
Oversee the implementation of Parquet-based data lakes.
Work on OLAP databases, ensuring optimal data structure for reporting and querying.
Architect and optimize large-scale enterprise big data implementations with a focus on modular and reusable low-code libraries.
Collaborate with stakeholders to design and deliver AI and DWH solutions that align with business needs.
Mentor and lead a team of engineers, building out the data and AI services organization.

Requirements
8-10 years of experience in big data and AI technologies, with expertise in PySpark or similar Big Data ETL pipeline technologies.
Strong proficiency in SQL and OLAP database technologies.
Firsthand experience with data modeling for analytics, dashboards, and knowledge graphs.
Proven experience with Parquet-based data lake implementations.
Expertise in building highly scalable, high-volume data pipelines.
Experience with modular, reusable, low-code-based implementations.
Involvement in large-scale enterprise big data implementations.
Initiative-taker with strong motivation and the ability to lead a growing team.

Preferred
Experience leading a team or building out a new department.
Experience with cloud-based data platforms and AI services.
Familiarity with supply chain technology or fulfillment platforms is a plus.

Join us at Pando and lead the transformation of our AI and data services, delivering innovative solutions for global enterprises!
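A quick illustration of why the Parquet-based data lake above matters: partitioned columnar files let query engines skip whole directories when a filter matches a partition key. A minimal pandas/pyarrow sketch with invented shipment data:

```python
import pandas as pd

# Hypothetical shipment facts; a real lake would ingest these from upstream feeds.
shipments = pd.DataFrame({
    "region": ["APAC", "APAC", "US"],
    "carrier": ["c1", "c2", "c1"],
    "cost": [120.5, 98.0, 210.0],
})

# Partitioned Parquet lays out one directory per region value, so engines
# prune whole directories when a query filters on `region`.
shipments.to_parquet("datalake/shipments/", partition_cols=["region"], index=False)

# Reading back only one partition touches only that directory.
apac = pd.read_parquet("datalake/shipments/", filters=[("region", "=", "APAC")])
print(apac)
```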
Posted 1 month ago
12 - 22 years
35 - 60 Lacs
Chennai
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 - 24 Yrs
Location: Pan India

Job Description:
The Data Modeler will be responsible for the design, development, and maintenance of data models and standards for enterprise data platforms.
Build dimensional data models applying best practices and providing business insights.
Build data warehouses & data marts (on Cloud) while performing data profiling and quality analysis.
Identify business needs and translate business requirements into conceptual, logical, physical, semantic, multi-dimensional (star, snowflake), normalized/denormalized, and Data Vault 2.0 models for the project. Knowledge of Snowflake and dbt is an added advantage.
Create and maintain the Source to Target Data Mapping document for this project, including documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, business rules, glossary terms, etc.
Develop best practices for standard naming conventions and coding practices to ensure consistency of data models.
Gather and publish data dictionaries: maintain data models, capture data models from existing databases, and record descriptive information.
Work with the development team to implement data strategies, build data flows, and develop conceptual data models.
Create logical and physical data models using best practices to ensure high data quality and reduced redundancy.
Optimize and update logical and physical data models to support new and existing projects.
Data profiling, business domain modeling, logical data modeling, and physical dimensional data modeling and design.
Data design and performance optimization for large data warehouse solutions.
Understanding data: profiling and analysis (metadata (formats, definitions, valid values, boundaries), relationships/usage).
Create relational and dimensional structures for large (multi-terabyte) operational, analytical, warehouse, and BI systems.
Should be good in verbal and written communication.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
Posted 1 month ago
12 - 22 years
35 - 60 Lacs
Kolkata
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 - 24 Yrs
Location: Pan India

Job Description:
The Data Modeler will be responsible for the design, development, and maintenance of data models and standards for enterprise data platforms.
Build dimensional data models applying best practices and providing business insights.
Build data warehouses & data marts (on Cloud) while performing data profiling and quality analysis.
Identify business needs and translate business requirements into conceptual, logical, physical, semantic, multi-dimensional (star, snowflake), normalized/denormalized, and Data Vault 2.0 models for the project. Knowledge of Snowflake and dbt is an added advantage.
Create and maintain the Source to Target Data Mapping document for this project, including documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, business rules, glossary terms, etc.
Develop best practices for standard naming conventions and coding practices to ensure consistency of data models.
Gather and publish data dictionaries: maintain data models, capture data models from existing databases, and record descriptive information.
Work with the development team to implement data strategies, build data flows, and develop conceptual data models.
Create logical and physical data models using best practices to ensure high data quality and reduced redundancy.
Optimize and update logical and physical data models to support new and existing projects.
Data profiling, business domain modeling, logical data modeling, and physical dimensional data modeling and design.
Data design and performance optimization for large data warehouse solutions.
Understanding data: profiling and analysis (metadata (formats, definitions, valid values, boundaries), relationships/usage).
Create relational and dimensional structures for large (multi-terabyte) operational, analytical, warehouse, and BI systems.
Should be good in verbal and written communication.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
Posted 1 month ago
12 - 22 years
35 - 60 Lacs
Noida
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 - 24 Yrs
Location: Pan India

Job Description:
The Data Modeler will be responsible for the design, development, and maintenance of data models and standards for enterprise data platforms.
Build dimensional data models applying best practices and providing business insights.
Build data warehouses & data marts (on Cloud) while performing data profiling and quality analysis.
Identify business needs and translate business requirements into conceptual, logical, physical, semantic, multi-dimensional (star, snowflake), normalized/denormalized, and Data Vault 2.0 models for the project. Knowledge of Snowflake and dbt is an added advantage.
Create and maintain the Source to Target Data Mapping document for this project, including documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, business rules, glossary terms, etc.
Develop best practices for standard naming conventions and coding practices to ensure consistency of data models.
Gather and publish data dictionaries: maintain data models, capture data models from existing databases, and record descriptive information.
Work with the development team to implement data strategies, build data flows, and develop conceptual data models.
Create logical and physical data models using best practices to ensure high data quality and reduced redundancy.
Optimize and update logical and physical data models to support new and existing projects.
Data profiling, business domain modeling, logical data modeling, and physical dimensional data modeling and design.
Data design and performance optimization for large data warehouse solutions.
Understanding data: profiling and analysis (metadata (formats, definitions, valid values, boundaries), relationships/usage).
Create relational and dimensional structures for large (multi-terabyte) operational, analytical, warehouse, and BI systems.
Should be good in verbal and written communication.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
Posted 1 month ago
0.0 - 20.0 years
0 Lacs
Bengaluru, Karnataka
On-site
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query language (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting language (e.g., Python, KornShell)

Over the past 20 years Amazon has earned the trust of over 300 million customers worldwide by providing unprecedented convenience, selection and value on Amazon.com. By deploying Amazon Pay’s products and services, merchants make it easy for these millions of customers to safely purchase from their third-party sites using the information already stored in their Amazon account. In this role, you will lead data engineering efforts to drive automation for the Amazon Pay organization. You will be part of the data engineering team that will envision, build, and deliver high-performance, fault-tolerant data pipelines. As a Data Engineer, you will be working with cross-functional partners from Science, Product, SDEs, Operations, and leadership to translate raw data into actionable insights for stakeholders, empowering them to make data-driven decisions.

Key job responsibilities
· Design, implement, and support a platform providing ad-hoc access to large data sets
· Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
· Implement data structures using best practices in data modeling, ETL/ELT processes, and SQL, Redshift, and OLAP technologies
· Model data and metadata for ad-hoc and pre-built reporting
· Interface with business customers, gathering requirements and delivering complete reporting solutions
· Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark
· Build and deliver high quality data sets to support business analysts, data scientists, and customer reporting needs
· Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers

Experience with big data technologies such as Hadoop, Hive, Spark, EMR
Experience with any ETL tool like Informatica, ODI, SSIS, BODI, Datastage, etc.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 1 month ago
3 - 6 years
9 - 13 Lacs
Bengaluru
Work from Office
Location: Tower 02, Manyata Embassy Business Park, Racenahali & Nagawara Villages, Outer Ring Rd, Bangalore 540065
Time type: Full time
Posted on: Posted 5 Days Ago
Job requisition ID: R0000388711

About us:
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII:
At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Team Overview:
Every time a guest enters a Target store or browses Target.com or the app, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 5,000 engineers, data scientists, architects, and product managers striving to make Target the most convenient, safe, and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation, and continuous learning. At Target, we are gearing up for exponential growth and continuously expanding our guest experience. To support this expansion, Data Engineering is building robust warehouses and enhancing existing datasets to meet business needs across the enterprise. We are looking for talented individuals who are passionate about innovative technology and data warehousing and are eager to contribute to data engineering.

Position Overview:
Assess client needs and convert business requirements into a business intelligence (BI) solutions roadmap relating to complex issues involving long-term or multi-work streams.
Analyze technical issues and questions, identifying data needs and delivery mechanisms.
Implement data structures using best practices in data modeling, ETL/ELT processes, Spark, Scala, SQL, database, and OLAP technologies.
Manage the overall development cycle, driving best practices and ensuring development of high-quality code for common assets and framework components.
Develop test-driven solutions, provide technical guidance, and contribute heavily to a team of high-caliber data engineers by developing test-driven solutions and BI applications that can be deployed quickly and in an automated fashion.
Manage and execute against agile plans and set deadlines based on client, business, and technical requirements.
Drive resolution of technology roadblocks including code, infrastructure, build, deployment, and operations.
Ensure all code adheres to development and security standards.

About you:
4-year degree or equivalent experience
5+ years of software development experience, preferably in data engineering/Hadoop development (Hive, Spark, etc.)
Hands-on experience in object-oriented or functional programming such as Scala/Java/Python
Knowledge of or experience with a variety of database technologies (Postgres, Cassandra, SQL Server)
Knowledge of data integration design using API and streaming technologies (Kafka) as well as ETL and other data integration patterns
Experience with cloud platforms like Google Cloud, AWS, or Azure; hands-on experience with BigQuery will be an added advantage
Good understanding of distributed storage (HDFS, Google Cloud Storage, Amazon S3) and processing (Spark, Google Dataproc, Amazon EMR, or Databricks)
Experience with a CI/CD toolchain (Drone, Jenkins, Vela, Kubernetes) a plus
Familiarity with data warehousing concepts and technologies
Maintains technical knowledge within areas of expertise
Constant learner and team player who enjoys solving tech challenges with a global team
Hands-on experience in building complex data pipelines and flow optimizations
Able to understand the data, draw insights, make recommendations, and identify any data quality issues upfront
Experience with test-driven development and software test automation
Follows best coding practices and engineering guidelines as prescribed
Strong written and verbal communication skills with the ability to present complex technical information clearly and concisely to a variety of audiences
Posted 1 month ago
4 - 7 years
10 - 15 Lacs
Bengaluru
Work from Office
Develop, test, and support future-ready data solutions for customers across industry verticals. Develop, test, and support end-to-end batch and near real-time data flows/pipelines. Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance and information management, and associated technologies. Communicate risks and ensure understanding of these risks.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Minimum of 2+ years of related experience required.
Experience in modeling and business system designs.
Good hands-on experience with DataStage and cloud-based ETL services.
Great expertise in writing T-SQL code.
Well versed with data warehouse schemas and OLAP techniques.

Preferred technical and professional experience:
Ability to manage and make decisions about competing priorities and resources, and to delegate where appropriate.
Must be a strong team player/leader.
Ability to lead data transformation projects with multiple junior data engineers.
Strong oral, written, and interpersonal skills for interacting throughout all levels of the organization.
Ability to clearly communicate complex business problems and technical solutions.
Posted 1 month ago
4 - 7 years
10 - 15 Lacs
Bengaluru
Work from Office
Develop, test, and support future-ready data solutions for customers across industry verticals. Develop, test, and support end-to-end batch and near real-time data flows/pipelines. Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance and information management, and associated technologies. Communicate risks and ensure understanding of these risks.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Graduate with a minimum of 6+ years of related experience required.
Experience in modelling and business system designs.
Good hands-on experience with DataStage and cloud-based ETL services.
Great expertise in writing T-SQL code.
Well versed with data warehouse schemas and OLAP techniques.

Preferred technical and professional experience:
Ability to manage and make decisions about competing priorities and resources, and to delegate where appropriate.
Must be a strong team player/leader.
Ability to lead data transformation projects with multiple junior data engineers.
Strong oral, written, and interpersonal skills for interacting throughout all levels of the organization.
Ability to communicate complex business problems and technical solutions.
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.

About The Role
Role Description: Let’s do this. Let’s change the world. We are looking for a highly motivated expert Senior Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role calls for deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric.
Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture.
Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency.
Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance.
Ensure data security, compliance, and role-based access control (RBAC) across data environments.
Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets.
Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring.
Implement data virtualization techniques to provide seamless access to data across multiple storage systems.
Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals.
Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures.

Must-Have Skills:
Hands-on experience in data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
Proficiency in workflow orchestration and performance tuning on big data processing.
Strong understanding of AWS services.
Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures.
Ability to quickly learn, adapt, and apply new technologies.
Strong problem-solving and analytical skills.
Excellent communication and teamwork skills.
Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
Deep expertise in the biotech and pharma industries.
Experience in writing APIs to make data available to consumers.
Experience with SQL/NoSQL databases and vector databases for large language models.
Experience with data modeling and performance tuning for both OLAP and OLTP databases.
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications
9 to 12 years of Computer Science, IT, or related field experience.
AWS Certified Data Engineer preferred.
Databricks certification preferred.
Scaled Agile SAFe certification preferred.

Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Ability to learn quickly, be organized and detail oriented.
Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 1 month ago
With the increasing demand for data analysis and business intelligence, OLAP (Online Analytical Processing) jobs have become popular in India. OLAP professionals are responsible for designing, building, and maintaining OLAP databases to support data analysis and reporting activities for organizations. If you are looking to pursue a career in OLAP in India, here is a comprehensive guide to help you navigate the job market.
Most OLAP openings are concentrated in India's major IT hubs, cities known for having a high concentration of IT companies and organizations that require OLAP professionals.
The average salary range for OLAP professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 12 lakhs per annum.
Career progression in OLAP typically follows a trajectory from Junior Developer to Senior Developer, and then to a Tech Lead role. As professionals gain experience and expertise in OLAP technologies, they may also explore roles such as Data Analyst, Business Intelligence Developer, or Database Administrator.
In addition to OLAP expertise, professionals in this field are often expected to have knowledge of SQL, data modeling, ETL (Extract, Transform, Load) processes, data warehousing concepts, and data visualization tools such as Tableau or Power BI.
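To make the OLAP skill concrete: the core operation is aggregating a fact table across combinations of dimensions in one pass. A small runnable sketch using DuckDB (chosen here only for illustration; any engine with CUBE/GROUPING SETS support works the same way):

```python
import duckdb

con = duckdb.connect()
con.execute("CREATE TABLE sales (region TEXT, product TEXT, amount DOUBLE)")
con.execute("""
    INSERT INTO sales VALUES
        ('North', 'widgets', 100), ('North', 'gadgets', 150),
        ('South', 'widgets', 200), ('South', 'gadgets', 50)
""")

# CUBE computes every combination of the listed dimensions in one pass:
# per (region, product), per region, per product, and the grand total.
result = con.execute("""
    SELECT region, product, SUM(amount) AS total
    FROM sales
    GROUP BY CUBE (region, product)
    ORDER BY region NULLS LAST, product NULLS LAST
""").fetchall()
for row in result:
    print(row)
```

Rows with NULL in a dimension column are the subtotal and grand-total rows, which is exactly the roll-up behavior interviewers tend to probe on.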
As you prepare for OLAP job interviews in India, make sure to hone your technical skills, brush up on industry trends, and showcase your problem-solving abilities. With the right preparation and confidence, you can successfully land a rewarding career in OLAP in India. Good luck!