10.0 - 12.0 years
7 - 11 Lacs
bengaluru
Hybrid
Our Team: Customer Success Services (CSS) enables organizations to leverage their Oracle investments to extend into the cloud with greater value, choice, and confidence. Oracle delivers enterprise-grade, end-to-end managed cloud services across its broad portfolio of business applications, middleware, database, and hardware technologies. CSS has industry-leading expertise with the highest customer satisfaction to support customer business every step of the way. As part of our growth strategy, we are recruiting an experienced Technical Account Manager (TAM) with an extensive service delivery/operations background with Oracle products.

Our Ideal Candidate: Our ideal candidate will typically be expected to demonstrate the following attributes:
- Good understanding of SaaS ERP and experience with the Finance domain.
- Good technical skills in Oracle Database and Fusion Applications.
- Knowledge of, and experience in, Oracle SaaS-based applications.
- Understanding of technical architecture and cloud architecture.
- Strong customer-facing skills.
- Ability to multitask, maintain composure in high-stress/high-visibility situations, and change priorities as needed to accommodate a very dynamic business.
- Willingness to work in rotational shifts.
- An excellent team player, willing to learn new technologies, with strong problem-solving skills.
- Strong organizational, detail-oriented, and communication skills.
- University degree, with postgraduate technical or management qualifications or other relevant experience.
- OCI Certified / ITIL Foundation Certification in IT Service Management / PMP.

Your Qualifications: The candidate should have 11+ years of experience with Oracle products, including Technical/Functional and Project/Program Management experience, and a track record of delivering large-scale global application or infrastructure/database projects. High commitment to customers is a must.
The role will be based in Bangalore / Hyderabad / Delhi.

Your Responsibilities: Key tasks include, but are not limited to, the following:

SCOPE:
- Manage service delivery activities for customers' diversified set of Oracle products deployed on Cloud and on-premises.
- Act as a single point of contact between the customer and Oracle.
- Manage service delivery through a virtual team of resources.
- Serve as a product specialist and adviser on HCM and collaborate with other teams as needed.
- Establish priorities and service growth plans for customers aligned to Oracle's Cloud Strategy.
- Work on improvement initiatives as required.

ACCOUNTABILITIES:
- Review existing services and contracts and understand the scope thoroughly.
- Generate and manage the service delivery plan, key deliverables, marshal resources as required, RACI, risks, issues, and dependencies according to Oracle standards.
- Deliver upgrade projects within time, scope, and budget.
- Deliver regular business and operational reviews to key business stakeholders.
- Manage and coordinate changes in customer environments per customer strategy.

RESPONSIBILITIES: Service Planning; Technology Change Management; Contractual and Financial Control; Service Governance; Problem and Incident Management; Issue and Risk Management; Escalation Management; Best Practice Advice and Recommendations; Business Development and Renewals; Customer Satisfaction; Provide leadership, motivation, and direction.
Posted Date not available
7.0 - 12.0 years
4 - 8 Lacs
bengaluru
Work from Office
AZURE: Databricks (mandatory), Data Factory, Data Lake, ADB, ADF, ADLS, Spark, Synapse, Streaming, Synapse SQL pools, SQL coding, Synapse pipelines. Hands-on experience in SQL and DB-based validations. Hands-on experience in the ETL process. Coding experience in Scala/PySpark/Python.
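The SQL/DB-based validation work mentioned above boils down to comparing a source extract against the loaded target. A minimal sketch, using plain Python structures in place of real database result sets (the function name, columns, and sample rows are all illustrative, not from the posting):

```python
# Simplified sketch of ETL load validation: compare row counts and
# check mandatory columns between a source extract and its loaded
# target. All names and sample data here are hypothetical.

def validate_load(source_rows, target_rows, required_cols):
    """Return basic ETL validation results for a single load."""
    count_match = len(source_rows) == len(target_rows)
    # Collect (row index, column) pairs where a mandatory value is missing.
    null_violations = [
        (i, col)
        for i, row in enumerate(target_rows)
        for col in required_cols
        if row.get(col) in (None, "")
    ]
    return {"count_match": count_match, "null_violations": null_violations}

source = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 5.5}]
target = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}]

result = validate_load(source, target, required_cols=["id", "amount"])
print(result)  # counts match, but row 1 has a null "amount"
```

In practice the same two checks would be expressed as SQL count and NOT NULL queries against the source and target databases, or as PySpark DataFrame operations.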
Posted Date not available
5.0 - 10.0 years
5 - 9 Lacs
pune
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Microsoft Azure Data Services
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with teams to develop innovative solutions and ensure seamless application functionality.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Lead the application development process
- Implement best practices for application design and development
- Conduct code reviews and ensure code quality standards are met

Professional & Technical Skills:
- Must-have skills: Proficiency in Microsoft Azure Data Services, ADF, ADB, PySpark
- Strong understanding of cloud computing principles
- Experience with Azure DevOps for continuous integration and deployment
- Knowledge of Azure SQL Database and Azure Cosmos DB
- Hands-on experience with Azure Functions and Logic Apps

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Data Services
- This position is based at our Pune office (Kharadi); 3 days of WFO are mandatory
- A 15-year full-time education is required
Posted Date not available
4.0 - 9.0 years
11 - 15 Lacs
gurugram
Work from Office
Primary Responsibilities:
- Ensure that all the standard requirements have been met and perform the technical analysis
- Assist the project manager by compiling information from the current systems, analyzing the program requirements, and ensuring that they meet the specified time requirements
- Resolve moderate problems associated with the designed programs and provide technical guidance on complex programming
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Graduate degree or equivalent experience
- 4+ years of experience in database architecture, engineering, design, optimization, security, and administration, as well as data modeling, big data development, Extract, Transform, and Load (ETL) development, storage engineering, data warehousing, and data provisioning
- Good hands-on experience in Azure, ADF, and Databricks
- Experience in RDBMSs such as Oracle, SQL Server, DB2, etc.
- Knowledge of Azure, ADF, Databricks, Scala, Airflow, and DWH concepts
- Good knowledge of Unix and shell scripting
- Good understanding of Extract, Transform, and Load (ETL) architecture, Cloud, and Spark
- Good understanding of Data Architecture
- Understanding of business needs; able to design programs and systems that match complex business requirements and record all the specifications involved in the development and coding process
- Understanding of QA and testing-automation processes
- Proven ability to participate in agile development projects for batch and real-time data ingestion
- Proven ability to work with business and peers to define, estimate, and deliver functionality
- Proven ability to create proper technical documentation for work assignments

Preferred Qualifications:
- Knowledge of Agile, Automation, Big Data, DevOps, Python, Scala/PySpark programming, HDFS, Hive, and AI/ML
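The QA and testing-automation expectation above typically means covering each ETL transformation with automated assertions. A tiny hedged illustration of the pattern a pytest suite would automate; the transform, field names, and data are all hypothetical:

```python
# Hypothetical example of automated testing for an ETL transform:
# a small, pure transformation function plus assertion-style checks.

def to_upper_trimmed(records, field):
    """Normalize a text field: strip surrounding whitespace and upper-case it."""
    # Build new dicts so the input records are left untouched.
    return [{**r, field: r[field].strip().upper()} for r in records]

raw = [{"cust": " acme "}, {"cust": "globex"}]
clean = to_upper_trimmed(raw, "cust")

assert clean == [{"cust": "ACME"}, {"cust": "GLOBEX"}]
assert raw[0]["cust"] == " acme "  # input is not mutated
print("transform checks passed")
```

In a real pipeline the same assertions would run in CI against sample batches before a change is promoted.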
Posted Date not available
4.0 - 7.0 years
14 - 18 Lacs
gurugram
Work from Office
What will you do:

Pricing / Reporting Support:
- Timely and accurate preparation and circulation of Margin Statements (MS) to all relevant stakeholders
- Evaluation of the margin statements and discussions with the business on discount offered/margin earned
- Strong collaboration with stakeholders to ensure periodic validation of, and updates to, pricing and cost in the system
- Support in conducting pricing-policy training for all stakeholders
- Support in KPI reporting activities

Pricing Controls Support:
- Strict implementation of the pricing policy and adherence to the DOA while preparing MS
- Ensure compliance with NPPA and other regulatory frameworks while preparing MS
- Ensure a complete audit trail and documentation of all off-contract adjustments, reconciled with actual shipments, resulting in smooth audits with no major findings
- Weekly upload of all off-contract adjustments, with supporting documents and approvals, in IC hub
- Collaborate with commercial teams (Sales, CE, Tender, Service) on guidance and strict implementation of the pricing policy
- Ensure audit readiness for all margin-statement approvals

Automation Support for Key Initiatives:
- Lead discussions with the IT team on automation of the MS report and other manual work to reduce manual intervention; find ways to leverage IT tools
- Partner with the ICM and IT teams on automation of the pricing-approval workflow
- Support the FP&A team in discussions with the IT team on key automation initiatives
- Partner with IT, Ops, Business, and APAC to implement key IT initiatives

What you need:
- MS Excel proficiency and experience working with Oracle JDE
- High attention to detail and accuracy
- End-to-end ownership of, and accountability for, assigned tasks; monitor own progress in achieving milestones
- Identify areas for improvement by challenging the usual ways of doing things, ensuring value addition to the process and broadening the innovation focus
- Minimum experience: 4 to 7 years
Posted Date not available
15.0 - 20.0 years
18 - 22 Lacs
hyderabad
Work from Office
Project Role: Data Platform Architect
Project Role Description: Architects the data platform blueprint and implements the design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary: We are looking for a highly skilled Senior Databricks Developer with extensive experience in building and managing modern data platforms on Azure using Lakehouse architecture. The ideal candidate will have strong hands-on experience in PySpark, SQL, and Azure Data Services, and a proven track record in developing scalable and efficient data pipelines. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure seamless integration between systems and data models, while also addressing any challenges that arise during the implementation process. You will engage in discussions with stakeholders to gather requirements and provide insights that drive the overall architecture of the data platform, ensuring it meets the needs of the organization effectively.
Roles & Responsibilities:
- Design, build, and optimize scalable data pipelines using Databricks, PySpark, and SQL on Azure.
- Implement Lakehouse architecture for structured data ingestion, processing, and storage.
- Build and manage Delta Lake tables and perform schema evolution and versioning.
- Work with Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), and Azure Synapse for data integration and transformation.
- Collaborate with data architects, analysts, and business teams to understand requirements and design efficient data solutions.
- Optimize performance of large-scale data pipelines and troubleshoot data quality or latency issues.
- Contribute to best practices around coding, testing, and data engineering workflows.
- Document technical solutions and maintain code repositories.
- Expected to be an SME.
- Develop and maintain documentation related to the data platform architecture.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong hands-on experience with PySpark and advanced SQL for large-scale data processing.
- Deep expertise in the Databricks platform, including notebooks, jobs, and Delta Lake.
- Solid experience with Azure cloud services: ADLS Gen2, ADF, Azure Synapse, Key Vault, etc.
- Knowledge of Lakehouse architecture concepts, implementation, and governance.
- Experience with version control tools like Git and CI/CD pipelines.
- Excellent problem-solving and debugging skills.
- Strong communication and collaboration abilities across cross-functional teams.
- Strong understanding of data integration techniques and best practices.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance frameworks and compliance requirements.
- Ability to design scalable and efficient data pipelines.

Good to Have:
- Experience in data quality checks and validation frameworks.
- Exposure to DevOps and Infrastructure as Code (IaC) in Azure environments.
- Familiarity with data governance tools like Unity Catalog or Azure Purview.
- Knowledge of Delta Live Tables (DLT) is a plus.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- A 15-year full-time education is required.
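The schema-evolution responsibility described in this posting corresponds to Delta Lake's mergeSchema behavior: new columns arriving in a batch are added to the table schema, and existing rows are back-filled with nulls. A simplified stand-in without a Spark cluster, using plain column lists and dicts (all names are illustrative):

```python
# Simplified stand-in for Delta Lake schema evolution (mergeSchema):
# columns present in an incoming batch but absent from the table are
# appended to the schema, and existing rows are back-filled with None.

def evolve_schema(table_schema, table_rows, batch_schema, batch_rows):
    # Keep existing column order, then append any new batch columns.
    merged = list(table_schema) + [c for c in batch_schema if c not in table_schema]
    # Back-fill old rows and project new rows onto the merged schema.
    backfilled = [{c: row.get(c) for c in merged} for row in table_rows]
    appended = [{c: row.get(c) for c in merged} for row in batch_rows]
    return merged, backfilled + appended

schema, rows = evolve_schema(
    ["id", "qty"], [{"id": 1, "qty": 3}],
    ["id", "qty", "region"], [{"id": 2, "qty": 7, "region": "EU"}],
)
print(schema)  # ['id', 'qty', 'region']
```

With real Delta Lake the equivalent is a write with `option("mergeSchema", "true")`; this sketch only shows the merge rule itself.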
Posted Date not available
4.0 - 6.0 years
6 - 10 Lacs
chennai
Work from Office
Experience: Minimum 3 Years in Oracle ADF Summary: We are looking for an Application Developer with strong expertise in Oracle ADF to design, build, and configure applications as per business requirements. The role demands independent performance, active collaboration in team discussions, and a problem-solving mindset to ensure application efficiency and a great user experience. Roles & Responsibilities: Work independently and act as a Subject Matter Expert (SME) in Oracle ADF. Participate actively in team discussions and contribute technical insights. Provide effective solutions to work-related challenges. Assist in documenting application processes and workflows. Stay updated with the latest technologies and best practices to enhance applications. Professional & Technical Skills: Must-Have: Proficiency in Oracle ADF. Strong grasp of application development methodologies. Hands-on experience with database management and SQL. Familiarity with HTML, CSS, and JavaScript. Strong troubleshooting and application issue resolution skills. Additional Information: Minimum 3 years of experience in Oracle ADF. 15 years of full-time education is mandatory. Position based at Chennai office (onsite role).
Posted Date not available
8.0 - 13.0 years
8 - 12 Lacs
hyderabad
Work from Office
Skill: Oracle Finance Technical Consultant
Location: Bangalore
Notice Period: Immediate
Employment Type:

Basic requirements:
- More than 8 years of technical development experience with Oracle EBS 12.1/12.2 modules: Oracle Financials and Project Accounting.
- Expertise in designing and delivering highly scalable solutions, and the ability to effectively communicate the solution to customers and to internal teams.
- Excellent experience in Oracle application modules including GL, AP, AR, FA, PA, and CE; exposure to PO, INV, OM, and CM; and expert knowledge of the R12 table structure.
- Good technical experience in the Procure to Pay and Order to Cash life cycles, with the related open interfaces and APIs.
- Very good hands-on experience in writing SQL; PL/SQL Oracle packages, hierarchical queries, cursors, procedures/functions, and triggers; Oracle Reports; OAF/ADF; BI Publisher reports; Workflows; and shell scripting.
- Good knowledge of conversion and of inbound and outbound interface programs using PL/SQL, interface tables, and Oracle standard APIs.
- Strong knowledge of the Application Implementation Methodology (AIM), Application Object Library (AOL) implementation standards, Trading Community Architecture, and the Software Development Life Cycle.
- Development experience in the EBS environment with Reports, Interfaces, Conversions, Extensions, Workflow (RICEW), forms, alerts, profiles, value sets, flexfields, lookups, messages, menus, responsibilities, workflows, and APIs (data conversion / third-party integrations).
- Experience with Web ADI, Fixed Assets, Cash Management, and Oracle Projects (PA) would be an added advantage.
- Experience with end-user interaction for requirements gathering, understanding customer requirements, and working with multiple groups to coordinate and carry out technical activities, including new development, maintenance, and production support.
- Good knowledge of Agile methodologies.
- Liaise with internal business groups (such as finance, sourcing, and internal audit) and IT groups (such as database administration, infrastructure support, and development) in performing support activities, system configuration, and standard Oracle functionality analysis for enhancement requests.
- Excellent at quick adaptation and problem solving, with client interaction/customer handling, conflict management, escalation handling, analytical, and debugging skills.
Posted Date not available
3.0 - 5.0 years
5 - 7 Lacs
chennai
Work from Office
Summary: We are looking for an Application Developer with strong expertise in Oracle ADF to design, build, and configure applications as per business requirements. The role demands independent performance, active collaboration in team discussions, and a problem-solving mindset to ensure application efficiency and a great user experience. Roles & Responsibilities: Work independently and act as a Subject Matter Expert (SME) in Oracle ADF. Participate actively in team discussions and contribute technical insights. Provide effective solutions to work-related challenges. Assist in documenting application processes and workflows. Stay updated with the latest technologies and best practices to enhance applications. Professional & Technical Skills: Must-Have: Proficiency in Oracle ADF. Strong grasp of application development methodologies. Hands-on experience with database management and SQL. Familiarity with HTML, CSS, and JavaScript. Strong troubleshooting and application issue resolution skills. Additional Information: Minimum 3 years of experience in Oracle ADF. 15 years of full-time education is mandatory.
Posted Date not available
4.0 - 7.0 years
5 - 10 Lacs
chennai
Work from Office
Summary: We are looking for an Application Developer with strong expertise in Oracle ADF to design, build, and configure applications as per business requirements. The role demands independent performance, active collaboration in team discussions, and a problem-solving mindset to ensure application efficiency and a great user experience. Roles & Responsibilities: Work independently and act as a Subject Matter Expert (SME) in Oracle ADF. Participate actively in team discussions and contribute technical insights. Provide effective solutions to work-related challenges. Assist in documenting application processes and workflows. Stay updated with the latest technologies and best practices to enhance applications. Professional & Technical Skills: Must-Have: Proficiency in Oracle ADF. Strong grasp of application development methodologies. Hands-on experience with database management and SQL. Familiarity with HTML, CSS, and JavaScript. Strong troubleshooting and application issue resolution skills. Additional Information: Minimum 3 years of experience in Oracle ADF. 15 years of full-time education is mandatory. Position based at Chennai office (onsite role).
Posted Date not available
8.0 - 13.0 years
13 - 17 Lacs
hyderabad
Work from Office
Primary Responsibilities:
- Ensure that all the standard requirements have been met and perform the technical analysis
- Assist the project manager by compiling information from the current systems, analyzing the program requirements, and ensuring that they meet the specified time requirements
- Resolve moderate problems associated with the designed programs and provide technical guidance on complex programming
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Graduate degree or equivalent experience
- 8+ years of experience in database architecture, engineering, design, optimization, security, and administration, as well as data modeling, big data development, Extract, Transform, and Load (ETL) development, storage engineering, data warehousing, and data provisioning
- Good hands-on experience in Azure, ADF, and Databricks
- Experience in RDBMSs such as Oracle, SQL Server, DB2, etc.
- Knowledge of Azure, ADF, Databricks, DWH (Netezza), Unix/Linux, Airflow, and AI tool usage in delivery
- Understanding of business needs; able to design programs and systems that match complex business requirements and record all the specifications involved in the development and coding process
- Understanding of QA and testing-automation processes
- Good understanding of Extract, Transform, and Load (ETL) architecture, Cloud, and Spark
- Good understanding of Data Architecture
- Proven ability to participate in agile development projects for batch and real-time data ingestion
- Proven ability to work with business and peers to define, estimate, and deliver functionality
- Proven ability to create proper technical documentation for work assignments

Preferred Qualifications:
- Knowledge of Agile, Automation, Big Data, DevOps, Python, Scala/PySpark programming, Power BI, and GenAI
Posted Date not available
7.0 - 11.0 years
14 - 19 Lacs
noida
Work from Office
We are seeking a Data Services Architect to lead the design and implementation of data architecture solutions for a logistics enterprise. The role requires a good understanding of canonical architecture patterns, medallion (bronze-silver-gold) data architecture, and end-to-end data processing pipelines, enabling analytics-ready data from raw files stored in AWS S3, with Databricks as the core processing platform. The ideal candidate will collaborate closely with business stakeholders to build domain knowledge.

Key Responsibilities:
- Canonical architecture design.
- Medallion architecture implementation (Databricks).
- Raw data mapping and transformation: build ingestion pipelines to process raw files from AWS S3 into the Bronze layer; map and transform raw data into structured canonical formats aligned with logistics business rules; implement scalable DevOps pipelines using PySpark, Delta Lake, and Databricks Workflows.
- Business engagement: work closely with business SMEs, operations teams, and product managers to understand logistics processes, business entities, and KPIs; translate business requirements into data models and semantic layers.
- Collaboration & leadership: guide data engineers in the development and maintenance of data pipelines; provide architectural oversight and best practices for data processing and integration.

Mandatory Competencies:
- Cloud - Azure: Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Storage, Event Hubs, HDInsight
- Data Science and Machine Learning: Databricks
- Big Data: PySpark
- DevOps/Configuration Mgmt - Cloud Platforms: AWS
- Behavioral: communication and collaboration
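The "map raw data into structured canonical formats" step described above is, at its core, a field-mapping transform applied between the Bronze and Silver layers. A tiny sketch with plain Python dicts standing in for DataFrame rows; the canonical field names and sample record are invented for the illustration:

```python
# Illustrative canonical mapping for the Bronze -> Silver step:
# raw source fields are renamed and typed into a canonical shipment
# record. The mapping and field names here are hypothetical.

CANONICAL_MAP = {"shp_no": "shipment_id", "org": "origin", "dst": "destination"}

def to_canonical(raw_record):
    """Project a raw record onto the canonical schema, dropping extras."""
    rec = {canon: raw_record.get(src) for src, canon in CANONICAL_MAP.items()}
    rec["shipment_id"] = str(rec["shipment_id"])  # canonical IDs are strings
    return rec

raw = {"shp_no": 1042, "org": "BOM", "dst": "DEL", "extra": "ignored"}
print(to_canonical(raw))
```

In the Databricks pipeline the same mapping would be a PySpark `select` with column renames and casts over the Bronze Delta table, with the canonical schema agreed with the logistics business SMEs.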
Posted Date not available
3.0 - 5.0 years
2 - 5 Lacs
mumbai
Work from Office
Main skill is VBCS, with exposure to OIC (Visual Builder Cloud Service / Oracle Integration Cloud). Should have proficiency in OIC and VBCS. Should be good at writing SQL queries. Should have experience in maintenance and production support. Should be able to understand the end-to-end system architecture. Good communication skills. Should be able to meet SLAs as per the agreement. Should be flexible to learn new technologies. Good to have: knowledge of the Agile way of working and JIRA usage. Location: Mumbai, Bangalore, Hyderabad, Pune, Kolkata.
Posted Date not available
2.0 - 5.0 years
5 - 9 Lacs
pune
Work from Office
About The Role
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Microsoft SQL Server, Data Governance
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by delivering high-quality applications that enhance operational efficiency.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of application features.

Professional & Technical Skills:
- Primary skills: SQL Server, stored procedures, ADF/ETL, functions, performance tuning, index optimization, Data Governance, Data Quality, Data Management, etc.
- Secondary skills: .NET Core Web API, Azure, data streaming, caching, Kafka

Additional Information:
- The candidate should have a minimum of 8 years of experience in the above technologies.
- This position is based at our Pune office.
- A 15-year full-time education is required.
Posted Date not available
7.0 - 11.0 years
18 - 22 Lacs
hyderabad
Work from Office
About The Role
Project Role: Data Platform Architect
Project Role Description: Architects the data platform blueprint and implements the design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: We are looking for a highly skilled Senior Databricks Developer with extensive experience in building and managing modern data platforms on Azure using Lakehouse architecture. The ideal candidate will have strong hands-on experience in PySpark, SQL, and Azure Data Services, and a proven track record in developing scalable and efficient data pipelines. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure seamless integration between systems and data models, while also addressing any challenges that arise during the implementation process. You will engage in discussions with stakeholders to gather requirements and provide insights that drive the overall architecture of the data platform, ensuring it meets the needs of the organization effectively.
Roles & Responsibilities:
- Design, build, and optimize scalable data pipelines using Databricks, PySpark, and SQL on Azure.
- Implement Lakehouse architecture for structured data ingestion, processing, and storage.
- Build and manage Delta Lake tables and perform schema evolution and versioning.
- Work with Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), and Azure Synapse for data integration and transformation.
- Collaborate with data architects, analysts, and business teams to understand requirements and design efficient data solutions.
- Optimize performance of large-scale data pipelines and troubleshoot data quality or latency issues.
- Contribute to best practices around coding, testing, and data engineering workflows.
- Document technical solutions and maintain code repositories.
- Expected to be an SME.
- Develop and maintain documentation related to the data platform architecture.
Professional & Technical Skills:
- Must Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong hands-on experience with PySpark and advanced SQL for large-scale data processing.
- Deep expertise in the Databricks platform, including notebooks, jobs, and Delta Lake.
- Solid experience with Azure cloud services: ADLS Gen2, ADF, Azure Synapse, Key Vault, etc.
- Knowledge of Lakehouse architecture concepts, implementation, and governance.
- Experience with version control tools like Git and CI/CD pipelines.
- Excellent problem-solving and debugging skills.
- Strong communication and collaboration abilities across cross-functional teams.
- Strong understanding of data integration techniques and best practices.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance frameworks and compliance requirements.
- Ability to design scalable and efficient data pipelines.
Good to Have:
- Experience in data quality checks and validation frameworks.
- Exposure to DevOps and Infrastructure as Code (IaC) in Azure environments.
- Familiarity with data governance tools like Unity Catalog or Azure Purview.
- Knowledge of Delta Live Tables (DLT) is a plus.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.
Qualification: 15 years full time education
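The "schema evolution" responsibility above has a simple core: when a new batch arrives with extra columns, the table schema widens additively and older rows read as NULL for the new columns. This is roughly what Delta Lake's `mergeSchema` option does; the sketch below illustrates the concept in plain Python (no Spark or Delta involved), with invented field names.

```python
def merge_batch(schema, rows, batch):
    """Append `batch` to `rows`, widening `schema` additively: columns not
    seen before are added to the end, in the Delta-style mergeSchema spirit."""
    for record in batch:
        for col in record:
            if col not in schema:
                schema.append(col)
    rows.extend(batch)
    return schema, rows

def to_table(schema, rows):
    """Materialize rows against the current schema (missing values -> None)."""
    return [tuple(r.get(c) for c in schema) for r in rows]

schema = ["id", "amount"]
rows = [{"id": 1, "amount": 10.0}]
# Day 2: the upstream feed starts sending a "currency" column.
schema, rows = merge_batch(schema, rows, [{"id": 2, "amount": 5.0, "currency": "EUR"}])
print(schema)                 # ['id', 'amount', 'currency']
print(to_table(schema, rows)) # [(1, 10.0, None), (2, 5.0, 'EUR')]
```

In real Delta Lake the same behavior comes from writing with `option("mergeSchema", "true")`; the point here is only the additive-widening contract.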
Posted Date not available
7.0 - 11.0 years
18 - 22 Lacs
hyderabad
Work from Office
About The Role
Project Role: Data Platform Architect
Project Role Description: Architects the data platform blueprint and implements the design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Data Platform Architect, you will be responsible for architecting the data platform blueprint and implementing the design, which includes the relevant data platform components. The role requires extensive experience in building and managing modern data platforms on Azure using Lakehouse architecture. The ideal candidate will have strong hands-on experience in PySpark, SQL, and Azure Data Services, and a proven track record in developing scalable and efficient data pipelines. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also addressing any challenges that arise during the implementation process. You will engage in discussions with stakeholders to gather requirements and provide insights that drive the overall data strategy of the organization.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Design, build, and optimize scalable data pipelines using Databricks, PySpark, and SQL on Azure.
- Implement Lakehouse architecture for structured data ingestion, processing, and storage.
- Build and manage Delta Lake tables and perform schema evolution and versioning.
- Work with Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), and Azure Synapse for data integration and transformation.
- Collaborate with data architects, analysts, and business teams to understand requirements and design efficient data solutions.
- Optimize performance of large-scale data pipelines and troubleshoot data quality or latency issues.
- Contribute to best practices around coding, testing, and data engineering workflows.
- Develop and maintain documentation related to data architecture and design.
Professional & Technical Skills:
- Must Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong hands-on experience with PySpark and advanced SQL for large-scale data processing.
- Deep expertise in the Databricks platform, including notebooks, jobs, and Delta Lake.
- Solid experience with Azure cloud services: ADLS Gen2, ADF, Azure Synapse, Key Vault, etc.
- Knowledge of Lakehouse architecture concepts, implementation, and governance.
- Experience with version control tools like Git and CI/CD pipelines.
- Excellent problem-solving and debugging skills.
- Strong communication and collaboration abilities across cross-functional teams.
- Strong understanding of data integration techniques and best practices.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance frameworks and compliance requirements.
- Ability to design scalable and efficient data pipelines.
Additional Information:
- The candidate should have good experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.
Qualification: 15 years full time education
Posted Date not available
12.0 - 15.0 years
8 - 12 Lacs
bengaluru
Work from Office
Roles and Responsibilities:
- Solid experience leading data teams in developing data engineering platforms.
- Good working knowledge of Azure Databricks, Databricks Delta Lake, Azure Data Factory (ADF), ADF metadata-driven pipelines, Azure DevOps, MS SQL Server, SSMS, Python, and PySpark.
- Knowledge of Azure connectivity in general, Azure Key Vault, Azure Functions, and Azure integration with Active Directory.
- Knowledge of Azure Synapse Analytics, Synapse Studio, Azure Functions, ADLS.
- SQL coding to write queries, stored procedures, views, functions, etc.; SQL Server Management Studio DB configuration experience.
- Experience in SQL Server and SSIS packages; ETL implementation experience; knowledge of Azure SQL, Azure Functions, Azure Data Factory (ADF), ADF metadata-driven pipelines, and Azure DevOps.
- Contribute to the delivery of data quality reviews, including data cleansing where required, to ensure integrity and quality.
- Understand data models, data storage models, and data migration to manage data within the organization for a small to medium-sized project.
- Resolve escalated design and implementation issues of moderate to high complexity.
- Analyze the latest industry trends, such as cloud computing and distributed processing, and begin to infer the risks and benefits of their use in business.
- Work toward adherence to the relevant data engineering and data modelling processes, procedures, and standards.
- Design and maintain the overall architecture for the product(s)/platform(s) in alignment with enterprise architecture and platform strategy (including architectural roadmaps).
- Hands-on experience in building platforms and services using private/public cloud IaaS, PaaS, and SaaS platforms.
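The "ADF metadata-driven pipelines" mentioned above follow one pattern: instead of hard-coding a pipeline per table, a single generic loop reads a metadata list (source, target, load type) and dispatches each entry. In ADF this is typically a Lookup activity feeding a ForEach; the sketch below shows only that control flow in plain Python, with invented table names.

```python
# Metadata table: one row per dataset to load. In ADF this would come from
# a Lookup activity against a control table; here it is a hard-coded list.
PIPELINE_METADATA = [
    {"source": "crm.customers", "target": "stg_customers", "load": "full"},
    {"source": "erp.orders",    "target": "stg_orders",    "load": "incremental"},
]

def run_entry(entry):
    """Dispatch one metadata row to the appropriate load strategy."""
    if entry["load"] == "full":
        return f"TRUNCATE {entry['target']}; COPY {entry['source']} -> {entry['target']}"
    return f"MERGE {entry['source']} -> {entry['target']} ON watermark"

plan = [run_entry(e) for e in PIPELINE_METADATA]
for step in plan:
    print(step)
```

Adding a new table to the platform then means adding one metadata row, not authoring a new pipeline, which is the main operational win of the pattern.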
Posted Date not available
5.0 - 8.0 years
5 - 9 Lacs
mumbai
Work from Office
Roles & Responsibilities:
- Resource must have 5+ years of hands-on experience in Azure cloud development (ADF + Databricks) - mandatory.
- Strong in Azure SQL; good to have knowledge of Synapse / Analytics.
- Experience working on Agile projects and familiarity with Scrum/SAFe ceremonies.
- Good communication skills - written & verbal; can work directly with the customer.
- Ready to work in 2nd shift; flexible.
- Defines, designs, develops, and tests software components/applications using Microsoft Azure: Databricks, ADF, ADL, Hive, Python, Spark SQL, PySpark.
- Expertise in Azure Databricks, ADF, ADL, Hive, Python, Spark, PySpark.
- Strong T-SQL skills with experience in Azure SQL DW.
- Experience handling structured and unstructured datasets.
- Experience in data modeling and advanced SQL techniques.
- Experience implementing Azure Data Factory pipelines using the latest technologies and techniques.
- Good exposure to application development.
- The candidate should work independently with minimal supervision.
Posted Date not available
5.0 - 8.0 years
3 - 6 Lacs
chennai
Work from Office
Mandatory Skills: Azure DevOps, CI/CD pipelines, Kubernetes, Docker, cloud tech stack and ADF, Spark, Databricks, Jenkins, building Java-based applications, Java Web, Git, J2EE.
Requirements:
- Design and develop automated deployment arrangements by leveraging configuration management technology.
- Implement various development, testing, and automation tools, and IT infrastructure.
- Select and deploy appropriate CI/CD tools.
Required Candidate Profile:
- Implementing various development, testing, and automation tools, and IT infrastructure.
- Selecting and deploying appropriate CI/CD tools.
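The CI/CD pipelines listed above all share fail-fast stage semantics: run build, test, and deploy in order, and stop at the first failure. A real pipeline would be a Jenkinsfile or Azure DevOps YAML; this stdlib Python sketch only models that contract, with illustrative stage names.

```python
def run_pipeline(stages):
    """Run (name, step) pairs in order, stopping at the first failure --
    the fail-fast behavior Jenkins and Azure DevOps stages implement."""
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:
            break
    return results

ok_run = run_pipeline([("build", lambda: True), ("test", lambda: True), ("deploy", lambda: True)])
bad_run = run_pipeline([("build", lambda: True), ("test", lambda: False), ("deploy", lambda: True)])
print(ok_run)   # [('build', True), ('test', True), ('deploy', True)]
print(bad_run)  # [('build', True), ('test', False)] -- deploy never ran
```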
Posted Date not available
2.0 - 4.0 years
9 - 14 Lacs
hyderabad
Work from Office
Overview
We are PepsiCo. We believe that acting ethically and responsibly is not only the right thing to do, but also the right thing to do for our business. At PepsiCo, we aim to deliver top-tier financial performance over the long term by integrating sustainability into our business strategy, leaving a positive imprint on society and the environment. We call this Winning with pep+ (PepsiCo Positive). For more information on PepsiCo and the opportunities it holds, visit www.pepsico.com.
PepsiCo Data Analytics & AI Overview
With data deeply embedded in our DNA, PepsiCo Data, Analytics and AI (DA&AI) transforms data into consumer delight. We build and organize business-ready data that allows PepsiCo's leaders to solve their problems with the highest degree of confidence. Our platform of data products and services ensures data is activated at scale. This enables new revenue streams, deeper partner relationships, new consumer experiences, and innovation across the enterprise.
The Data Science Pillar in DA&AI is the organization where Data Scientists and ML Engineers report within the broader D+A organization. DS will also lead, facilitate, and collaborate with the larger DS community in PepsiCo, provide the talent for the development and support of DS components and their life cycle within DA&AI products, and support pre-engagement activities as requested and validated by the prioritization framework of DA&AI.
Data Scientist - Gurugram and Hyderabad
The role will work on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools, Spark, Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Machine Learning Services and Pipelines.
Responsibilities:
- Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and Machine Learning models in scope
- Collaborate with data engineers and ML engineers to understand data and models and leverage various advanced analytics capabilities
- Ensure on-time and on-budget delivery which satisfies project requirements, while adhering to enterprise architecture standards
- Use big data technologies to help process data and build scaled data pipelines (batch to real time)
- Automate the end-to-end ML lifecycle with Azure Machine Learning and Azure/AWS/GCP pipelines
- Set up cloud alerts, monitors, dashboards, and logging, and troubleshoot machine learning infrastructure
- Automate ML model deployments
Qualifications:
- Minimum 3 years of hands-on work experience in data science / machine learning
- Minimum 3 years of SQL experience
- Experience in DevOps and Machine Learning (ML), with hands-on experience with one or more cloud service providers
- BE/BS in Computer Science, Math, Physics, or other technical fields
- Data Science: hands-on experience and strong knowledge of building supervised and unsupervised machine learning models
- Programming Skills: hands-on experience in statistical programming languages like Python and database query languages like SQL
- Statistics: good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators
- Any Cloud: experience in Databricks and ADF is desirable
- Familiarity with Spark, Hive, Pig is an added advantage
- Model deployment experience will be a plus
- Experience with version control systems like GitHub and CI/CD tools
- Experience in exploratory data analysis
- Knowledge of MLOps / DevOps and deploying ML models is required
- Experience using MLflow, Kubeflow, etc. will be preferred
- Experience executing and contributing to MLOps automation infrastructure is good to have
- Exceptional analytical and problem-solving skills
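The statistics bullets above (regression, estimators) can be grounded with the smallest supervised model there is: one-variable ordinary least squares. Real work on this team would use scikit-learn or Spark ML; this stdlib version, on invented data points, just makes the estimator concrete.

```python
def fit_ols(xs, ys):
    """Fit y = alpha + beta*x by ordinary least squares (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (
        sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        / sum((x - mx) ** 2 for x in xs)
    )
    alpha = my - beta * mx
    return alpha, beta

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.1, 8.0]   # roughly y = 2x, with a little noise
alpha, beta = fit_ols(xs, ys)
print(round(beta, 2))  # 1.99 -- close to the true slope of 2
```

The slope formula is the sample covariance of x and y over the variance of x, which is also the maximum likelihood estimator under Gaussian noise, tying it back to the "maximum likelihood estimators" bullet.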
Posted Date not available
2.0 - 4.0 years
15 - 25 Lacs
hyderabad
Work from Office
Overview
The Data Science Team works on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools, Spark, Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Azure Pipelines. You will be part of a collaborative interdisciplinary team around data, where you will be responsible for our continuous delivery of statistical/ML models. You will work closely with process owners, product owners, and final business users. This will provide you the correct visibility and understanding of the criticality of your developments.
Responsibilities:
- Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and Machine Learning models in scope
- Active contributor to code & development in projects and services
- Partner with data engineers to ensure data access for discovery and that proper data is prepared for model consumption
- Partner with ML engineers working on industrialization
- Communicate with business stakeholders in the process of service design, training, and knowledge transfer
- Support large-scale experimentation and build data-driven models
- Refine requirements into modelling problems
- Influence product teams through data-based recommendations
- Research state-of-the-art methodologies
- Create documentation for learnings and knowledge transfer
- Create reusable packages or libraries
- Ensure on-time and on-budget delivery which satisfies project requirements, while adhering to enterprise architecture standards
- Leverage big data technologies to help process data and build scaled data pipelines (batch to real time)
- Implement the end-to-end ML lifecycle with Azure Databricks and Azure Pipelines
- Automate ML model deployments
Qualifications:
- BE/B.Tech in Computer Science, Maths, or other technical fields
- Overall 2-4 years of experience working as a Data Scientist
- 2+ years of experience building solutions in the commercial or supply chain space
- 2+ years working in a team to deliver production-level analytic solutions
- Fluent in git (version control); understanding of Jenkins and Docker is a plus
- Fluent in SQL syntax
- 2+ years of experience in statistical/ML techniques to solve supervised (regression, classification) and unsupervised problems
- 2+ years of experience in developing business-problem-related statistical/ML modeling with industry tools, with primary focus on Python or PySpark development
- Data Science: hands-on experience and strong knowledge of building supervised and unsupervised machine learning models; knowledge of time series / demand forecast models is a plus
- Programming Skills: hands-on experience in statistical programming languages like Python and PySpark, and database query languages like SQL
- Statistics: good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators
- Cloud (Azure): experience in Databricks and ADF is desirable; familiarity with Spark, Hive, Pig is an added advantage
- Business storytelling and communicating data insights in a business-consumable format; fluent in one visualization tool
- Strong communication and organizational skills, with the ability to deal with ambiguity while juggling multiple priorities
- Experience with Agile methodology for teamwork and analytics product creation
- Experience in Reinforcement Learning is a plus
- Experience in simulation and optimization problems in any space is a plus
- Experience with Bayesian methods is a plus
- Experience with causal inference is a plus
- Experience with NLP is a plus
- Experience with Responsible AI is a plus
- Experience with distributed machine learning is a plus
- Experience in DevOps, with hands-on experience with one or more cloud service providers: AWS, GCP, Azure (preferred)
- Model deployment experience is a plus
- Experience with version control systems like GitHub and CI/CD tools
- Experience in exploratory data analysis
- Knowledge of MLOps / DevOps and deploying ML models is preferred
- Experience using MLflow, Kubeflow, etc. will be preferred
- Experience executing and contributing to MLOps automation infrastructure is good to have
- Exceptional analytical and problem-solving skills
- Stakeholder engagement: BU, vendors
- Experience building statistical models in the retail or supply chain space is a plus
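The MLflow/Kubeflow preference above rests on one core idea: experiment tracking is durable logging of (params, metrics) per run, so results stay comparable across retrains. A minimal stdlib sketch of that contract follows; the file layout and `RunLogger` class are invented for illustration and are not the MLflow API.

```python
import json
import os
import tempfile

class RunLogger:
    """Toy experiment tracker: one JSON file per run, MLflow-style in
    spirit only (params + metrics, addressable by run id)."""

    def __init__(self, root):
        self.root = root

    def log_run(self, run_id, params, metrics):
        path = os.path.join(self.root, f"{run_id}.json")
        with open(path, "w") as f:
            json.dump({"params": params, "metrics": metrics}, f)
        return path

root = tempfile.mkdtemp()
logger = RunLogger(root)
path = logger.log_run("run-001", {"lr": 0.01}, {"rmse": 0.42})
with open(path) as f:
    record = json.load(f)
print(record["metrics"])  # {'rmse': 0.42}
```

With real MLflow the same call shape exists as `mlflow.log_params` / `mlflow.log_metrics` inside a run context, plus artifact and model storage on top.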
Posted Date not available
8.0 - 13.0 years
10 - 14 Lacs
hyderabad
Work from Office
Employment Type: Contract
Skills: Azure Data Factory, SQL, Azure Blob, Azure Logic Apps
Posted Date not available
12.0 - 17.0 years
17 - 22 Lacs
noida
Work from Office
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
As part of our strategic initiative to build a centralized capability around data and cloud engineering, we are establishing a dedicated Azure Cloud Data Engineering practice under the RMI – Optum Advisory umbrella. This team will be at the forefront of designing, developing, and deploying scalable data solutions in the cloud, primarily using the Microsoft Azure platform. The practice will serve as a centralized team, driving innovation, standardization, and best practices across cloud-based data initiatives. New hires will play a pivotal role in shaping the future of our data landscape, collaborating with cross-functional teams, clients, and stakeholders to deliver impactful, end-to-end solutions.
Primary Responsibilities:
- Design and implement secure, scalable, and cost-effective cloud data architectures using cloud services such as Azure Data Factory (ADF), Azure Databricks, Azure Storage, Key Vault, Snowflake, Synapse Analytics, MS Fabric/Power BI, etc.
- Define and lead data & cloud strategy, including migration plans, modernization of legacy systems, and adoption of new cloud capabilities
- Collaborate with clients to understand business requirements and translate them into optimal cloud architecture solutions, balancing performance, security, and cost
- Evaluate and compare cloud services (e.g., Databricks, Snowflake, Synapse Analytics) and recommend the best-fit solutions based on project needs and organizational goals
- Lead the full lifecycle of data platform and product implementations, from planning and design to deployment and support
- Drive cloud migration initiatives, ensuring a smooth transition from on-premise systems while engaging and upskilling existing teams
- Lead and mentor a team of cloud and data engineers, fostering a culture of continuous learning and technical excellence
- Plan and guide the team in building Proofs of Concept (POCs), exploring new cloud capabilities, and validating emerging technologies
- Establish and maintain comprehensive documentation for cloud setup processes, architecture decisions, and operational procedures
- Work closely with internal and external stakeholders to gather requirements, present solutions, and ensure alignment with business objectives
- Ensure all cloud solutions adhere to security best practices, compliance standards, and governance policies
- Prepare case studies and share learnings from implementations to build organizational knowledge and improve future projects
- Build and analyze data engineering processes, and act as an SME to troubleshoot performance issues and suggest solutions for improvement
- Develop and maintain CI/CD processes using Jenkins, GitHub, GitHub Actions, Maven, etc.
- Build a test framework for Databricks notebook jobs for automated testing before code deployment
- Continuously explore new Azure services and capabilities; assess their applicability to business needs
- Create detailed documentation for cloud processes, architecture, and implementation patterns
- Contribute to full lifecycle project implementations, from design and development to deployment and monitoring
- Identify solutions to non-standard requests and problems
- Mentor and support existing on-prem developers in the cloud environment
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
- Undergraduate degree or equivalent experience
- 12+ years of overall experience in Data & Analytics engineering
- 10+ years of solid experience working as an Architect designing data platforms using Azure, Databricks, Snowflake, ADF, Data Lake, Synapse Analytics, Power BI, etc.
- 10+ years of experience working with data platforms or products using PySpark and Spark SQL
- In-depth experience designing complex Azure architecture for various business needs, and the ability to come up with efficient designs and solutions
- Solid experience with CI/CD tools such as Jenkins, GitHub, GitHub Actions, Maven, etc.
- Experience in team leadership and people management
- Highly proficient and hands-on experience with Azure services, Databricks/Snowflake development, etc.
- Excellent communication and stakeholder management skills
Preferred Qualifications:
- Snowflake, Airflow experience
- Power BI development experience
- Experience or knowledge of health care concepts: E&I, M&R, C&S LOBs, Claims, Members, Provider, Payers, Underwriting
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
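The "test framework for Databricks notebook jobs" responsibility usually comes down to factoring notebook logic into plain functions that can be asserted on before deployment, so CI can run them without a cluster. A hedged sketch: the `clean_claims` transformation and its rules are invented for illustration, not taken from the posting.

```python
def clean_claims(rows):
    """Hypothetical notebook transformation, extracted into a testable
    function: drop rows without a member id; normalize amounts to float."""
    return [
        {"member_id": r["member_id"], "amount": float(r["amount"])}
        for r in rows
        if r.get("member_id")
    ]

# The kind of pre-deployment check a notebook-job test framework would run.
sample = [
    {"member_id": "M1", "amount": "125.50"},
    {"member_id": None, "amount": "10"},   # rejected: no member id
]
cleaned = clean_claims(sample)
print(cleaned)  # [{'member_id': 'M1', 'amount': 125.5}]
```

In a Databricks setup the same function would be imported both by the notebook and by a pytest suite in CI, keeping the notebook itself a thin orchestration layer.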
Posted Date not available