
954 OLAP Jobs - Page 29

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 12.0 years

16 - 25 Lacs

Mumbai

Work from Office

• Maintain a modular architecture across clients (Qt, Flutter), mobile apps, microservices & the API gateway
• Architect systems integrating WebSockets, Kafka & Pinot/Superset (see the sketch below)
• Guide cross-functional teams
• Integrate BFF layers and APIs; manage cross-platform deployment

Required Candidate profile
• Golang, C++, or other languages
• React, Flutter & client-side architectures; REST, WebSocket, GraphQL, Kafka & APIs; cloud-native design, distributed databases, OLAP tools, CI/CD pipelines, IaC
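Illustrative aside: the streaming-to-client integration this role describes typically bridges a Kafka topic to WebSocket subscribers. Below is a minimal, hedged Python sketch of such a bridge using the aiokafka and websockets libraries; the topic name, broker address, and port are invented placeholders, not details from the posting.

```python
# Minimal sketch: fan Kafka events out to WebSocket clients through a
# gateway-style service. Topic, broker, and port are illustrative only.
import asyncio
import json

import websockets                      # pip install websockets
from aiokafka import AIOKafkaConsumer  # pip install aiokafka

CLIENTS = set()                        # currently connected WebSocket sessions

async def register(ws):
    """Track each client so Kafka events can be broadcast to all of them."""
    CLIENTS.add(ws)
    try:
        await ws.wait_closed()
    finally:
        CLIENTS.remove(ws)

async def pump_events():
    """Consume events and push each one to every connected client."""
    consumer = AIOKafkaConsumer("trade-events", bootstrap_servers="localhost:9092")
    await consumer.start()
    try:
        async for msg in consumer:
            payload = json.dumps({"offset": msg.offset, "value": msg.value.decode()})
            websockets.broadcast(CLIENTS, payload)  # non-blocking fan-out
    finally:
        await consumer.stop()

async def main():
    async with websockets.serve(register, "0.0.0.0", 8765):
        await pump_events()

if __name__ == "__main__":
    asyncio.run(main())
```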

Posted 2 months ago

Apply

5.0 - 10.0 years

35 - 40 Lacs

Chennai

Work from Office

Job Summary
We are seeking a seasoned Data Modeller with deep expertise in designing, implementing, and optimizing data models for both OLTP (Online Transaction Processing) and OLAP (Online Analytical Processing) systems. The ideal candidate will have hands-on experience with Google Cloud Platform (GCP) database services such as AlloyDB, CloudSQL, and BigQuery, and will leverage strong data modeling and performance tuning skills to deliver scalable, high-performing enterprise data solutions.

Key Responsibilities:
- Design and develop conceptual, logical, and physical data models for OLTP and OLAP systems, ensuring alignment with business goals and technology roadmaps.
- Architect end-to-end enterprise data warehouse and operational data store solutions, following best practices in dimensional and normalized modeling (Inmon, Kimball, 3NF); a star-schema sketch follows this posting.
- Implement database optimization strategies, including indexing, partitioning, and data sharding, to enhance performance, scalability, and availability.
- Lead cloud migration and modernization initiatives, focusing on GCP services such as AlloyDB, CloudSQL, and BigQuery to support hybrid and cloud-native data architectures.
- Collaborate closely with data engineers, BI developers, and business stakeholders to translate complex business requirements into robust data models and schemas.
- Use advanced data modeling tools such as DBSchema, ERwin, or Visio to document and manage database schemas and metadata.
- Establish and enforce data governance, data quality, and metadata management frameworks to ensure data accuracy and compliance.
- Stay abreast of emerging trends in data architecture, cloud databases, and analytics to continuously improve data platform capabilities.

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field; Master's preferred.
- 10+ years of experience in data modeling and architecture for OLTP and OLAP environments.
- Strong proficiency in conceptual, logical, and physical data modeling techniques and methodologies (3NF, Inmon, Kimball).
- Extensive hands-on experience with data modeling tools such as DBSchema, ERwin, or equivalent.
- Expertise in indexing, partitioning, query optimization, and data sharding for high-volume transactional and analytical databases.
- Proven experience designing and optimizing cloud databases on Google Cloud Platform (AlloyDB, CloudSQL, BigQuery).
- Strong SQL skills, with proficiency in PL/SQL, T-SQL, and performance tuning.
- Familiarity with ETL frameworks and BI tools such as SSIS, Power BI, Tableau, and Azure Data Factory is a plus.
- Excellent problem-solving skills with keen attention to detail and data quality.
- Strong communication, collaboration, and stakeholder management skills.
- Experience with financial services or mutual fund industry data models is advantageous.
- Familiarity with Agile/Scrum methodologies.
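Illustrative aside: for readers unfamiliar with the Kimball-style dimensional modeling and partitioning this posting emphasizes, here is a minimal sketch holding BigQuery-flavored DDL in a Python script. All dataset, table, and column names are invented for illustration.

```python
# Minimal sketch of a Kimball-style star schema with BigQuery-flavored DDL.
# Names are invented; a real project would run these via the BigQuery client.
DIM_CUSTOMER = """
CREATE TABLE IF NOT EXISTS dw.dim_customer (
  customer_key   INT64 NOT NULL,   -- surrogate key used by the fact table
  customer_id    STRING,           -- natural/business key from the source
  segment        STRING,
  effective_from DATE,             -- SCD Type 2 validity window
  effective_to   DATE
);
"""

FACT_ORDERS = """
CREATE TABLE IF NOT EXISTS dw.fact_orders (
  order_date   DATE NOT NULL,
  customer_key INT64 NOT NULL,     -- FK to dim_customer
  product_key  INT64 NOT NULL,     -- FK to dim_product
  quantity     INT64,
  amount       NUMERIC
)
PARTITION BY order_date            -- prunes scans for date-bounded queries
CLUSTER BY customer_key;           -- co-locates rows on the frequent join key
"""

if __name__ == "__main__":
    # In practice:  from google.cloud import bigquery
    #               bigquery.Client().query(FACT_ORDERS).result()
    print(DIM_CUSTOMER, FACT_ORDERS)
```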

Posted 2 months ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

TCS is conducting a Virtual Drive on 6th June, 2025
Location: Hyderabad
Years of Experience: 5-9 yrs
Notice Period: 0/15/30 days

Responsibilities:
• Ideally 5+ years of experience as a data warehouse engineer
• Financial sector knowledge is a plus
• Solid understanding of data warehouse concepts, especially data modelling
• Proficient in SQL and ETL in an OLAP environment, preferably DB2 (or equivalent Oracle experience); see the sketch after this posting
• Experience coding Unix (AIX) shell scripts
• Comprehensive understanding of, and ability to apply, data engineering techniques, from event streaming and real-time analytics to computational grids and graph processing engines
• Curious to learn new technologies and practices, reuse strategic platforms and standards, evaluate options, and make decisions with long-term sustainability in mind
• Strong communicator, from making presentations to technical writing

Interested candidates, kindly go through the above job description and share your updated CV.

Thanks & Regards
Shilpa Silonee
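Illustrative aside: a common warehouse ETL pattern behind requirements like these is the watermark-driven incremental load. The sketch below uses Python's stdlib sqlite3 driver as a stand-in for DB2; the table and column names are invented.

```python
# Minimal sketch of a watermark-driven incremental load. sqlite3 stands in
# for DB2; src_orders, dw_orders, and etl_watermark are invented names.
import sqlite3

def incremental_load(conn: sqlite3.Connection) -> None:
    cur = conn.cursor()
    # 1. Read the high-water mark left by the previous run.
    cur.execute("SELECT COALESCE(MAX(loaded_ts), '1970-01-01') FROM etl_watermark")
    watermark = cur.fetchone()[0]
    # 2. Extract only rows newer than the watermark.
    cur.execute(
        "SELECT id, amount, updated_ts FROM src_orders WHERE updated_ts > ?",
        (watermark,),
    )
    rows = cur.fetchall()
    # 3. Load into the warehouse table, then advance the watermark in the
    #    same transaction so a crash cannot double-load.
    cur.executemany(
        "INSERT INTO dw_orders (id, amount, updated_ts) VALUES (?, ?, ?)", rows
    )
    if rows:
        cur.execute(
            "INSERT INTO etl_watermark (loaded_ts) VALUES (?)",
            (max(r[2] for r in rows),),
        )
    conn.commit()
```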

Posted 2 months ago

Apply

0.0 - 2.0 years

2 - 4 Lacs

Mumbai

Work from Office

Who We Are
Eide Bailly is one of the top 25 CPA and business advisory firms in the nation. We have over 40 offices in 15 states across the Midwest and western United States and offer our staff and Partners the opportunity to serve a variety of industries. In 2019, we extended our operations to Mumbai, India, and desire to expand our shared services segment there. Founded in 1917, our culture is the foundation of who we are, and we pride ourselves on supporting our employees to help them achieve their goals and pursue their interests both in the office and at home. At Eide Bailly we are passionate about the clients we serve, the work we do, and most importantly, having fun while we do it!

Why You'll Love Working Here
At Eide Bailly we believe respect is how to treat everyone, not just those you want to impress. Our culture focuses on collaboration to achieve career growth. Innovation is highly encouraged, which is where programs like our EB Xchange originate. This program allows interested tax and audit employees to complete a rotation into a specialty area. We promote happy employees by making work/life balance a priority along with being actively involved in our communities. Our dedication to service can be seen through the Firm's decision to match charitable donations made by employees, as well as providing opportunities to volunteer throughout the year. Most importantly, we like to have fun! We offer a professional and fun work environment with frequent lunch and learns, socials, contests, outings and other events.

A Typical Day in the Life
A typical day as a Data Engineer Associate might include the following:
- Builds ETL processes to extract data from multiple sources to build and maintain the data lake.
- Works with service areas to transform data by auditing and staging data for analytic application.
- Creates data models, including fact and dimension tables, to support reporting technologies such as OLAP cubes, Tableau, Power BI and Alteryx.
- Uses problem solving and creativity to apply appropriate techniques in the creation of robust, scalable, and reproducible data processing assets.
- Leverages source control and pipeline tools such as GitHub, GitLab, or Git for ADO to maintain documentation and controls of what is delivered to the target data engineering environment, with a full description of how the information is delivered, to what environment, and at what stage of development.
- Provides guidance and reviews systems for security and governance compliance.
- Ensures timely and accurate performance on assigned projects.
- Maintains compliance with project budgets, turnaround times, and deadlines.
- Monitors platform and application stability and initiates incidents and escalations to both internal teams and external vendors.
- Performs troubleshooting, root cause analysis, implementation, and retrospective activities to overcome break-fix incidents and harden data pipelines to be more robust, stable, and scalable.

Who You Are
- Bachelor's degree in Information Systems, Computer Science, a related field, or equivalent work experience.
- Minor in mathematics, statistics, accounting, finance, or other quantitative discipline preferred.
- 2+ years work experience as a data engineer, software developer, or equivalent technology profession.
- Knowledge of: ETL processes and tools; data orchestration and preparation best practices; MPP systems such as MS Data Factory & Synapse, Snowflake, Delta Lake (MS Fabric preferred); streaming technologies such as Microsoft, Kafka, Kinesis, Lambda and Spark; the SDLC lifecycle; the Medallion Lakehouse data warehousing model (Bronze > Silver > Gold) — see the sketch after this posting.
- Experience in at least one of the following: MS SQL (SQL Server Integration Services, SQL Server Analysis Services, SQL Server Reporting Services); the Python language; cloud data technologies (Azure and AWS); data modeling using the Kimball method (star schema).
- Exposure to DevOps tools and concepts to schedule, build, and release deliverables in a controlled and reproducible environment.
- Comfortable with uncertainty: projects and assignments will change rapidly, so you must be flexible enough to accommodate changing priorities and timelines.
- Ability to work independently and motivation to take on assigned tasks without hands-on input.
- Motivation to learn and apply a complex skillset for data engineering or data science.
- Strong interpersonal skills; can maintain effective working relationships with staff, partners, the public and external agencies.
- Ability to adapt to the project management lifecycle, working with program managers, business analysts and other data professionals.
- Intellectual and analytical curiosity: initiative to dig into the "why" of various results, and a desire to grow responsibility and become a domain expert and strategic thought leader.

What to Expect Next
We'll be in touch! If you look like the right fit for our position, one of our recruiters will reach out to schedule a phone interview with you to learn more about your career interests and goals. In the meantime, we encourage you to check us out on Facebook, Twitter, Instagram, TikTok or our About Us page.
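Illustrative aside: the Medallion Lakehouse model (Bronze > Silver > Gold) named above is often expressed as staged PySpark transformations. A minimal sketch, with invented paths and column names:

```python
# Minimal PySpark sketch of the Medallion (Bronze > Silver > Gold) flow.
# Storage paths and columns are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw ingestion, stored as-is so the load can be replayed.
bronze = spark.read.json("s3://lake/bronze/orders/")

# Silver: cleaned and conformed -- types enforced, duplicates dropped.
silver = (bronze
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .dropDuplicates(["order_id"]))

# Gold: business-level aggregate that feeds OLAP cubes / Power BI.
gold = (silver.groupBy(F.to_date("order_ts").alias("order_date"))
              .agg(F.sum("amount").alias("daily_revenue")))

gold.write.mode("overwrite").parquet("s3://lake/gold/daily_revenue/")
```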

Posted 2 months ago

Apply

0 years

0 Lacs

Udupi, Karnataka, India

On-site

Cloud Leader (Jr. Data Architect)
- 7+ yrs of IT experience
- Should have worked on at least two structured databases (SQL/Oracle/Postgres) and one NoSQL database
- Should be able to work with the Presales team, proposing the best solution/architecture
- Should have design experience on BQ/Redshift/Synapse
- Manage the end-to-end product life cycle, from proposal to delivery, and regularly check with delivery on architecture improvement
- Should be aware of security protocols for in-transit data and encryption/decryption of PII data
- Good understanding of analytics tools for effective analysis of data
- Should have been part of the production deployment and production support teams
- Experience with Big Data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
- Experience in ETL and data warehousing
- Experience and firm understanding of relational and non-relational databases like MySQL, MS SQL Server, Postgres, MongoDB, Cassandra, etc.
- Experience with cloud platforms like AWS, GCP, and Azure
- Experience with workflow management using tools like Apache Airflow (see the sketch after this list)

Preferred:
- Aware of design best practices for OLTP and OLAP systems
- Should be part of the team designing the DB and pipeline
- Should be able to propose the right architecture and Data Warehouse/Data Mesh approaches
- Should be aware of data sharing and multi-cloud implementation
- Should have exposure to load testing methodologies, debugging pipelines, and delta load handling
- Worked on heterogeneous migration projects
- Experience on multiple cloud platforms
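Illustrative aside: delta load handling is commonly scheduled through Apache Airflow, which this posting names. A minimal hedged DAG sketch follows; the DAG id and task logic are invented, and the `schedule` argument assumes Airflow 2.4+.

```python
# Minimal Airflow sketch of a daily delta-load pipeline. The dag_id, task,
# and extraction logic are invented placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_delta(ds, **_):
    # 'ds' is Airflow's logical date; a real task would extract rows changed
    # on this date and merge them into the target table.
    print(f"loading delta for {ds}")

with DAG(
    dag_id="daily_delta_load",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="load_delta", python_callable=load_delta)
```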

Posted 2 months ago

Apply

2.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
- A minimum of 2 years working as an MDM consultant, or directly with clients, leveraging popular tools like Informatica, Reltio, etc.
- A minimum of 2 years in a role that took ownership of the organization's data assets to provide users with high-quality data that is accessible in a consistent manner.
- A minimum of 2 years facilitating data cleansing and enrichment through data de-duplication and construction (see the sketch after this posting).
- A minimum of 2 years in a role that captured the current state of the system, encompassing processes such as data discovery, profiling, and inventories.
- A minimum of 2 years in a role that defined processes including data classification, business glossary creation, and business rule definition.
- A minimum of 2 years in a role that applied processes aimed at operationalizing and ensuring compliance with policies, including automating rules, workflows, collaboration, etc.
- Experience leading measurement and monitoring to determine the value generated, including impact analysis, data lineage, proactive monitoring, operational dashboards, and business value.
- Experience performing: Master Data Management; Metadata Management; Data Management and Integration; Systems Development Lifecycle (SDLC); Data Modeling Techniques and Methodologies; Database Management; Database Technical Design and Build; Extract, Transform & Load (ETL) Tools; Cloud Data Architecture; Online Analytical Processing (OLAP); Data Processes; Data Architecture Principles; Data Architecture Estimation.

Mandatory Skill Sets: Master Data Management, ETL, Database Management
Preferred Skill Sets: Master Data Management, ETL, Database Management
Years of Experience Required: 2-4 years
Education Qualification: BE, B.Tech, MCA, M.Tech
Degrees/Fields of Study Required: Master of Engineering, Bachelor of Engineering
Required Skills: Data Management
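Illustrative aside: the de-duplication step in an MDM match/merge flow reduces candidate records to golden records. Real platforms such as Informatica or Reltio use configurable match rules; the Python stand-in below uses stdlib fuzzy string matching with an invented threshold and invented records.

```python
# Minimal sketch of MDM-style match/merge de-duplication using stdlib
# fuzzy matching. The threshold, fields, and records are invented.
from difflib import SequenceMatcher

def is_match(a: dict, b: dict, threshold: float = 0.9) -> bool:
    """Treat two customer records as the same golden-record candidate."""
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return name_sim >= threshold and a["postcode"] == b["postcode"]

records = [
    {"name": "Acme Corp.", "postcode": "400001"},
    {"name": "ACME Corp",  "postcode": "400001"},
    {"name": "Apex Ltd",   "postcode": "560001"},
]

golden = []
for rec in records:
    if not any(is_match(rec, g) for g in golden):
        golden.append(rec)   # the first occurrence becomes the survivor

print(golden)                # two golden records remain
```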

Posted 2 months ago

Apply

8.0 - 18.0 years

13 - 18 Lacs

Hyderabad

Work from Office

Career Category: Information Systems

Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission to serve patients living with serious illnesses drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Principal Data Engineer

What you will do
Let's do this. Let's change the world. We are seeking a seasoned Principal Data Engineer to lead the design, development, and implementation of our data strategy. The ideal candidate possesses a deep understanding of data engineering principles, coupled with strong leadership and problem-solving skills. As a Principal Data Engineer, you will architect and oversee the development of robust data platforms, while mentoring and guiding a team of data engineers.

Roles & Responsibilities:
- Possesses strong rapid prototyping skills and can quickly translate concepts into working code.
- Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and standard methodologies.
- Design, develop, and implement robust data architectures and platforms to support business objectives.
- Oversee the development and optimization of data pipelines and data integration solutions.
- Establish and maintain data governance policies and standards to ensure data quality, security, and compliance.
- Architect and manage cloud-based data solutions, using AWS or other preferred platforms.
- Lead and motivate an impactful data engineering team to deliver exceptional results.
- Identify, analyze, and resolve complex data-related challenges.
- Collaborate closely with business stakeholders to understand data requirements and translate them into technical solutions.
- Stay abreast of emerging data technologies and explore opportunities for innovation.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 8 to 10 years of computer science and engineering experience (other engineering fields considered); OR Bachelor's degree and 10 to 14 years; OR Diploma and 14 to 18 years.
- Demonstrated proficiency in using cloud platforms (AWS, Azure, GCP) for data engineering solutions.
- Strong understanding of cloud architecture principles and cost optimization strategies.
- Proficient in Python, PySpark, and SQL.
- Hands-on experience with big data ETL performance tuning (see the sketch after this posting).
- Proven ability to lead and develop impactful data engineering teams.
- Strong problem-solving, analytical, and critical thinking skills to address complex data challenges.

Preferred Qualifications:
- Experienced with data modeling and performance tuning for both OLAP and OLTP databases
- Experienced with Apache Spark and Apache Airflow
- Experienced with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
- Experienced with AWS, GCP or Azure cloud services

Professional Certifications:
- AWS Certified Data Engineer preferred
- Databricks certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
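Illustrative aside: big data ETL performance tuning in PySpark, as this posting requires, often comes down to avoiding shuffles and small files. A minimal sketch with invented table paths and columns:

```python
# Minimal sketch of two common Spark ETL tuning moves: broadcasting a small
# dimension to avoid a shuffle join, and repartitioning on the write key.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

facts = spark.read.parquet("s3://lake/fact_events/")   # large table
dims  = spark.read.parquet("s3://lake/dim_assets/")    # small table

# Broadcast hint: ships the small table to every executor, skipping the
# expensive shuffle a sort-merge join would otherwise require.
joined = facts.join(F.broadcast(dims), "asset_id")

# Repartition by the partition column before writing to avoid small files.
(joined.repartition("event_date")
       .write.mode("overwrite")
       .partitionBy("event_date")
       .parquet("s3://lake/curated_events/"))
```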

Posted 2 months ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
Role Title: Team Lead and Lead Developer – Backend and Database (Node)
Role Type: Full time
Role Reports to: Chief Technology Officer
Category: Regular / Fixed Term
Job Location: 8th floor, E Block, IITM Research Park, Taramani

Job Overview
We're seeking an experienced Senior Backend and Database Developer and Team Lead for our backend team. The ideal candidate will combine technical expertise in full-stack development with extensive experience in backend development, strong process optimization skills, and innovative thinking to drive team efficiency and product quality.

Job Specifications
Educational Qualifications: Any UG/PG graduates
Experience: 5+ years

Key Job Responsibilities

Software architecture design
- Architect and oversee development of the backend in Node
- Familiarity with MVC and design patterns, and a strong grasp of data structures
- Basic database theory: ACID vs eventually consistent, OLTP vs OLAP
- Different types of databases: relational stores, K/V stores, text stores, graph DBs, vector DBs, time-series DBs

Database design & structures
- Experience with data modeling concepts including normalization, normal forms, star schema (management and evolution), and dimensional modeling
- Expertise in SQL DBs (MySQL, PostgreSQL) and NoSQL DBs (MongoDB, Redis)
- Data pipeline design based on operational principles: dealing with failures, restarts, reruns, pipeline changes, and various file storage formats (a rerun-safety sketch follows this posting)

Backend & API frameworks & other services
- Develop and maintain RESTful, JSON-RPC and other APIs for various applications
- Understanding of backend JS frameworks such as Express.js and NestJS, and documentation tools like Postman and Swagger
- Experience with webhooks, callbacks and other event-driven systems, and third-party solution integrations (Firebase, Google Maps, Amplify and others)

QA and testing
- Automation testing and tooling knowledge for application functionality validation and QA
- Experience with testing routines and fixes using various testing tools (JMeter, Artillery or others)

Load balancers, caching and serving
- Experience with event serving (Apache Kafka and others), caching and processing (Redux, Apache Spark or other frameworks) and scaling (Kubernetes and other systems)
- Experience with orchestrators like Airflow for huge data workloads; scripting and automation for various purposes including scheduling and logging

Production, deployment & monitoring
- Experience with CI/CD pipelines using tools like Jenkins/CircleCI, and Docker for containerization
- Experience in deployment and monitoring of apps on cloud platforms (e.g., AWS, Azure) and bare-metal configurations

Documentation, version control and ticketing
- Version control with Git, and ticketing bugs and features with tools like Jira or Confluence
- Backend documentation and referencing with tools like Swagger and Postman
- Experience in creating ERDs for various data types and models, and documentation of evolving models

Behavioral competencies
- Attention to detail: ability to maintain accuracy and precision in records, reports, and analysis, ensuring compliance with applicable standards and regulations.
- Integrity and ethics: commitment to upholding ethical standards, confidentiality, and honesty in all practices and interactions with stakeholders.
- Time management: effective prioritization of tasks, efficient allocation of resources, and timely completion of assignments to meet sprint deadlines and achieve goals.
- Adaptability and flexibility: capacity to adapt to changing business environments, new technologies, and evolving standards, while remaining flexible in response to unexpected challenges.
- Communication & collaboration: experience presenting to stakeholders and executive teams; ability to bridge technical and non-technical communication; excellence in written documentation and process guidelines for working with frontend teams.

Leadership competencies
- Team leadership and team building: lead and mentor a backend and database development team, including junior developers, and ensure good coding standards; follow Agile methodology and conduct Scrum meetings for sync-ups.
- Strategic thinking: ability to develop and implement long-term goals and strategies aligned with the organization's vision; ability to adopt new tech and manage tech debt to bring the team up to speed with client requirements.
- Decision-making: capable of making informed and effective decisions, considering both short-term and long-term impacts; insight into resource allocation and sprint building for various projects.
- Team building: ability to foster a collaborative and inclusive team environment, promoting trust and cooperation among team members.
- Code reviews: troubleshooting, weekly code reviews, feature documentation and versioning, and standards improvement.
- Improving team efficiency: research and integrate AI-powered development tools (GitHub Copilot, Amazon CodeWhisperer).

Added advantage points
- AI/ML applications: experience in AI/ML application backend workflows (e.g., MLflow) and serving the models
- Data processing & maintenance: familiarity with at least one data processing platform (e.g., Spark, Flink, Beam/Google Dataflow, AWS Batch); experience with Elasticsearch and other client-side data processing frameworks; understanding of data management and analytics with metadata catalogs (e.g., AWS Glue) and data warehousing (e.g., AWS Redshift)
- Data governance: quality control, policies around data duplication and definitions, and company-wide processes around security and privacy

Interested candidates can share updated resumes to the ID below.
Contact Person: Janani Santhosh, Senior HR Executive
Email ID: careers@plenome.com
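Illustrative aside: "dealing with failures, restarts, reruns" usually means making each pipeline step idempotent, so a crashed run can be replayed safely. This posting's stack is Node, but a compact Python sketch (with an invented run ledger and step names) shows the pattern:

```python
# Minimal sketch of rerun-safe (idempotent) pipeline steps. The ledger
# table and step names are invented placeholders.
import sqlite3

conn = sqlite3.connect("pipeline_state.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS completed_steps "
    "(run_id TEXT, step TEXT, PRIMARY KEY (run_id, step))"
)

def run_step(run_id: str, step: str, action) -> None:
    """Skip steps already recorded as done, so a crashed run can be replayed."""
    done = conn.execute(
        "SELECT 1 FROM completed_steps WHERE run_id = ? AND step = ?",
        (run_id, step),
    ).fetchone()
    if done:
        return               # rerun: nothing to redo for this step
    action()                 # the action itself must also be safe to retry
    conn.execute("INSERT INTO completed_steps VALUES (?, ?)", (run_id, step))
    conn.commit()

run_step("2025-06-01", "extract", lambda: print("extracting"))
run_step("2025-06-01", "load",    lambda: print("loading"))
```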

Posted 2 months ago

Apply

8.0 - 10.0 years

6 - 9 Lacs

Hyderābād

On-site

India - Hyderabad
JOB ID: R-216966
ADDITIONAL LOCATIONS: India - Hyderabad
WORK LOCATION TYPE: On Site
DATE POSTED: Jun. 02, 2025
CATEGORY: Information Systems

Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Principal Data Engineer

What you will do
Let's do this. Let's change the world. We are seeking a seasoned Principal Data Engineer to lead the design, development, and implementation of our data strategy. The ideal candidate possesses a deep understanding of data engineering principles, coupled with strong leadership and problem-solving skills. As a Principal Data Engineer, you will architect and oversee the development of robust data platforms, while mentoring and guiding a team of data engineers.

Roles & Responsibilities:
- Possesses strong rapid prototyping skills and can quickly translate concepts into working code.
- Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and standard methodologies.
- Design, develop, and implement robust data architectures and platforms to support business objectives.
- Oversee the development and optimization of data pipelines and data integration solutions.
- Establish and maintain data governance policies and standards to ensure data quality, security, and compliance.
- Architect and manage cloud-based data solutions, using AWS or other preferred platforms.
- Lead and motivate an impactful data engineering team to deliver exceptional results.
- Identify, analyze, and resolve complex data-related challenges.
- Collaborate closely with business stakeholders to understand data requirements and translate them into technical solutions.
- Stay abreast of emerging data technologies and explore opportunities for innovation.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 8 to 10 years of computer science and engineering experience (other engineering fields considered); OR Bachelor's degree and 10 to 14 years; OR Diploma and 14 to 18 years.
- Demonstrated proficiency in using cloud platforms (AWS, Azure, GCP) for data engineering solutions.
- Strong understanding of cloud architecture principles and cost optimization strategies.
- Proficient in Python, PySpark, and SQL.
- Hands-on experience with big data ETL performance tuning.
- Proven ability to lead and develop impactful data engineering teams.
- Strong problem-solving, analytical, and critical thinking skills to address complex data challenges.

Preferred Qualifications:
- Experienced with data modeling and performance tuning for both OLAP and OLTP databases
- Experienced with Apache Spark and Apache Airflow
- Experienced with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
- Experienced with AWS, GCP or Azure cloud services

Professional Certifications:
- AWS Certified Data Engineer preferred
- Databricks certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 2 months ago

Apply

7.0 years

0 Lacs

Chhattisgarh, India

Remote

As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed — we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day, and we have 3.44 PB of RAM deployed across our fleet of C* servers — and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About The Role
The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML engineering and insights activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets, including threat events collected via telemetry data, associated metadata, IT asset information, and contextual information about threat exposure based on additional processing. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyper-scale Data Lakehouse, built and owned by the Data Platform team. The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. As an engineer on this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines for data preparation, cataloging, feature engineering, model training, and model serving that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets.

What You'll Do
- Help design, build, and facilitate adoption of a modern Data+ML platform
- Modularize complex ML code into standardized and repeatable components
- Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring
- Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines
- Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines
- Review code changes from data scientists and champion software development best practices
- Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment

What You'll Need
- B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7+ years of related experience; or M.S. with 5+ years of experience; or Ph.D. with 6+ years of experience
- 3+ years of experience developing and deploying machine learning solutions to production
- Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory), and with supervised/unsupervised approaches: how, why, and when labeled data is created and used
- 3+ years of experience with ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc. (see the sketch after this posting)
- Experience building data platform products or features with Apache Spark, Flink or comparable tools in GCP; experience with Iceberg is highly desirable
- Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.)
- Production experience with infrastructure-as-code tools such as Terraform and FluxCD
- Expert-level experience with Python; Java/Scala exposure is recommended; ability to write Python interfaces that provide standardized and simplified access for data scientists to internal CrowdStrike tools
- Expert-level experience with CI/CD frameworks such as GitHub Actions
- Expert-level experience with containerization frameworks
- Strong analytical and problem-solving skills, capable of working in a dynamic environment
- Exceptional interpersonal and communication skills; ability to work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes

Experience With The Following Is Desirable
- Go
- Iceberg
- Pinot or other time-series/OLAP-style databases
- Jenkins
- Parquet
- Protocol Buffers/gRPC

Benefits Of Working At CrowdStrike
- Remote-friendly and flexible work culture
- Market leader in compensation and equity awards
- Comprehensive physical and mental wellness programs
- Competitive vacation and holidays for recharge
- Paid parental and adoption leaves
- Professional development opportunities for all employees regardless of level or role
- Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections
- Vibrant office culture with world-class amenities
- Great Place to Work Certified™ across the globe

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions, including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs, on valid job requirements.

If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
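Illustrative aside: MLflow, one of the ML-platform tools named above, tracks experiments as runs with logged parameters and metrics. A minimal hedged sketch; the experiment name, parameters, and metric value are invented placeholders, not CrowdStrike specifics.

```python
# Minimal sketch of experiment tracking with MLflow. All names and the
# metric value are illustrative placeholders.
import mlflow

mlflow.set_experiment("threat-model-demo")

with mlflow.start_run():
    mlflow.log_param("model_type", "gradient_boosting")
    mlflow.log_param("max_depth", 6)
    # ... train the model here ...
    mlflow.log_metric("auc", 0.91)   # placeholder result, not a real number
```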

Posted 2 months ago

Apply

6.0 years

0 Lacs

India

Remote

Job Description
We are looking for a highly capable Senior Full Stack Engineer to be a core contributor in developing our suite of product offerings. If you love working on complex problems and writing clean code, you will love this role. Our goal is to solve a messy problem elegantly and cost-effectively. Our job is to collect, categorize, and analyze semi-structured data from different sources (20 million+ products from 500+ websites into our catalog of 500 million+ products). We help our customers discover new patterns in their data that can be leveraged so that they can become more competitive and increase their revenue.

Essential Functions:
- Think like our customers: work with product and engineering leaders to define intuitive solutions
- Design customer-facing UI and back-end services for various business processes
- Develop high-performance applications by writing testable, reusable, and efficient code
- Implement effective security protocols, data protection measures, and storage solutions
- Improve the quality of our solutions: hold yourself and your team members accountable for writing high-quality, well-designed, maintainable software
- Own your work: take responsibility to shepherd your projects from idea through delivery into production
- Bring new ideas to the table: some of our best innovations originate within the team
- Guide and mentor others on the team

Technologies We Use:
- Languages: NodeJS/NestJS/TypeScript, SQL, React/Redux, GraphQL
- Infrastructure: AWS, Docker, Kubernetes, Terraform, GitHub Actions, ArgoCD
- Databases: Postgres, MongoDB, Redis, Elasticsearch, Trino, Iceberg
- Streaming and Queuing: Kafka, NATS, Keda

Qualifications
- 6+ years of professional software engineering/development experience
- Proficiency with architecting and delivering solutions within a distributed software platform
- Full stack engineering experience, including front-end frameworks (React/TypeScript, Redux) and backend technologies such as NodeJS/NestJS/TypeScript and GraphQL
- Proven ability to learn quickly, make pragmatic decisions, and adapt to changing business needs
- Proven ability to work effectively, and to prioritize and organize your work, in a highly dynamic environment
- Proven track record of working in highly distributed, event-driven systems
- Strong proficiency with RDBMS/NoSQL/Big Data solutions (Postgres, MongoDB, Trino, etc.)
- Solid understanding of data pipelines and workflow automation: orchestration tools, scheduling and monitoring
- Solid understanding of ETL/ELT and OLTP/OLAP concepts (see the sketch after this posting)
- Solid understanding of data lakes, data warehouses, and modeling practices (Data Vault, etc.), and experience leveraging data lake solutions (e.g. AWS Glue, DBT, Trino, Iceberg)
- Ability to clean, transform, and aggregate data using SQL or scripting languages
- Ability to design and estimate tasks, and coordinate work with other team members during iteration planning
- Solid understanding of AWS, Linux and infrastructure concepts
- Track record of lifting and challenging teammates to higher levels of achievement
- Experience measuring, driving and improving the software engineering process
- Good testing habits and a strong eye for quality
- Outstanding organizational, communication, and relationship-building skills conducive to driving consensus; able to work well in a cross-functional environment
- Experience working in an agile team environment
- Ownership: a sense of personal accountability and responsibility to drive execution from start to finish
- Drive adoption of Wiser's Product Delivery organization principles across the department

Bonus Points
- Experience with CQRS
- Experience with Domain-Driven Design
- Experience with C4 modeling
- Experience working within a retail or ecommerce environment
- Experience with AI coding agents (Windsurf, Cursor, Claude, ChatGPT, etc.) and prompt engineering

Why Join Wiser Solutions?
- Work on an industry-leading product trusted by top retailers and brands
- Be at the forefront of pricing intelligence and data-driven decision-making
- A collaborative, fast-paced environment where your impact is tangible
- Competitive compensation, benefits, and career growth opportunities

Additional Information
EEO STATEMENT: Wiser Solutions, Inc. is an Equal Opportunity Employer and prohibits discrimination, harassment, and retaliation of any kind. Wiser Solutions, Inc. is committed to the principle of equal employment opportunity for all employees and applicants, providing a work environment free of discrimination, harassment, and retaliation. All employment decisions at Wiser Solutions, Inc. are based on business needs, job requirements, and individual qualifications, without regard to race, color, religion, sex, national origin, family or parental status, disability, genetics, age, sexual orientation, veteran status, or any other status protected by state, federal, or local law. Wiser Solutions, Inc. will not tolerate discrimination, harassment, or retaliation based on any of these characteristics.
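Illustrative aside: the OLTP vs OLAP distinction called out above contrasts keyed single-row access with scan-and-aggregate queries. A self-contained Python sketch using an in-memory SQLite table as a stand-in for real stores; schema and rows are invented:

```python
# Minimal sketch contrasting OLTP and OLAP access patterns on one table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, sku TEXT, qty INT, price REAL)"
)
conn.executemany(
    "INSERT INTO orders (sku, qty, price) VALUES (?, ?, ?)",
    [("A1", 2, 9.99), ("B2", 1, 19.50), ("A1", 5, 9.99)],
)

# OLTP: point read on a single row, keyed by primary key.
row = conn.execute("SELECT * FROM orders WHERE id = ?", (2,)).fetchone()

# OLAP: scan-and-aggregate across many rows for an analytical question.
top = conn.execute(
    "SELECT sku, SUM(qty * price) AS revenue FROM orders "
    "GROUP BY sku ORDER BY revenue DESC"
).fetchall()

print(row, top)
```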

Posted 2 months ago

Apply

7.0 years

0 Lacs

Andhra Pradesh, India

Remote

As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed — we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day, and we have 3.44 PB of RAM deployed across our fleet of C* servers — and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About The Role
The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML engineering and insights activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets, including threat events collected via telemetry data, associated metadata, IT asset information, and contextual information about threat exposure based on additional processing. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyper-scale Data Lakehouse, built and owned by the Data Platform team. The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. As an engineer on this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines for data preparation, cataloging, feature engineering, model training, and model serving that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets.

What You'll Do
- Help design, build, and facilitate adoption of a modern Data+ML platform
- Modularize complex ML code into standardized and repeatable components
- Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring
- Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines
- Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines
- Review code changes from data scientists and champion software development best practices
- Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment

What You'll Need
- B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7+ years of related experience; or M.S. with 5+ years of experience; or Ph.D. with 6+ years of experience
- 3+ years of experience developing and deploying machine learning solutions to production
- Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory), and with supervised/unsupervised approaches: how, why, and when labeled data is created and used
- 3+ years of experience with ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc.
- Experience building data platform products or features with Apache Spark, Flink or comparable tools in GCP; experience with Iceberg is highly desirable
- Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.)
- Production experience with infrastructure-as-code tools such as Terraform and FluxCD
- Expert-level experience with Python; Java/Scala exposure is recommended; ability to write Python interfaces that provide standardized and simplified access for data scientists to internal CrowdStrike tools
- Expert-level experience with CI/CD frameworks such as GitHub Actions
- Expert-level experience with containerization frameworks
- Strong analytical and problem-solving skills, capable of working in a dynamic environment
- Exceptional interpersonal and communication skills; ability to work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes

Experience With The Following Is Desirable
- Go
- Iceberg
- Pinot or other time-series/OLAP-style databases
- Jenkins
- Parquet
- Protocol Buffers/gRPC

Benefits Of Working At CrowdStrike
- Remote-friendly and flexible work culture
- Market leader in compensation and equity awards
- Comprehensive physical and mental wellness programs
- Competitive vacation and holidays for recharge
- Paid parental and adoption leaves
- Professional development opportunities for all employees regardless of level or role
- Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections
- Vibrant office culture with world-class amenities
- Great Place to Work Certified™ across the globe

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions, including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs, on valid job requirements.

If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.

Posted 2 months ago

Apply

9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description Join our dynamic Digital Marketing Data Engineering team at Fanatics, where you'll play a critical role in shaping the big data ecosystem that powers our eCommerce and Digital Marketing platforms. As a full-time Staff Data Engineer, you’ll design, build, and optimize scalable data pipelines and architectures, ensuring seamless data flow and effective collection across cross-functional teams. You will also leverage your backend engineering skills to support API integrations and real-time data exchanges. What We're Looking For BTech/MTech/BS/MS in Computer Science or a related field, or equivalent practical experience. 9+ years of software engineering experience, with a strong track record in building data pipelines and big data solutions. At least 5 years of hands-on experience in Data Engineering roles. Proficiency in Big Data technologies such as: Apache Spark, Apache Iceberg, Amazon Redshift, Athena, EMR Apache Airflow, Apache Kafka AWS services (S3, Lambda) Expertise in at least one programming language: Scala, Java, or Python. Strong background in designing and building data models, integrating data from multiple sources, and developing robust ETL/ELT pipelines. Expert-level SQL programming skills. Proven data analysis and data modeling expertise, with the ability to create data-driven insights and effective visualizations. Familiarity with data quality, lineage, and governance frameworks. Energetic, detail-oriented, and collaborative, with a passion for delivering high-quality solutions. Bonus Points Experience in the e-commerce or retail domain. Knowledge of StarRocks or similar OLAP engines. Experience with Web Services, API integrations, third-party data exchanges, and streaming platforms. A passion for building scalable, high-quality analytics platforms and data products. About Us Fanatics is building a leading global digital sports platform. The company ignites the passions of global sports fans and maximizes the presence and reach for hundreds of sports partners globally by offering innovative products and services across Fanatics Commerce, Fanatics Collectibles, and Fanatics Betting & Gaming, allowing sports fans to Buy, Collect and Bet. Through the Fanatics platform, sports fans can buy licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods; collect physical and digital trading cards, sports memorabilia, and other digital assets; and bet as the company builds its Sportsbook and iGaming platform. Fanatics has an established database of over 100 million global sports fans, a global partner network with over 900 sports properties, including major national and international professional sports leagues, teams, players associations, athletes, celebrities, colleges, and college conferences, and over 2,000 retail locations, including its Lids retail business stores. As a market leader with more than 18,000 employees, and hundreds of partners, suppliers, and vendors worldwide, we take responsibility for driving toward more ethical and sustainable practices. We are committed to building an inclusive Fanatics community, reflecting and representing society at every level of the business, including our employees, vendors, partners and fans. Fanatics is also dedicated to making a positive impact in the communities where we all live, work, and play through strategic philanthropic initiatives. 
About The Team
Fanatics Commerce is a leading designer, manufacturer, and seller of licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods. It operates a vertically integrated platform of digital and physical capabilities for leading sports leagues, teams, colleges, and associations globally – as well as its flagship site, www.fanatics.com. Fanatics Commerce has a broad range of online, sports venue, and vertical apparel partnerships worldwide, including comprehensive partnerships with leading leagues, teams, colleges, and sports organizations across the world – including the NFL, NBA, MLB, NHL, MLS, Formula 1, and the Australian Football League (AFL); the Dallas Cowboys, Golden State Warriors, Paris Saint-Germain, Manchester United, Chelsea FC, and Tokyo Giants; the University of Notre Dame, University of Alabama, and University of Texas; and the International Olympic Committee (IOC), England Rugby, and the Union of European Football Associations (UEFA).
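As a flavor of the pipeline work this posting describes, here is a minimal, hypothetical PySpark sketch of a batch ingest into an Apache Iceberg table; the catalog, paths, and column names are illustrative assumptions, not Fanatics' actual stack.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumes a Spark 3.x cluster with the Iceberg runtime on the classpath
# and a catalog named "demo" already configured; all names are illustrative.
spark = SparkSession.builder.appName("orders-daily-batch").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/orders/2025-06-01/")  # hypothetical path

cleaned = (
    raw.dropDuplicates(["order_id"])                      # de-dupe on the business key
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("order_total") >= 0)                 # basic quality gate
)

# Append into an existing Iceberg table via the DataFrameWriterV2 API.
cleaned.writeTo("demo.sales.orders").append()
```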

Posted 2 months ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary:
Oversee and support the design, development, and implementation of data warehousing, reporting, and analytics solutions that deliver information efficiently, with the end goal of solving business challenges and achieving business objectives.
• Provide oversight and expertise in data modeling and database design
• Architect and develop data models, ETL packages, OLAP cubes, and reports utilizing Power BI, applying best practices to the development lifecycle
• Document source-to-target mappings, data dictionaries, and database design
• Identify areas of improvement to optimize data flows
• Installation and administration of Microsoft SQL Server
• Lead and mentor junior resources
• Support business development efforts (proposals and client presentations)
• Identify opportunities to add value to the client's organization by effective use of information management
• Build relationships with key stakeholders responsible for information and performance management in the client's organization
• Estimate effort and cost for a Microsoft Business Intelligence solution

Skills: MS BI Product Suite, Power BI, MS SQL, SSIS, SSRS
Experience: 7-15 years
Education/Qualification: Bachelor's degree in Computer Science or related field, or equivalent education and experience

Requirements:
• 7+ years of relevant consulting or industry experience
• 4+ years of hands-on development experience with the Microsoft Business Intelligence stack
• Ability to thrive in a fast-paced, dynamic, client-facing role where delivering solid work products to exceed high expectations is a measure of success
• Excellent leadership and interpersonal skills
• Eager to contribute in a team-oriented environment
• Strong prioritization and multi-tasking skills with a track record of meeting deadlines
• Ability to be creative and analytical in a problem-solving environment
• Effective verbal and written communication skills
• Adaptable to new environments, people, technologies, and processes
• Ability to manage ambiguity and solve undefined problems
• Demonstrated proficiency in development and administration in at least one of the following areas: back-end data integration and architecture, including dimensional data modeling, database design, data warehousing, ETL development, and performance tuning; or front-end reporting and analytics platforms, including OLAP cube design, tabular data modeling, PerformancePoint, Power Pivot, Power View, and Power BI report and dashboard development
• Extremely strong SQL skills
• Foundational knowledge of metadata management, Master Data Management, data governance, and data analytics
• Deep experience in Kimball and Inmon modeling techniques
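To make the dimensional-modeling requirement concrete, below is a minimal, hypothetical star-schema sketch issued from Python via pyodbc; the server, database, and table definitions are assumptions for illustration, not this employer's actual schema.

```python
import pyodbc

# Illustrative star schema for a retail sales mart; connection string
# and all object names are assumptions.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=SalesMart;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Dimension table: one row per calendar day, keyed by a smart integer key.
cur.execute("""
CREATE TABLE dbo.DimDate (
    DateKey      INT      NOT NULL PRIMARY KEY,  -- e.g. 20250601
    FullDate     DATE     NOT NULL,
    CalendarYear SMALLINT NOT NULL,
    MonthNumber  TINYINT  NOT NULL
);
""")

# Fact table: additive measures plus foreign keys to the dimensions.
cur.execute("""
CREATE TABLE dbo.FactSales (
    DateKey     INT           NOT NULL REFERENCES dbo.DimDate (DateKey),
    ProductKey  INT           NOT NULL,   -- FK to DimProduct (omitted here)
    SalesAmount DECIMAL(18,2) NOT NULL,
    Quantity    INT           NOT NULL
);
""")
conn.commit()
```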

Posted 2 months ago

Apply

11.0 - 16.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Job Title: Data Modeler / Data Analyst
Location: PAN India
Experience: 11-16 Years
Notice Period: 30 Days Max
Employment Type: Full-Time
Industry: IT Services / Data Engineering / Analytics

Job Overview
We are looking for a highly experienced and detail-oriented Data Modeler / Data Analyst to join our growing data architecture team. This role is critical to designing and maintaining robust data models that serve as the foundation for enterprise-wide data solutions. The ideal candidate will have strong expertise in data modeling principles, ETL processes, cloud platforms, and enterprise data warehousing technologies.

Key Responsibilities
• Gather and analyze business requirements through JAD sessions and end-user interactions to design data solutions
• Lead the design, development, and implementation of enterprise-wide data architecture for both structured and unstructured data
• Build logical, physical, relational, and dimensional data models using industry-standard tools like Erwin
• Work across teams to ensure seamless data flow from transformation to consumption, including delivery via APIs
• Perform data analysis, profiling, cleansing, migration, validation, auditing, and manipulation across diverse sources
• Design ETL specifications, workflows, and flowcharts, and work closely with ETL teams to implement solutions
• Develop and manage database objects such as tables, packages, stored procedures, and triggers
• Utilize cloud platforms (AWS, Azure, GCP) for data integration, modeling, and delivery solutions
• Implement best practices for naming conventions, metadata, and design standards across enterprise models
• Ensure data consistency, reusability, and alignment with business requirements and governance frameworks
• Evaluate existing data systems and recommend enhancements or redesigns as needed
• Collaborate with stakeholders to optimize data warehouse and big data platforms for analytics and reporting
• Maintain up-to-date data dictionaries and manage model documentation

Minimum Qualifications
• Strong experience in Information Technology as a Data Analyst and Data Modeler
• Expertise in data modeling tools such as Erwin
• Solid knowledge of RDBMS (e.g., Teradata, Oracle) and concepts like OLTP, OLAP, and Star and Snowflake schemas
• Hands-on experience with the Snowflake data platform
• Proficiency in developing logical, physical, and dimensional data models
• Deep understanding of SQL, data warehousing, and integration techniques
• Experience in ETL process design and working with large data volumes

Preferred Qualifications
• Hands-on experience with Teradata SQL and utilities such as Mload, Tpump, Fastload, and FastExport
• Strong knowledge of cloud data and analytics services (Azure, AWS, GCP)
• Certification and project-level experience in Azure and/or AWS cloud ecosystems
• Excellent understanding of data governance, metadata management, and best practices
• Ability to troubleshoot and tune SQL queries for performance optimization
• Experience developing and maintaining metadata repositories and data dictionaries
• Strong interpersonal and communication skills for stakeholder engagement

If you're passionate about data architecture, cloud platforms, and building scalable, high-quality data models, apply now and be part of a team transforming enterprise data strategy.
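As an illustration of the profiling and validation work listed above, here is a small, hypothetical pandas pass over a source extract; the file and column names are assumptions.

```python
import pandas as pd

# Minimal profiling/validation pass over a source extract before modeling;
# "customer_extract.csv" and its columns are invented for illustration.
df = pd.read_csv("customer_extract.csv")

profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "nulls": df.isna().sum(),
    "null_pct": (df.isna().mean() * 100).round(2),
    "distinct": df.nunique(),
})
print(profile)

# Simple rule checks that would feed a data-quality report.
assert df["customer_id"].is_unique, "customer_id must be unique (candidate key)"
bad_email = df[~df["email"].str.contains("@", na=False)]
print(f"{len(bad_email)} rows with malformed email")
```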

Posted 2 months ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Data Architect
Location: Noida, India

Data Architecture Design:
• Design, develop, and maintain the enterprise data architecture, including data models, database schemas, and data flow diagrams.
• Develop a data strategy and roadmap that aligns with business objectives and ensures the scalability of data systems.
• Architect both transactional (OLTP) and analytical (OLAP) databases, ensuring optimal performance and data consistency.

Data Integration & Management:
• Oversee the integration of disparate data sources into a unified data platform, leveraging ETL/ELT processes and data integration tools.
• Design and implement data warehousing solutions, data lakes, and/or data marts that enable efficient storage and retrieval of large datasets.
• Ensure proper data governance, including the definition of data ownership, security, and privacy controls in accordance with compliance standards (GDPR, HIPAA, etc.).

Collaboration with Stakeholders:
• Work closely with business stakeholders, including analysts, developers, and executives, to understand data requirements and ensure that the architecture supports analytics and reporting needs.
• Collaborate with DevOps and engineering teams to optimize database performance and support large-scale data processing pipelines.

Technology Leadership:
• Guide the selection of data technologies, including databases (SQL/NoSQL), data processing frameworks (Hadoop, Spark), cloud platforms (Azure is a must), and analytics tools.
• Stay updated on emerging data management technologies, trends, and best practices, and assess their potential application within the organization.

Data Quality & Security:
• Define data quality standards and implement processes to ensure the accuracy, completeness, and consistency of data across all systems.
• Establish protocols for data security, encryption, and backup/recovery to protect data assets and ensure business continuity.

Mentorship & Leadership:
• Lead and mentor data engineers, data modelers, and other technical staff in best practices for data architecture and management.
• Provide strategic guidance on data-related projects and initiatives, ensuring that all efforts are aligned with the enterprise data strategy.

Required Skills & Experience:
• Extensive data architecture expertise: over 7 years of experience in data architecture, data modeling, and database management.
• Proficiency in designing and implementing relational (SQL) and non-relational (NoSQL) database solutions.
• Strong experience with data integration tools (Azure tools are a must, plus any other third-party tools), ETL/ELT processes, and data pipelines.
• Advanced knowledge of data platforms: expertise in the Azure cloud data platform is a must; experience with other platforms such as AWS (Redshift, S3), Azure (Data Lake, Synapse), and/or Google Cloud Platform (BigQuery, Dataproc) is a bonus.
• Experience with big data technologies (Hadoop, Spark) and distributed systems for large-scale data processing.
• Hands-on experience with data warehousing solutions and BI tools (e.g., Power BI, Tableau, Looker).
• Data governance & compliance: strong understanding of data governance principles, data lineage, and data stewardship; knowledge of industry standards and compliance requirements (e.g., GDPR, HIPAA, SOX) and the ability to architect solutions that meet these standards.
• Technical leadership: proven ability to lead data-driven projects, manage stakeholders, and drive data strategies across the enterprise.
• Strong programming skills in languages such as Python, SQL, R, or Scala.
Certification: Azure Certified Solution Architect, Data Engineer, or Data Scientist certifications are mandatory.

Pre-Sales Responsibilities:
• Stakeholder Engagement: Work with product stakeholders to analyze functional and non-functional requirements, ensuring alignment with business objectives.
• Solution Development: Develop end-to-end solutions involving multiple products, ensuring security and performance benchmarks are established, achieved, and maintained.
• Proof of Concepts (POCs): Develop POCs to demonstrate the feasibility and benefits of proposed solutions.
• Client Communication: Communicate system requirements and solution architecture to clients and stakeholders, providing technical assistance and guidance throughout the pre-sales process.
• Technical Presentations: Prepare and deliver technical presentations to prospective clients, demonstrating how proposed solutions meet their needs and requirements.

Additional Responsibilities:
• Stakeholder Collaboration: Engage with stakeholders to understand their requirements and translate them into effective technical solutions.
• Technology Leadership: Provide technical leadership and guidance to development teams, ensuring the use of best practices and innovative solutions.
• Integration Management: Oversee the integration of solutions with existing systems and third-party applications, ensuring seamless interoperability and data flow.
• Performance Optimization: Ensure solutions are optimized for performance, scalability, and security, addressing any technical challenges that arise.
• Quality Assurance: Establish and enforce quality assurance standards, conducting regular reviews and testing to ensure robustness and reliability.
• Documentation: Maintain comprehensive documentation of the architecture, design decisions, and technical specifications.
• Mentoring: Mentor fellow developers and team leads, fostering a collaborative and growth-oriented environment.

Qualifications:
• Education: Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
• Experience: Minimum of 7 years of experience in data architecture, with a focus on developing scalable and high-performance solutions.
• Technical Expertise: Proficient in architectural frameworks, cloud computing, database management, and web technologies.
• Analytical Thinking: Strong problem-solving skills, with the ability to analyze complex requirements and design scalable solutions.
• Leadership Skills: Demonstrated ability to lead and mentor technical teams, with excellent project management skills.
• Communication: Excellent verbal and written communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders.
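One way to picture the OLTP-to-OLAP flow this role oversees: a hedged PySpark sketch that extracts a table over JDBC in parallel and lands it in a date-partitioned analytical layer. The connection string, partition bounds, and paths are placeholders, not the employer's actual configuration.

```python
from pyspark.sql import SparkSession

# Batch ELT step: pull a table from an OLTP database over JDBC and land
# it in a date-partitioned analytical layer; all details are illustrative.
spark = SparkSession.builder.appName("oltp-to-lake").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://oltp-host:1433;databaseName=Orders")
    .option("dbtable", "dbo.Orders")
    .option("user", "etl_user")
    .option("password", "***")
    .option("numPartitions", 8)            # parallel partitioned reads
    .option("partitionColumn", "OrderID")  # numeric key to split the range on
    .option("lowerBound", 1)
    .option("upperBound", 10_000_000)
    .load()
)

# Partition the landed files by date so analytical readers can prune.
(orders.write
       .mode("overwrite")
       .partitionBy("OrderDate")
       .parquet("abfss://lake@account.dfs.core.windows.net/silver/orders/"))
```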

Posted 2 months ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
Senior Technical Manager, Full Stack – C2
Location: Chennai
Employment Type: Permanent

Key Responsibilities
• Solution Design: Architect and design end-to-end data solutions, ensuring scalability, performance, and reliability.
• Project Leadership: Drive the delivery of the data roadmap, managing timelines, resources, and stakeholder expectations.
• Data Warehousing: Oversee the development and optimization of SQL warehousing processes and structures.
• Reporting and Analytics: Lead the creation and maintenance of dashboards and reports using Power BI and SSRS, ensuring data integrity and accessibility.
• ETL Processes: Design and implement ETL processes using SSIS, ensuring data is accurately transformed and loaded.
• Collaboration: Work closely with cross-functional teams to gather requirements and translate them into technical specifications.
• Mentorship: Provide technical guidance and mentorship to team members, fostering a culture of continuous learning.
• Best Practices: Establish and enforce best practices for data governance, quality, and security.

Basic Functions
• 12+ years of experience in enterprise application design, development, and support.
• Design, develop, and maintain enterprise BI reports, visualizations, and dashboards in Power BI.
• Handle high-volume SQL warehouses; extensive knowledge of MS SQL, SSIS, and visual reporting.
• Cloud development and deployment experience on Azure services such as Azure SQL, Data Factory, and Databricks (Synapse nice to have).
• Good to have knowledge of Python for handling Big Data using Spark.
• Source control knowledge: Git, DevOps, coding champion, and so on.
• Responsible for leading detailed design, end-to-end development (front-end and back-end), unit testing, and integration of applications; design client-side and server-side architecture.
• Should have people management responsibilities.
• Produce scalable, flexible, high-quality code that satisfies both functional and non-functional requirements.
• Develop, deploy, test, and maintain technical assets in a highly secure and integrated enterprise computing environment; support functional testing.
• Cross-train and mentor team members for complete knowledge of technologies.
• Analyze and translate business requirements to technical design with security and data protection settings.
• Build features and applications with a mobile-responsive design.
• Collaborate/communicate with the on-site project team and business users as required.
• Work with development teams and product managers to ideate software solutions.
• Manage expectations regarding roadblocks proactively in the critical path to help ensure successful delivery of the solution.
• Own all project deliverables and ensure proper communication between teams and quality levels; responsible for end-to-end solution delivery.
• Comprehend the fundamental solution being developed/deployed – its business value, blueprint, how it fits with the overall architecture, risks, and more.
• Provide inputs on solution and design effort to build scalable features/functionality.

Essential Functions
• Multi-disciplinary technologist who enjoys designing and executing healthcare solutions.
• Understanding of the US Healthcare value chain and key impact drivers (Payer and/or Provider).
• Strong problem-solving and analytical skills and the ability to "roll up your sleeves" and work with a client to create timely solutions and resolutions.
• Ability to work on multiple product features simultaneously.
• Quick learner with the ability to understand the product's functionality end to end.
• Opportunity to try out bleeding-edge technologies to provide POCs, which will be evaluated and put to use if approved.
• Strong knowledge of algorithms, design patterns, and fundamental computer science concepts.
• Experience working in Agile (SCRUM) environments and familiarity with iterative development cycles.
• Experience implementing authentication and authorization with OAuth, and use of Single Sign-On and SAML-based authentication.
• Familiarity with common stacks.

Primary Internal Interactions
• Review with the overall Product Manager & AVP for improvements in the product development lifecycle.
• Assessment meetings with VP & above for additional product development features.
• Train and mentor the juniors in the team.

Primary External Interactions
• Communicate with onshore stakeholders and executive team members.
• Help the Product Management Group set the product roadmap and help in identifying future sellable product features.
• Client interactions to better understand expectations and streamline solutions. If required, should be a bridge between the client and the technology teams.

Technical Skills Required
• Azure cloud – Azure SQL, Azure Data Factory, Databricks
• Power BI reports and self-service Power BI report building
• Data analysis tools – online analytical processing (OLAP), ETL frameworks, and ETL data loads
• SQL Server 2008 & above: SQL, stored procedures, functions
• Must have experience with Azure-hosted cloud applications

Skills Nice To Have
• Azure Synapse
• Experience with Big Data tools, including but not limited to Python, PySpark, HIVE
• Expertise in US Healthcare Insurance
• Experience with mobile application Power BI report development
• Stack Overflow account score
• Technical blogs and technical write-ups
• Experience in Cloud & NLP technologies
• Certifications in Azure, Agile & Waterfall methodologies

Process Specific Skills
• Business Domain: US Healthcare Insurance & Payer Analytics
• Care Coordination & Care Optimization
• Population Health Analytics & Risk Management
• Member 360 View & Analytics
• Gaps & Compliance Measures
• Payer Management & Code Classification Management
• Utilization & Cost Management

Soft Skills
• Understanding of the healthcare business vertical and the business terms within.
• Good analytical skills.
• Strong communication skills, both oral and written.
• Ability to work with various stakeholders across various geographies.
• Excellent team player, with the ability to build and sustain teams.
• Should be able to function as an individual contributor as well, if required.
• Mentor people and create a high-performing organization (foster relationships, resolve conflicts, and so on) while delivering performance feedback.

Working Hours
General shift – 11 AM to 8 PM; will be required to extend as per project release needs.

Education Requirements
Master's or bachelor's degree from top-tier colleges with good grades, preferably from an engineering background.
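As a concrete instance of the SSIS/SQL warehousing work described above, here is a hypothetical incremental (upsert) load expressed as a T-SQL MERGE executed from Python with pyodbc; every object name and the hash-compare convention are illustrative assumptions.

```python
import pyodbc

# Hypothetical incremental load: staging table -> warehouse dimension.
# Server, database, and all object names are invented for illustration.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tcp:example.database.windows.net;DATABASE=EDW;UID=etl_user;PWD=***"
)
cur = conn.cursor()
cur.execute("""
MERGE dbo.DimMember AS tgt
USING stg.Member AS src
   ON tgt.MemberID = src.MemberID
WHEN MATCHED AND tgt.RowHash <> src.RowHash THEN      -- update only changed rows
    UPDATE SET tgt.FullName = src.FullName,
               tgt.PlanCode = src.PlanCode,
               tgt.RowHash  = src.RowHash
WHEN NOT MATCHED BY TARGET THEN                       -- insert brand-new members
    INSERT (MemberID, FullName, PlanCode, RowHash)
    VALUES (src.MemberID, src.FullName, src.PlanCode, src.RowHash);
""")
conn.commit()
```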

Posted 2 months ago

Apply

3.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
Senior Programmer Analyst, SQL – A2/B1
Employment Type: Permanent
Location: Chennai

Basic Functions
• 3-6 years of experience in enterprise application design, development, and support.
• Produce scalable, flexible, high-quality code that satisfies both functional and non-functional requirements.
• Develop, deploy, test, and maintain technical assets in a highly secure and integrated enterprise computing environment; support functional testing.
• Handle high-volume SQL warehouses; extensive knowledge of MS SQL, SSIS, and visual reporting (SSRS, Power BI).
• Identify business reporting and analytic needs by collaborating with end users, stakeholders, and technology teams.
• Design, develop, and maintain enterprise BI reports, visualizations, and dashboards in Power BI.
• Work with the report team to design and implement a reporting user experience that is consistent and intuitive across environments and report methods.
• Provide report testing support and execute report and analytics unit testing.
• Document report designs and test cases, and assist with end-user documentation.
• Cross-train and mentor team members for complete knowledge of technologies.
• Analyze and translate business requirements to technical design with security and data protection settings.
• Collaborate/communicate with the on-site project team and business users as required.
• Work with development teams and product managers to ideate software solutions.
• Own all project deliverables and ensure proper communication between teams and quality levels; responsible for end-to-end solution delivery.
• Comprehend the fundamental solution being developed/deployed – its business value, blueprint, how it fits with the overall architecture, risks, and more.
• Provide inputs on solution and design effort to build scalable features/functionality.
• Good to have: experience working with Azure services such as Azure SQL, Data Factory, and Databricks (Synapse nice to have).

Essential Functions
• Multi-disciplinary technologist who enjoys designing and executing healthcare solutions.
• Understanding of the US Healthcare value chain and key impact drivers (Payer and/or Provider).
• Strong problem-solving and analytical skills and the ability to "roll up your sleeves" and work with a client to create timely solutions and resolutions.
• Ability to work on multiple product features simultaneously.
• Quick learner with the ability to understand the product's functionality end to end.
• Strong knowledge of algorithms, design patterns, and fundamental computer science concepts.
• Responsible for tuning queries for performance, and for the security and stability of the reports.
• Need to understand the functional aspects of the reports being developed and perform deployment.
• Detailing and analysis required in data creation and Tableau reporting.

Primary Internal Interactions
• Review with the overall Product Manager & AVP for improvements in the product development lifecycle.
• Assessment meetings with VP & above for additional product development features.
• Train and mentor the juniors in the team.

Primary External Interactions
• Communicate with onshore stakeholders and executive team members.
• Help the Product Management Group set up the product roadmap and help in identifying future sellable product features.
• Client interactions to better understand expectations and streamline solutions. If required, should be a bridge between the client and the technology teams.
Technical Skills Required
• Power BI reports and self-service Power BI report building
• Data analysis tools – online analytical processing (OLAP), ETL frameworks, and ETL data loads; SSIS
• SQL Server 2012 & above: SQL, stored procedures, functions
• SSRS Report Builder
• Must have experience with Azure-hosted cloud applications
• Azure Cloud – Databricks, Azure SQL, Data Factory

Skills Nice To Have
• Enterprise business intelligence platform (Tableau)
• Expertise in US Healthcare Insurance
• Part of any open-source contributions
• Certifications in Agile & Waterfall methodologies
• Certifications in Azure

Process Specific Skills
• Business Domain: US Healthcare Insurance & Payer Analytics
• Care Coordination & Care Optimization
• Population Health Analytics & Risk Management
• Member 360 View & Analytics
• Gaps & Compliance Measures
• Payer Management & Code Classification Management
• Utilization & Cost Management

Soft Skills
• Understanding of the healthcare business vertical and the business terms within.
• Good analytical skills.
• Strong communication skills, both oral and written.
• Ability to work with various stakeholders across various geographies.
• Excellent team player, with the ability to build and sustain teams.
• Should be able to function as an individual contributor as well, if required.
• Mentor people and create a high-performing organization (foster relationships, resolve conflicts, and so on) while delivering performance feedback.

Working Hours
General shift – 11 AM to 8 PM; will be required to extend as per project release needs.

Education Requirements
Master's or bachelor's degree from top-tier colleges with good grades, preferably from an engineering background.

Posted 2 months ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Mumbai

Work from Office

Essential Services: Role & Location Fungibility
Employees at ICICI Bank are expected to be role- and location-fungible, with the understanding that banking is an essential service. The role descriptions give you an overview of the responsibilities; they are only directional and guiding in nature.

About the Role:
As a Data Warehouse Architect, you will be responsible for managing and enhancing a data warehouse that handles large volumes of customer-lifecycle data flowing in from various applications, within guardrails of risk and compliance. You will manage the day-to-day operations of the data warehouse (Vertica). In this role, you will manage a team of data warehouse engineers covering data modelling, ETL data pipeline design, issue management, upgrades, performance fine-tuning, migration, governance, and the security framework of the data warehouse. This role enables the Bank to maintain huge data sets in a structured manner that is amenable to data intelligence. The data warehouse supports numerous information systems used by various business groups to derive insights. As a natural progression, the data warehouse will gradually be migrated to a Data Lake, enabling better analytical advantage. The role holder will also be responsible for guiding the team through this migration.

Key Responsibilities:
• Data Pipeline Design: Design and develop ETL data pipelines that help organize large volumes of data. Use data warehousing technologies to ensure that the data warehouse is efficient, scalable, and secure.
• Issue Management: Ensure that the data warehouse is running smoothly. Monitor system performance, diagnose and troubleshoot issues, and make necessary changes to optimize system performance.
• Collaboration: Collaborate with cross-functional teams to implement upgrades, migrations, and continuous improvements.
• Data Integration and Processing: Process, clean, and integrate large data sets from various sources to ensure that the data is accurate, complete, and consistent.
• Data Modelling: Design and implement data modelling solutions to ensure that the organization's data is properly structured and organized for analysis.

Key Qualifications & Skills:
• Education Qualification: B.E./B.Tech. in Computer Science, Information Technology, or an equivalent domain, with 10 to 12 years of experience and at least 5 years of relevant work experience in data warehousing/mining/BI/MIS.
• Experience in Data Warehousing: Knowledge of ETL and data technologies, with an outline of the future vision in OLTP and OLAP (Oracle / MS SQL). Data modelling, data analysis, and visualization experience (analytical tools such as Power BI / SAS / QlikView / Tableau).
• Good to have exposure to Azure cloud data platform services such as Cosmos DB, Azure Data Lake, Azure Synapse, and Azure Data Factory.
• Synergize with the Team: Regular interaction with business/product/functional teams to create mobility solutions.
• Certification: Azure-certified DP-900, PL-300, DP-203, or other data platform / data analyst certifications.
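For a feel of the Vertica-side work this role describes, below is a hedged sketch using the vertica-python DB-API driver to create a month-partitioned fact table and bulk-load it with COPY; hosts, credentials, table, and file path are invented for illustration.

```python
import vertica_python

# Hedged sketch, assuming the vertica-python driver and a reachable
# Vertica cluster; every name and credential here is a placeholder.
conn_info = {"host": "vertica-host", "port": 5433,
             "user": "dbadmin", "password": "***", "database": "edw"}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    # Month-partitioned fact table so old partitions can be dropped cheaply.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS txn_fact (
            txn_id     BIGINT,
            account_id BIGINT,
            txn_date   DATE NOT NULL,
            amount     NUMERIC(18,2)
        )
        PARTITION BY YEAR(txn_date) * 100 + MONTH(txn_date)
    """)
    # COPY ... FROM LOCAL streams a client-side file through the driver.
    cur.execute(
        "COPY txn_fact FROM LOCAL '/data/txn_2025_06.csv' "
        "DELIMITER ',' ABORT ON ERROR"
    )
```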

Posted 2 months ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Gurugram, Dlf

Work from Office

Location: Gurugram - DLF Building | Posted 10 Days Ago | Application end date: May 31, 2025 (10 hours left to apply) | Job requisition ID: R_295096

Company: Oliver Wyman

Description: Oliver Wyman is a global leader in management consulting. With offices in 60 cities across 29 countries, Oliver Wyman combines deep industry knowledge with specialized expertise in strategy, operations, risk management, and organization transformation. Our 5,000 professionals help clients optimize their business, improve their operations and risk profile, and accelerate their organizational performance to seize the most attractive opportunities. Oliver Wyman's thought leadership is evident in our agenda-setting books, white papers, research reports, and articles in the business press. Our clients are the CEOs and executive teams of the top Global 1000 companies. Visit our website for more details about Oliver Wyman.

Job Specification
Job title: Senior Data Engineer
Department: OWG Tech
Office/region: India
Reports to: Director of Data Engineering

Job Overview:
The OWG Technology department is seeking a highly skilled and motivated Senior Data Engineer to play a critical role in our data transformation program. In this position, you will lead major projects and workstreams, collaborating closely with stakeholders to ensure the successful implementation of data solutions. You will also mentor and coach junior team members, providing guidance during sprints as a technical lead. Your expertise in cloud data platforms, particularly the Databricks Lakehouse, will be essential in driving innovation and best practices within the team.

Key Responsibilities:
• Lead the design and implementation of processes to ingest data from various sources into the Databricks Lakehouse platform, ensuring alignment with architectural and engineering standards.
• Oversee the development, maintenance, and optimization of data models and ETL pipelines that support the Medallion Architecture (Bronze, Silver, Gold layers) to enhance data processing efficiency and facilitate data transformation.
• Utilize Databricks to integrate, consolidate, and cleanse data, ensuring accuracy and readiness for analysis, while leveraging Delta Lake for versioned data management.
• Implement and manage Unity Catalog for centralized data governance, ensuring proper data access, security, and compliance with organizational policies and regulations.
• Collaborate with business analysts, data scientists, and stakeholders to understand their data requirements and deliver tailored solutions that leverage the capabilities of the Databricks Lakehouse platform.
• Promote available data and analytics capabilities to business stakeholders, educating them on how to effectively leverage these tools and the Medallion Architecture for their analytical needs.

Experience:
• Bachelor's or master's degree in Computer Science, Data Science, Engineering, or a related field.
• Minimum of 7+ years of experience in data engineering or a related data role, with a proven track record of leading projects and initiatives.
• Expertise in designing and implementing production-grade Spark-based solutions.
• Expertise in query tuning, performance tuning, troubleshooting, and debugging Spark or other big data solutions.
• Proficient in big data technologies such as Spark/Delta, Hadoop, NoSQL, MPP, and OLAP.
• Proficient in cloud architecture, systems, and principles, particularly in AWS.
• Proficient in programming languages such as Python, R, Scala, or Java.
• Expertise in scaling ETL pipelines for performance and cost-effectiveness.
• Experience in building and scaling streaming data pipelines.
• Strong understanding of DevOps tools and best practices for data engineering, including CI/CD, unit and integration testing, automation, and orchestration.
• Cloud or Databricks certification is highly desirable.

Skills and Attributes:
• Full professional proficiency in both written and spoken English.
• Strong problem-solving and troubleshooting skills.
• Excellent communication skills, both verbal and written, with the ability to articulate complex concepts clearly and engage effectively with diverse audiences.
• Proven ability to lead and mentor junior team members, fostering a collaborative and high-performing team environment.
• Neutral toward technology, vendor, and product choices, prioritizing results over personal preferences.
• Resilient and composed in the face of opposition to ideas, demonstrating a collaborative spirit.
• Lead the migration of existing ETL processes from Informatica IICS and SSIS to cloud-based data pipelines within the Databricks environment, ensuring minimal disruption and maximum efficiency.
• Act as a technical lead during sprints, providing guidance and support to team members, and ensuring adherence to best practices in data engineering.
• Engage with clients and stakeholders to support architectural designs, address technical queries, and provide strategic guidance on utilizing the Databricks Lakehouse platform effectively.
• Stay updated on industry trends and emerging technologies in data engineering, particularly those related to Databricks, cloud data solutions, and ETL migration strategies, continuously enhancing your skills and knowledge.
• Demonstrate excellent problem-solving skills, with an ability to see and solve issues before they affect business productivity.
• Demonstrate thought leadership by contributing to the development of best practices, standards, and documentation for data engineering processes within the organization.

Why join our team:
We help you be your best through professional development opportunities, interesting work, and supportive leaders. We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients, and communities. Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being.
Marsh McLennan (NYSE: MMC) is a global leader in risk, strategy, and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer, and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit marshmclennan.com, or follow on LinkedIn and X.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law.

Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one anchor day per week on which their full team will be together in person.
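A minimal sketch of the Medallion (Bronze/Silver/Gold) flow on Delta Lake that this posting centers on, assuming a Databricks-style environment; the table names, paths, and cleansing rules are placeholders rather than Oliver Wyman's actual design.

```python
from pyspark.sql import SparkSession, functions as F

# Assumes pre-created bronze/silver/gold schemas in the metastore;
# everything below is illustrative, not a production design.
spark = SparkSession.builder.getOrCreate()

# Bronze: raw ingest, kept as-delivered plus load metadata.
(spark.read.json("/landing/events/")
      .withColumn("_ingested_at", F.current_timestamp())
      .write.format("delta").mode("append").saveAsTable("bronze.events"))

# Silver: cleaned, de-duplicated, typed.
(spark.table("bronze.events")
      .dropDuplicates(["event_id"])
      .withColumn("event_ts", F.to_timestamp("event_ts"))
      .filter(F.col("event_id").isNotNull())
      .write.format("delta").mode("overwrite").saveAsTable("silver.events"))

# Gold: business-level aggregate ready for BI consumption.
(spark.table("silver.events")
      .groupBy(F.to_date("event_ts").alias("event_date"), "event_type")
      .count()
      .write.format("delta").mode("overwrite").saveAsTable("gold.daily_event_counts"))
```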

Posted 2 months ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Introduction
A career in IBM Consulting embraces long-term relationships and close collaboration with clients across the globe. In this role, you will work for IBM BPO, part of Consulting, which accelerates digital transformation using agile methodologies, process mining, and AI-powered workflows. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including IBM Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be supported by mentors and coaches who will encourage you to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and learning opportunities in an environment that embraces your unique skills and experience.

Your Role and Responsibilities
• Develop, test, and support future-ready data solutions for customers across industry verticals.
• Develop, test, and support end-to-end batch and near real-time data flows/pipelines.
• Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance, information management, and associated technologies.
• Communicate risks and ensure understanding of these risks.
• Graduate with a minimum of 5+ years of related experience required.
• Experience in modelling and business system design.
• Good hands-on experience with DataStage and cloud-based ETL services.
• Great expertise in writing T-SQL code.
• Well-versed in data warehouse schemas and OLAP techniques.

Preferred Education: Master's Degree

Required Technical and Professional Expertise
• Ability to manage and make decisions about competing priorities and resources, and to delegate where appropriate.
• Must be a strong team player/leader.
• Ability to lead data transformation projects with multiple junior data engineers.
• Strong oral, written, and interpersonal skills for interacting at all levels of the organization.
• Ability to communicate complex business problems and technical solutions.
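As one example of the OLAP techniques in T-SQL that the posting asks for, here is a hypothetical GROUP BY ROLLUP aggregation run from Python via pyodbc, producing subtotals and a grand total in a single pass; the connection details and table names are assumptions.

```python
import pyodbc

# Hypothetical OLAP-style aggregation: ROLLUP yields per-group subtotals
# plus a grand total in one pass; connection and names are illustrative.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=EDW;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute("""
    SELECT Region, ProductLine, SUM(SalesAmount) AS TotalSales
    FROM dbo.FactSales
    GROUP BY ROLLUP (Region, ProductLine);
""")
for region, product_line, total in cur.fetchall():
    # NULL grouping columns mark subtotal and grand-total rows.
    print(region or "ALL", product_line or "ALL", total)
```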

Posted 2 months ago

Apply

5.0 - 10.0 years

12 - 16 Lacs

Bengaluru

Work from Office

• Develop, test, and support future-ready data solutions for customers across industry verticals.
• Develop, test, and support end-to-end batch and near real-time data flows/pipelines.
• Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance, information management, and associated technologies.
• Communicate risks and ensure understanding of these risks.
• Graduate with a minimum of 5+ years of related experience required.
• Experience in modelling and business system design.
• Good hands-on experience with DataStage and cloud-based ETL services.
• Great expertise in writing T-SQL code.
• Well-versed in data warehouse schemas and OLAP techniques.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
• Ability to manage and make decisions about competing priorities and resources, and to delegate where appropriate.
• Must be a strong team player/leader.
• Ability to lead data transformation projects with multiple junior data engineers.
• Strong oral, written, and interpersonal skills for interacting at all levels of the organization.
• Ability to communicate complex business problems and technical solutions.

Posted 2 months ago

Apply

40.0 years

0 Lacs

Hyderābād

On-site

JOB ID: R-216617 | LOCATION: India - Hyderabad | WORK LOCATION TYPE: On Site | DATE POSTED: Jun. 01, 2025 | CATEGORY: Information Systems

Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission – to serve patients living with serious illnesses – drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Sr. Data Engineer
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

What you will do
Let's do this. Let's change the world. We are looking for a highly motivated, expert Senior Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will design, develop, and optimize data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role calls for deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
• Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric.
• Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture.
• Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency.
• Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance.
• Ensure data security, compliance, and role-based access control (RBAC) across data environments.
• Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets.
• Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring.
• Implement data virtualization techniques to provide seamless access to data across multiple storage systems.
• Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals.
• Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures.
What we expect of you
We are all different, yet we all use our unique contributions to serve patients. The professional we seek will have these qualifications.

Basic Qualifications:
• Master's degree and 3 to 4+ years of Computer Science, IT, or related field experience; OR
• Bachelor's degree and 5 to 8+ years of Computer Science, IT, or related field experience; OR
• Diploma and 10 to 12 years of Computer Science, IT, or related field experience

Must-Have Skills:
• Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
• Proficiency in workflow orchestration and performance tuning of big data processing.
• Strong understanding of AWS services.
• Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures.
• Ability to quickly learn, adapt, and apply new technologies.
• Strong problem-solving and analytical skills.
• Excellent communication and teamwork skills.
• Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Preferred Qualifications:
• Deep expertise in the Biotech & Pharma industries.
• Experience in writing APIs to make data available to consumers.
• Experience with SQL/NoSQL databases and vector databases for large language models.
• Experience with data modeling and performance tuning for both OLAP and OLTP databases.
• Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
• AWS Certified Data Engineer preferred.
• Databricks certification preferred.
• Scaled Agile SAFe certification preferred.

Soft Skills:
• Excellent analytical and troubleshooting skills.
• Strong verbal and written communication skills.
• Ability to work effectively with global, virtual teams.
• High degree of initiative and self-motivation.
• Ability to manage multiple priorities successfully.
• Team-oriented, with a focus on achieving team goals.
• Ability to learn quickly, be organized, and be detail-oriented.
• Strong presentation and public-speaking skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. Apply now and make a lasting impact with the Amgen team: careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
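To ground the partitioning-and-caching bullet points above, here is a short, hedged PySpark sketch of three common performance levers; the dataset, key columns, and paths are illustrative assumptions.

```python
from pyspark.sql import SparkSession

# Common tuning levers on an illustrative dataset; paths and columns
# are assumptions, not an actual Amgen pipeline.
spark = SparkSession.builder.appName("perf-tuning-sketch").getOrCreate()

df = spark.read.parquet("s3://example/claims/")          # hypothetical path

# 1. Repartition on the join/filter key to reduce shuffle skew downstream.
df = df.repartition(200, "patient_id")

# 2. Cache only when the same intermediate is reused by several actions.
df.cache()
print(df.count())                                        # materializes the cache

# 3. Lay data out by a low-cardinality column so readers can prune files.
(df.write.mode("overwrite")
   .partitionBy("claim_year")
   .parquet("s3://example/claims_curated/"))
```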

Posted 2 months ago

Apply

6.0 years

3 - 6 Lacs

Noida

On-site

Hello! You've landed on this page, which means you're interested in working with us. Let's take a sneak peek at what it's like to work at Innovaccer.

Engineering at Innovaccer
With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point we gather into valuable insights for our customers. Join us and be part of a team that's turning dreams of better healthcare into reality, one line of code at a time. Together, we're shaping the future and making a meaningful impact on the world.

Join Us in Transforming Healthcare with the Power of Data & AI
At Innovaccer, we're on a bold mission to revolutionize healthcare by building the most advanced Healthcare Intelligence Platform ever created. Grounded in an AI-first design philosophy, our platform turns complex health data into real-time intelligence, empowering healthcare systems to make faster, smarter decisions. We are building a unified, end-to-end data platform that spans Data Acquisition & Integration, Master Data Management, Data Classification & Governance, Advanced Analytics & AI Studio, App Marketplace, AI-as-BI capabilities, and more. All of this is powered by an agent-first approach, enabling customers to build solutions dynamically and at scale. You'll have the opportunity to define and develop platform capabilities that help healthcare organizations tackle some of the industry's most pressing challenges, such as kidney disease management, clinical trials optimization for pharmaceutical companies, supply chain intelligence for pharmacies, and many more real-world applications. We're looking for talented engineers and platform thinkers who thrive on solving large-scale, complex, and meaningful problems. If you're excited about working at the intersection of healthcare, AI, and cutting-edge platform engineering, we'd love to hear from you.

Your Role
We at Innovaccer are looking for a Software Development Engineer-III (Backend) to build the most amazing product experience. You'll get to work with other engineers to build a delightful feature experience to understand and solve our customers' pain points.

A Day in the Life
• Build efficient and reusable applications and abstractions.
• Identify and communicate back-end best practices.
• Participate in the project life cycle from pitch/prototyping through definition and design to build, integration, QA, and delivery.
• Analyze and improve the performance, scalability, stability, and security of the product.
• Improve engineering standards, tooling, and processes.

What You Need
• 6+ years of experience with a start-up mentality and a high willingness to learn.
• Expert in Python, Go, or Java, and experience with any web framework (relevant to the language).
• Aggressive problem diagnosis and creative problem-solving skills.
• Expert in Kubernetes and containerization.
• Experience in RDBMS and NoSQL databases such as Postgres and MongoDB (any OLAP database is good to have).
• Experience in solution architecture.
• Experience with cloud service providers such as AWS or Azure.
• Experience with Kafka, RabbitMQ, or other queuing services is good to have.
• Working experience in Big Data / distributed systems and async programming is a must-have.
• Bachelor's degree in Computer Science/Software Engineering.

What We Offer
• Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days.
• Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition.
• Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
• Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury.
• Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. (*Noida office only)
• Creche Facility for Children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. (*India offices)

Where and how we work
Our Noida office is situated in a posh tech space, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team. Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

About Innovaccer
Innovaccer Inc. is the data platform that accelerates innovation. The Innovaccer platform unifies patient data across systems and care settings and empowers healthcare organizations with scalable, modern applications that improve clinical, financial, operational, and experiential outcomes. Innovaccer's EHR-agnostic solutions have been deployed across more than 1,600 hospitals and clinics in the US, enabling care delivery transformation for more than 96,000 clinicians, and helping providers work collaboratively with payers and life sciences companies. Innovaccer has helped its customers unify health records for more than 54 million people and generate over $1.5 billion in cumulative cost savings. The Innovaccer platform is the #1-rated Best-in-KLAS data and analytics platform by KLAS, and the #1-rated population health technology platform by Black Book. For more information, please visit innovaccer.com. Check us out on YouTube, Glassdoor, LinkedIn, and innovaccer.com.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.
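Since the posting calls async programming a must-have, here is a small, self-contained asyncio sketch of the bounded producer/worker pattern often used behind queuing systems like Kafka or RabbitMQ; the handlers are stand-ins, not Innovaccer code.

```python
import asyncio

# Bounded-queue producer/worker pattern: backpressure comes from the
# queue's maxsize. Both handlers are placeholders for real I/O.
async def producer(queue: asyncio.Queue) -> None:
    for i in range(100):                 # stands in for reading Kafka/RabbitMQ
        await queue.put({"event_id": i})
    await queue.put(None)                # sentinel: no more work

async def worker(queue: asyncio.Queue) -> None:
    while (event := await queue.get()) is not None:
        await asyncio.sleep(0.01)        # stands in for a DB write or API call
        queue.task_done()

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=50)
    await asyncio.gather(producer(queue), worker(queue))

asyncio.run(main())
```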

Posted 2 months ago

Apply

2.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Who We Are
Eide Bailly is one of the top 25 CPA and business advisory firms in the nation. We have over 40 offices in 15 states across the Midwest and western United States and offer our staff and Partners the opportunity to serve a variety of industries. In 2019, we extended our operations to Mumbai, India, and we aim to expand our shared services segment there. Founded in 1917, our culture is the foundation of who we are, and we pride ourselves on supporting our employees to help them achieve their goals and pursue their interests both in the office and at home. At Eide Bailly we are passionate about the clients we serve, the work we do, and most importantly, having fun while we do it!

Why You'll Love Working Here
At Eide Bailly we believe respect is how to treat everyone, not just those you want to impress. Our culture focuses on collaboration to achieve career growth. Innovation is highly encouraged, which is where programs like our EB Xchange originate. This program allows interested tax and audit employees to complete a rotation into a specialty area. We promote happy employees by making work/life balance a priority along with being actively involved in our communities. Our dedication to service can be seen through the Firm's decision to match charitable donations made by employees, as well as providing opportunities to volunteer throughout the year. Most importantly, we like to have fun! We offer a professional and fun work environment with frequent lunch and learns, socials, contests, outings, and other events.

A Typical Day in the Life
A typical day as a Data Engineer Associate might include the following:
• Build ETL processes to extract data from multiple sources to build and maintain the data lake.
• Work with service areas to transform data by auditing and staging data for analytic applications.
• Create data models, including but not limited to fact and dimension tables, to support reporting technologies such as OLAP cubes, Tableau, Power BI, and Alteryx.
• Use problem solving and creativity to apply appropriate techniques in the creation of robust, scalable, and reproducible data processing assets.
• Leverage source control and pipeline tools such as GitHub, GitLab, or Git for ADO to maintain documentation and controls of what is delivered to the target data engineering environment, providing a full description of how the information is delivered, and to what environment and stage of development.
• Provide guidance and review systems for security and governance compliance.
• Ensure timely and accurate performance on assigned projects.
• Maintain compliance with project budgets, turnaround times, and deadlines.
• Monitor platform and application stability, and initiate incidents and escalations to both internal teams and external vendors.
• Perform troubleshooting, root cause analysis, implementation, and retrospective activities to overcome break-fix incidents and harden data pipelines to be more robust, stable, and scalable.

Who You Are
• Bachelor's degree in Information Systems, Computer Science, a related field, or equivalent work experience. A minor in mathematics, statistics, accounting, finance, or another quantitative discipline is preferred.
• 2+ years of work experience as a data engineer, software developer, or equivalent technology profession.

Knowledge of:
• ETL processes and tools.
• Data orchestration and preparation best practices.
• MPP systems such as MS Data Factory & Synapse, Snowflake, and Delta Lake (MS Fabric preferred).
• Streaming technologies, such as Microsoft, Kafka, Kinesis, Lambda, and Spark.
• SDLC (software development life cycle).
• Medallion Lakehouse data warehousing model (Bronze > Silver > Gold).

Experience in at least one of the following:
• MS SQL – SQL Server Integration Services, SQL Server Analysis Services, SQL Server Reporting Services.
• Python.
• Cloud data technologies – Azure and AWS.
• Data modeling using the Kimball method (star schema).
• Exposure to DevOps tools and concepts to schedule, build, and release deliverables in a controlled and reproducible environment.

• Comfortable with uncertainty. Projects and assignments will change rapidly, so you must be flexible enough to accommodate changing priorities and timelines.
• Ability to work independently; motivated to take on assigned tasks without hands-on input.
• Motivation to learn and apply a complex skillset for data engineering or data science.
• Strong interpersonal skills; able to maintain effective working relationships with staff, partners, the public, and external agencies.
• Ability to adapt to the project management lifecycle, working with program managers, business analysts, and other data professionals.
• Must possess intellectual and analytical curiosity: initiative to dig into the "why" of various results, and a desire to grow in responsibility and become a domain expert and strategic thought leader.

What To Expect Next
We'll be in touch! If you look like the right fit for our position, one of our recruiters will be reaching out to schedule a phone interview with you to learn more about your career interests and goals. In the meantime, we encourage you to check us out on Facebook, Twitter, Instagram, TikTok, or our About Us page.
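As a tiny illustration of the Kimball-style modeling mentioned above, here is a hypothetical pandas snippet that assigns surrogate keys to new dimension members while preserving existing ones; the frames and column names are invented.

```python
import pandas as pd

# Kimball-style dimension load: assign surrogate keys to new natural keys
# while keeping existing keys stable. All data here is illustrative.
existing = pd.DataFrame({
    "customer_sk": [1, 2],
    "customer_id": ["C001", "C002"],     # natural/business key
    "name": ["Acme", "Globex"],
})
incoming = pd.DataFrame({
    "customer_id": ["C002", "C003"],
    "name": ["Globex", "Initech"],
})

# Only rows whose natural key is not yet in the dimension get new keys.
new_rows = incoming[~incoming["customer_id"].isin(existing["customer_id"])].copy()
next_sk = existing["customer_sk"].max() + 1
new_rows["customer_sk"] = range(next_sk, next_sk + len(new_rows))

dim_customer = pd.concat([existing, new_rows], ignore_index=True)
print(dim_customer)
```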

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies