7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Data Architect. Reports To Title: Head of Technology. Business Function/Sub Function: IT. Location: Noida, India. Data Architecture Design: Design, develop, and maintain the enterprise data architecture, including data models, database schemas, and data flow diagrams. Develop a data strategy and roadmap that aligns with business objectives and ensures the scalability of data systems. Architect both transactional (OLTP) and analytical (OLAP) databases, ensuring optimal performance and data consistency. Data Integration & Management: Oversee the integration of disparate data sources into a unified data platform, leveraging ETL/ELT processes and data integration tools. Design and implement data warehousing solutions, data lakes, and/or data marts that enable efficient storage and retrieval of large datasets. Ensure proper data governance, including the definition of data ownership, security, and privacy controls in accordance with compliance standards (GDPR, HIPAA, etc.). Collaboration with Stakeholders: Work closely with business stakeholders, including analysts, developers, and executives, to understand data requirements and ensure that the architecture supports analytics and reporting needs. Collaborate with DevOps and engineering teams to optimize database performance and support large-scale data processing pipelines. Technology Leadership: Guide the selection of data technologies, including databases (SQL/NoSQL), data processing frameworks (Hadoop, Spark), cloud platforms (Azure is a must), and analytics tools. Stay updated on emerging data management technologies, trends, and best practices, and assess their potential application within the organization. Data Quality & Security: Define data quality standards and implement processes to ensure the accuracy, completeness, and consistency of data across all systems. Establish protocols for data security, encryption, and backup/recovery to protect data assets and ensure business continuity. Mentorship & Leadership: Lead and mentor data engineers, data modelers, and other technical staff in best practices for data architecture and management. Provide strategic guidance on data-related projects and initiatives, ensuring that all efforts are aligned with the enterprise data strategy. Required Skills & Experience: Extensive Data Architecture Expertise: Over 7 years of experience in data architecture, data modeling, and database management. Proficiency in designing and implementing relational (SQL) and non-relational (NoSQL) database solutions. Strong experience with data integration tools (Azure tools are a must, plus any other third-party tools), ETL/ELT processes, and data pipelines. Advanced Knowledge of Data Platforms: Expertise in the Azure cloud data platform is a must. Other platforms such as AWS (Redshift, S3), Azure (Data Lake, Synapse), and/or Google Cloud Platform (BigQuery, Dataproc) are a bonus. Experience with big data technologies (Hadoop, Spark) and distributed systems for large-scale data processing. Hands-on experience with data warehousing solutions and BI tools (e.g., Power BI, Tableau, Looker). Data Governance & Compliance: Strong understanding of data governance principles, data lineage, and data stewardship. Knowledge of industry standards and compliance requirements (e.g., GDPR, HIPAA, SOX) and the ability to architect solutions that meet these standards. Technical Leadership: Proven ability to lead data-driven projects, manage stakeholders, and drive data strategies across the enterprise. 
Strong programming skills in languages such as Python, SQL, R, or Scala. Certification: Azure Certified Solution Architect, Data Engineer, Data Scientist certifications are mandatory. Pre-Sales Responsibilities: Stakeholder Engagement: Work with product stakeholders to analyze functional and non-functional requirements, ensuring alignment with business objectives. Solution Development: Develop end-to-end solutions involving multiple products, ensuring security and performance benchmarks are established, achieved, and maintained. Proof of Concepts (POCs): Develop POCs to demonstrate the feasibility and benefits of proposed solutions. Client Communication: Communicate system requirements and solution architecture to clients and stakeholders, providing technical assistance and guidance throughout the pre-sales process. Technical Presentations: Prepare and deliver technical presentations to prospective clients, demonstrating how proposed solutions meet their needs and requirements. Additional Responsibilities: Stakeholder Collaboration: Engage with stakeholders to understand their requirements and translate them into effective technical solutions. Technology Leadership: Provide technical leadership and guidance to development teams, ensuring the use of best practices and innovative solutions. Integration Management: Oversee the integration of solutions with existing systems and third-party applications, ensuring seamless interoperability and data flow. Performance Optimization: Ensure solutions are optimized for performance, scalability, and security, addressing any technical challenges that arise. Quality Assurance: Establish and enforce quality assurance standards, conducting regular reviews and testing to ensure robustness and reliability. Documentation: Maintain comprehensive documentation of the architecture, design decisions, and technical specifications. Mentoring: Mentor fellow developers and team leads, fostering a collaborative and growth-oriented environment. Qualifications: Education: Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. Experience: Minimum of 7 years of experience in data architecture, with a focus on developing scalable and high-performance solutions. Technical Expertise: Proficient in architectural frameworks, cloud computing, database management, and web technologies. Analytical Thinking: Strong problem-solving skills, with the ability to analyze complex requirements and design scalable solutions. Leadership Skills: Demonstrated ability to lead and mentor technical teams, with excellent project management skills. Communication: Excellent verbal and written communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders.
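The posting distinguishes transactional (OLTP) sources from analytical (OLAP) targets and asks for hands-on ETL/ELT and Azure experience. As a rough illustration only — the JDBC connection string, table names, credentials, and storage path below are hypothetical, not from the posting — a minimal PySpark sketch of one such batch ELT step might look like this:

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical names throughout; adjust to the actual OLTP source and lake layout.
JDBC_URL = "jdbc:sqlserver://oltp-host:1433;databaseName=sales"              # assumed source
TARGET_PATH = "abfss://curated@datalake.dfs.core.windows.net/sales_orders"   # assumed ADLS path

spark = SparkSession.builder.appName("orders_elt").getOrCreate()

# Extract: pull the transactional orders table over JDBC.
orders = (
    spark.read.format("jdbc")
    .option("url", JDBC_URL)
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")       # credentials would normally come from a secret store
    .option("password", "***")
    .load()
)

# Transform: derive a date column suited to analytical (OLAP-style) partition pruning.
curated = orders.withColumn("order_date", F.to_date("order_ts"))

# Load: write a partitioned Parquet table that downstream BI tools can query efficiently.
(
    curated.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet(TARGET_PATH)
)
```

The same pattern generalizes to whatever integration tooling the team actually uses; only the extract and load endpoints change.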
Posted 2 months ago
5.0 - 10.0 years
4 - 9 Lacs
Hyderabad
Work from Office
Role & responsibilities Design, develop, and maintain WebFOCUS reports, dashboards, and portals. Collaborate with business users to gather requirements and convert them into scalable BI solutions. Optimize queries, reports, and portal performance to ensure a seamless user experience. Maintain and administer the WebFOCUS environment, including upgrades, patching, and troubleshooting. Integrate WebFOCUS with various data sources such as Oracle, SQL Server, and flat files. Ensure data security, access controls, and compliance standards are implemented in the BI solutions. Document system design, technical specifications, and user manuals. Participate in UAT, performance testing, and deployment processes. Work closely with data engineers, analysts, and business stakeholders to ensure alignment of deliverables. Preferred candidate profile Bachelor's degree in Computer Science, Information Systems, or a related field. Minimum of 5 years of hands-on experience with WebFOCUS (8.x or higher). Proficiency in InfoAssist, App Studio, ReportCaster, and WebFOCUS Designer. Strong SQL skills and understanding of data warehousing concepts. Experience working with OLAP, ETL pipelines, and data models. Solid understanding of HTML, JavaScript, and CSS for custom report development. Familiarity with version control systems (e.g., Git) and DevOps practices. Excellent analytical, problem-solving, and communication skills. Preferred Qualifications: Knowledge of REST APIs and WebFOCUS API integrations. Experience with cloud platforms like AWS, Azure, or GCP. Exposure to Agile/Scrum methodologies and collaborative tools like JIRA or Confluence. Certification in Information Builders WebFOCUS (preferred but not mandatory).
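The role emphasizes query optimization and strong SQL against sources such as SQL Server. One common tactic — sketched below with entirely hypothetical table and column names, and pyodbc standing in for whichever connector the environment actually uses — is to materialize a pre-aggregated summary that portal dashboards read instead of scanning detail rows:

```python
import pyodbc

# Assumed DSN; in practice this would point at the reporting database the BI layer queries.
conn = pyodbc.connect("DSN=reporting_db")
cur = conn.cursor()

# Rebuild a small summary table so dashboards avoid scanning the detail fact table.
cur.execute("TRUNCATE TABLE dbo.sales_summary;")
cur.execute("""
    INSERT INTO dbo.sales_summary (region, sale_month, total_amount, order_count)
    SELECT region,
           DATEFROMPARTS(YEAR(sale_date), MONTH(sale_date), 1) AS sale_month,
           SUM(amount)  AS total_amount,
           COUNT(*)     AS order_count
    FROM dbo.sales_detail
    GROUP BY region, DATEFROMPARTS(YEAR(sale_date), MONTH(sale_date), 1);
""")
conn.commit()
conn.close()
```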
Posted 2 months ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Sr. Data Engineer Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. What You Will Do Let’s do this. Let’s change the world. We are looking for a highly motivated, expert Senior Data Engineer who can own the design and development of complex data pipelines, solutions and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role calls for deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management. Roles & Responsibilities: Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric. Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture. Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency. Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance. Ensure data security, compliance, and role-based access control (RBAC) across data environments. Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets. Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring. Implement data virtualization techniques to provide seamless access to data across multiple storage systems. Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals. Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures. What We Expect Of You We are all different, yet we all use our unique contributions to serve patients. The [vital attribute] professional we seek is a [type of person] with these qualifications. 
Basic Qualifications: Master’s degree and 3 to 4+ years of Computer Science, IT or related field experience OR Bachelor’s degree and 5 to 8+ years of Computer Science, IT or related field experience OR Diploma and 10 to 12 years of Computer Science, IT or related field experience Must-Have Skills: Hands-on experience in data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies. Proficiency in workflow orchestration and performance tuning on big data processing. Strong understanding of AWS services. Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures. Ability to quickly learn, adapt, and apply new technologies. Strong problem-solving and analytical skills. Excellent communication and teamwork skills. Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices. Preferred Qualifications: Good to have deep expertise in Biotech & Pharma industries. Experience in writing APIs to make the data available to the consumers. Experienced with SQL/NoSQL databases, vector databases for large language models. Experienced with data modeling and performance tuning for both OLAP and OLTP databases. Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. AWS Certified Data Engineer preferred. Databricks Certificate preferred. Scaled Agile SAFe certification preferred. Soft Skills Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail-oriented. Strong presentation and public speaking skills. What You Can Expect Of Us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
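The responsibilities single out query-performance work via partitioning, caching, and efficient joins on large datasets. A minimal PySpark sketch of those ideas follows; the dataset paths and column names are illustrative assumptions, not details from the posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning_sketch").getOrCreate()

# Illustrative inputs: a large event table and a small reference table (paths are assumed).
events = spark.read.parquet("s3://example-bucket/events/")
sites = spark.read.parquet("s3://example-bucket/reference/sites/")

# Partition the big table by the column most queries filter on, so readers prune files.
(
    events.withColumn("event_date", F.to_date("event_ts"))
    .repartition("event_date")
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/events_partitioned/")
)

# Broadcast the small dimension to avoid a shuffle, and cache a hot intermediate result.
enriched = events.join(F.broadcast(sites), on="site_id", how="left").cache()
enriched.count()  # materializes the cache before repeated downstream use
```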
Posted 2 months ago
15.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description Skills: .NET CoE, Azure, Angular, Enterprise Architect. Years of Exp: 15+ years. Location: Pune, Noida, Nagpur, Bangalore. Requirements A proven track record of successful project/product implementation with a minimum of 5+ years of Continuous Integration, Continuous Delivery, Pair programming and Test Driven Development. Proficiency in one or more backend languages in the given order of preference (.NetCore / Python / Java / Spring / GoLang). Should have developed applications and services in the backend for 4+ years. Proficiency in one or more frontend frameworks or libraries (HTML5 / Angular / React / Vue). Should have developed frontend applications for 3+ years. 10+ years of experience developing on and architecting for both mobile and web platforms. The number of years of experience does not matter if the candidate can demonstrate the skills. Has hands-on experience architecting for and deploying to cloud – with at least one provider in the same order of preference: Azure, AWS, or GCP. Should be hands-on with DevOps and DevSecOps practices in on-prem as well as cloud deployments. Has hands-on Test Driven Development experience and is able to author Unit, Integration, Contract and Functional Tests. Good OO skills. Must demonstrate strong familiarity with design patterns and reactive programming. Ownership of technical designs, code development, and component test execution to demonstrate alignment to the functional specification. Using configuration management and integration/build automation tools to lead and deploy JS/Java/.Net/Python or any other code in any language or technology using CI/CD tools like Jenkins/TeamCity/Go.CD/Octopus, etc. Applying knowledge of common, relevant architecture frameworks in defining and evaluating application architectures. Performing code reviews and providing critical suggestions for fixes and improvements. Supporting issue analysis and fix activities during test phases, as well as production issue resolution. Fixing and performance tuning applications and services on-prem and/or on-cloud where the cloud provider can be any from AWS, Google or Azure or it can be a cloud-agnostic container-based deployment. Developing and demonstrating a broad set of technology skills across multi-platform technologies, microservice design patterns, Open Source libraries and frameworks, and technology architecture concepts as well as RDBMS database, OLAP, Data warehouses and/or NoSQL databases as SaaS or managed services. Collaborating within a project team of talented employees with diverse and complementary skills. Experience practising pair programming in a team is an advantage. Good communication and client-facing skills. Job responsibilities A proven track record of successful project/product implementation with a minimum of 5+ years of Continuous Integration, Continuous Delivery, Pair programming and Test Driven Development. Proficiency in one or more backend languages in the given order of preference (.NetCore / Python / Java / Spring / GoLang). Should have developed applications and services in the backend for 4+ years. Proficiency in one or more frontend frameworks or libraries (HTML5 / Angular / React / Vue). Should have developed frontend applications for 3+ years. 10+ years of experience developing on and architecting for both mobile and web platforms. The number of years of experience does not matter if the candidate can demonstrate the skills. 
Has hands-on experience architecting for and deploying to cloud – with at least one provider in the same order of preference: Azure, AWS, or GCP. Should be hands-on with DevOps and DevSecOps practices in on-prem as well as cloud deployments. Has hands-on Test Driven Development experience and is able to author Unit, Integration, Contract and Functional Tests. Good OO skills. Must demonstrate strong familiarity with design patterns and reactive programming. Ownership of technical designs, code development, and component test execution to demonstrate alignment to the functional specification. Using configuration management and integration/build automation tools to lead and deploy JS/Java/.Net/Python or any other code in any language or technology using CI/CD tools like Jenkins/TeamCity/Go.CD/Octopus, etc. Applying knowledge of common, relevant architecture frameworks in defining and evaluating application architectures. Performing code reviews and providing critical suggestions for fixes and improvements. Supporting issue analysis and fix activities during test phases, as well as production issue resolution. Fixing and performance tuning applications and services on-prem and/or on-cloud where the cloud provider can be any from AWS, Google or Azure or it can be a cloud-agnostic container-based deployment. Developing and demonstrating a broad set of technology skills across multi-platform technologies, microservice design patterns, Open Source libraries and frameworks, and technology architecture concepts as well as RDBMS database, OLAP, Data warehouses and/or NoSQL databases as SaaS or managed services. Collaborating within a project team of talented employees with diverse and complementary skills. Experience practising pair programming in a team is an advantage. Good communication and client-facing skills. What we offer Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally. Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. 
We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are cornerstones of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 2 months ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn’t changed — we’re here to stop breaches, and we’ve redefined modern security with the world’s most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day. We have 3.44 PB of RAM deployed across our fleet of C* servers - and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We’re also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We’re always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you. About The Role The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML Engineering and Insights Activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets including threat events collected via telemetry data, associated metadata, along with IT asset information, contextual information about threat exposure based on additional processing, etc. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyperscale Data Lakehouse, built and owned by the Data Platform team. The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. As an engineer in this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines that will be used for Data Preparation, Cataloging, Feature Engineering, Model Training, and Model Serving that influence critical business decisions. You’ll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets. What You’ll Do Help design, build, and facilitate adoption of a modern Data+ML platform Modularize complex ML code into standardized and repeatable components Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines Review code changes from data scientists and champion software development best practices Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment What You’ll Need B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7+ years of related experience; or M.S. 
with 5+ years of experience; or Ph.D. with 6+ years of experience. 3+ years of experience developing and deploying machine learning solutions to production. Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised / unsupervised approaches: how, why, and when labeled data is created and used. 3+ years of experience with ML platform tools like Jupyter Notebooks, NVidia Workbench, MLFlow, Ray, Vertex AI, etc. Experience building data platform product(s) or features with (one of) Apache Spark, Flink or comparable tools in GCP. Experience with Iceberg is highly desirable. Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.). Production experience with infrastructure-as-code tools such as Terraform, FluxCD. Expert-level experience with Python; Java/Scala exposure is recommended. Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal CrowdStrike tools. Expert-level experience with CI/CD frameworks such as GitHub Actions. Expert-level experience with containerization frameworks. Strong analytical and problem-solving skills, capable of working in a dynamic environment. Exceptional interpersonal and communication skills. Work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes. Experience With The Following Is Desirable Go Iceberg Pinot or other time-series/OLAP-style database Jenkins Parquet Protocol Buffers/GRPC Benefits Of Working At CrowdStrike Remote-friendly and flexible work culture Market leader in compensation and equity awards Comprehensive physical and mental wellness programs Competitive vacation and holidays for recharge Paid parental and adoption leaves Professional development opportunities for all employees regardless of level or role Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections Vibrant office culture with world-class amenities Great Place to Work Certified™ across the globe CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs--on valid job requirements. 
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
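The role calls for modularizing ML code into standardized, repeatable components and writing Python interfaces that simplify internal tooling for data scientists. One plausible shape for such an interface — purely a sketch, not CrowdStrike's actual platform code; all class and field names are hypothetical — is a tiny step contract that experimentation pipelines can compose:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class PipelineStep(ABC):
    """Minimal contract every pipeline component implements."""

    @abstractmethod
    def run(self, context: Dict[str, Any]) -> Dict[str, Any]:
        """Consume the shared context, add this step's outputs, and return it."""


class FeatureBuilder(PipelineStep):
    def run(self, context: Dict[str, Any]) -> Dict[str, Any]:
        # Hypothetical: derive trivial features from raw records already in the context.
        records = context["raw_records"]
        context["features"] = [{"length": len(r)} for r in records]
        return context


class Trainer(PipelineStep):
    def run(self, context: Dict[str, Any]) -> Dict[str, Any]:
        # Stand-in for a real training call (which would also log to an experiment tracker).
        context["model"] = {"n_examples": len(context["features"])}
        return context


def run_pipeline(steps: List[PipelineStep], context: Dict[str, Any]) -> Dict[str, Any]:
    # Execute each standardized component in order, threading the shared context through.
    for step in steps:
        context = step.run(context)
    return context


if __name__ == "__main__":
    result = run_pipeline([FeatureBuilder(), Trainer()], {"raw_records": ["a", "bb", "ccc"]})
    print(result["model"])  # {'n_examples': 3}
```

Keeping every step behind one small interface is what lets a platform team review, test, and orchestrate contributions from many data scientists uniformly.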
Posted 2 months ago
7.0 years
0 Lacs
Kerala, India
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn’t changed — we’re here to stop breaches, and we’ve redefined modern security with the world’s most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day. We have 3.44 PB of RAM deployed across our fleet of C* servers - and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We’re also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We’re always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you. About The Role The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML Engineering and Insights Activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets including threat events collected via telemetry data, associated metadata, along with IT asset information, contextual information about threat exposure based on additional processing, etc. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyperscale Data Lakehouse, built and owned by the Data Platform team. The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. As an engineer in this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines that will be used for Data Preparation, Cataloging, Feature Engineering, Model Training, and Model Serving that influence critical business decisions. You’ll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets. What You’ll Do Help design, build, and facilitate adoption of a modern Data+ML platform Modularize complex ML code into standardized and repeatable components Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines Review code changes from data scientists and champion software development best practices Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment What You’ll Need B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7+ years of related experience; or M.S. 
with 5+ years of experience; or Ph.D. with 6+ years of experience. 3+ years of experience developing and deploying machine learning solutions to production. Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised / unsupervised approaches: how, why, and when labeled data is created and used. 3+ years of experience with ML platform tools like Jupyter Notebooks, NVidia Workbench, MLFlow, Ray, Vertex AI, etc. Experience building data platform product(s) or features with (one of) Apache Spark, Flink or comparable tools in GCP. Experience with Iceberg is highly desirable. Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.). Production experience with infrastructure-as-code tools such as Terraform, FluxCD. Expert-level experience with Python; Java/Scala exposure is recommended. Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal CrowdStrike tools. Expert-level experience with CI/CD frameworks such as GitHub Actions. Expert-level experience with containerization frameworks. Strong analytical and problem-solving skills, capable of working in a dynamic environment. Exceptional interpersonal and communication skills. Work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes. Experience With The Following Is Desirable Go Iceberg Pinot or other time-series/OLAP-style database Jenkins Parquet Protocol Buffers/GRPC Benefits Of Working At CrowdStrike Remote-friendly and flexible work culture Market leader in compensation and equity awards Comprehensive physical and mental wellness programs Competitive vacation and holidays for recharge Paid parental and adoption leaves Professional development opportunities for all employees regardless of level or role Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections Vibrant office culture with world-class amenities Great Place to Work Certified™ across the globe CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs--on valid job requirements. 
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
Posted 2 months ago
4.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: ETL Test Engineer Experience range: 4-10 years Location: Hyderabad ONLY Job description: 1. Min 4 to 6 yrs of experience in ETL Testing. 2. SQL - Expert-level knowledge of core SQL concepts and querying. 3. ETL Automation - Experience in Datagap; good to have experience in tools like Informatica, Talend and Ab Initio. 4. Experience in query optimization, stored procedures/views and functions. 5. Strong familiarity with data warehouse projects and data modeling. 6. Understanding of BI concepts - OLAP vs OLTP and deploying the applications on cloud servers. 7. Preferably a good understanding of design, development, and enhancement of SQL Server DW using tools (SSIS, SSMS, Power BI/Cognos/Informatica, etc.). 8. Azure DevOps/JIRA - Hands-on experience with any test management tool, preferably ADO or JIRA. 9. Agile concepts - Good experience in understanding agile methodology (Scrum, Lean, etc.). 10. Communication - Good communication skills to understand and collaborate with all the stakeholders within the project.
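A staple of the ETL testing described here is reconciling a loaded target table against its source with SQL: row counts, sums, and duplicate checks. Sketched below with Python's built-in sqlite3 purely for illustration — in practice the same queries would run against the actual staging and warehouse tables through whatever connector or tool (Datagap, etc.) the project uses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Toy source and target tables standing in for a real staging table and warehouse table.
cur.executescript("""
    CREATE TABLE src_orders (order_id INTEGER, amount REAL);
    CREATE TABLE tgt_orders (order_id INTEGER, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO tgt_orders VALUES (1, 10.0), (2, 20.0), (3, 30.0);
""")

def one(sql: str):
    """Run a query and return its single scalar result."""
    cur.execute(sql)
    return cur.fetchone()[0]

# Row-count and amount reconciliation between source and target.
assert one("SELECT COUNT(*) FROM src_orders") == one("SELECT COUNT(*) FROM tgt_orders")
assert one("SELECT SUM(amount) FROM src_orders") == one("SELECT SUM(amount) FROM tgt_orders")

# Duplicate-key check on the target.
assert one("""SELECT COUNT(*) FROM (
                  SELECT order_id FROM tgt_orders GROUP BY order_id HAVING COUNT(*) > 1
              )""") == 0

print("Reconciliation checks passed")
```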
Posted 2 months ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
The Opportunity We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic data team in Gurgaon. The ideal candidate will have a strong background in designing, building, and maintaining robust, scalable, and efficient data pipelines and data warehousing solutions. You will play a crucial role in transforming raw data into actionable insights, enabling data-driven decision-making across the organization. Responsibilities: Data Pipeline Development: Design, develop, construct, test, and maintain highly scalable data pipelines using various ETL/ELT tools and programming languages (e.g., Python, Scala, Java). Data Warehousing: Build and optimize data warehouse solutions (e.g., Snowflake, Redshift, BigQuery, Databricks) to support reporting, analytics, and machine learning initiatives. Data Modeling: Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and design optimal data models (dimensional, relational, etc.). Performance Optimization: Identify and implement solutions for data quality issues, data pipeline performance bottlenecks, and data governance challenges. Cloud Technologies: Work extensively with cloud-based data platforms (AWS, Azure, GCP) and their respective data services (e.g., S3, EC2, Lambda, Glue, Data Factory, Azure Synapse, GCS, Dataflow, BigQuery). Automation & Monitoring: Implement automation for data pipeline orchestration, monitoring, and alerting to ensure data reliability and availability. Mentorship: Mentor junior data engineers, provide technical guidance, and contribute to best practices and architectural decisions within the data team. Collaboration: Work closely with cross-functional teams, including product, engineering, and business intelligence, to deliver data solutions that meet business needs. Documentation: Create and maintain comprehensive documentation for data pipelines, data models, and data flows. Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related quantitative field. 5+ years of professional experience in data engineering, with a strong focus on building and optimizing data pipelines and data warehousing solutions. Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Scala, Java). Python is highly preferred. Extensive experience with SQL and relational databases. Demonstrated experience with cloud data platforms (AWS, Azure, or GCP) and their relevant data services. Strong understanding of data warehousing concepts (e.g., Kimball methodology, OLAP, OLTP) and experience with data modeling techniques. Experience with big data technologies (e.g., Apache Spark, Hadoop, Kafka). Familiarity with version control systems (e.g., Git). Preferred Skills: Experience with specific data warehousing solutions like Snowflake, Redshift, or Google BigQuery. Knowledge of containerization technologies (Docker, Kubernetes). Experience with CI/CD pipelines for data solutions. Familiarity with data visualization tools (e.g., Tableau, Power BI, Looker). Understanding of machine learning concepts and how data engineering supports ML workflows. Excellent problem-solving, analytical, and communication skills. Ability to work independently and as part of a collaborative team in a fast-paced environment. (ref:hirist.tech)
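The responsibilities lean heavily on pipeline orchestration, monitoring, and alerting; no specific orchestrator is named, so the sketch below assumes Apache Airflow and uses placeholder task logic to show the typical extract → transform → load layout of a daily DAG:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder callables; real tasks would trigger Glue jobs, Spark submits, warehouse loads, etc.
def extract(**_):
    print("pull raw files from the source system")

def transform(**_):
    print("clean and conform the raw data")

def load(**_):
    print("publish curated tables to the warehouse")

with DAG(
    dag_id="daily_sales_pipeline",        # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["example"],
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Declare dependencies so Airflow can schedule, retry, and alert per task.
    t_extract >> t_transform >> t_load
```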
Posted 2 months ago
4.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role: Data Engineer. Location: Bengaluru, Karnataka, India. Type: Contract / Freelance. About The Role We're looking for an experienced Data Engineer on Contract (4-8 years) to join our data team. You'll be key in building and maintaining our data systems on AWS. You'll use your strong skills in big data tools and cloud technology to help our analytics team get valuable insights from our data. You'll be in charge of the whole process of our data pipelines, making sure the data is good, reliable, and fast. What You'll Do Design and build efficient data pipelines using Spark / PySpark / Scala. Manage complex data processes with Airflow, creating and fixing any issues with the workflows (DAGs). Clean, transform, and prepare data for analysis. Use Python for data tasks, automation, and building tools. Work with AWS services like S3, Redshift, EMR, Glue, and Athena to manage our data infrastructure. Collaborate closely with the Analytics team to understand what data they need and provide solutions. Help develop and maintain our Node.js backend, using Typescript, for data services. Use YAML to manage the settings for our data tools. Set up and manage automated deployment processes (CI/CD) using GitHub Actions. Monitor and fix problems in our data pipelines to keep them running smoothly. Implement checks to ensure our data is accurate and consistent. Help design and build data warehouses and data lakes. Use SQL extensively to query and work with data in different systems. Work with streaming data using technologies like Kafka for real-time data processing. Stay updated on the latest data engineering technologies. Guide and mentor junior data engineers. Help create data management rules and procedures. What You'll Need Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 4-8 years of experience as a Data Engineer. Strong skills in Spark and Scala for handling large amounts of data. Good experience with Airflow for managing data workflows and understanding DAGs. Solid understanding of how to transform and prepare data. Strong programming skills in Python for data tasks and automation. Proven experience working with AWS cloud services (S3, Redshift, EMR, Glue, IAM, EC2, and Athena). Experience building data solutions for Analytics teams. Familiarity with Node.js for backend development. Experience with Typescript for backend development is a plus. Experience using YAML for configuration management. Hands-on experience with GitHub Actions for automated deployment (CI/CD). Good understanding of data warehousing concepts. Strong database skills (OLAP/OLTP). Excellent command of SQL for data querying and manipulation. Experience with stream processing using Kafka or similar technologies. Excellent problem-solving, analytical, and communication skills. Ability to work well independently and as part of a team. (ref:hirist.tech)
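Since this role pairs PySpark with Kafka for real-time processing, here is a compact Structured Streaming sketch; the broker address, topic, and S3 paths are placeholders rather than details from the posting, and running it requires the Spark Kafka connector package on the cluster:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_stream_sketch").getOrCreate()

# Read a Kafka topic as a streaming DataFrame; broker and topic names are assumed.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka delivers key/value as bytes; cast the payload to a string for downstream parsing.
parsed = events.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Land micro-batches to object storage with a checkpoint so the stream can resume safely.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/streams/orders/")            # placeholder path
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/orders/")
    .start()
)

query.awaitTermination()
```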
Posted 2 months ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Experience: 7+ Years. Location: Noida. Key Responsibilities: Data Architecture Design: Design, develop, and maintain the enterprise data architecture, including data models, database schemas, and data flow diagrams. Develop a data strategy and roadmap that aligns with the business objectives and ensures the scalability of data systems. Architect both transactional (OLTP) and analytical (OLAP) databases, ensuring optimal performance and data consistency. Data Integration & Management: Oversee the integration of disparate data sources into a unified data platform, leveraging ETL/ELT processes and data integration tools. Design and implement data warehousing solutions, data lakes, and/or data marts that enable efficient storage and retrieval of large datasets. Ensure proper data governance, including the definition of data ownership, security, and privacy controls in accordance with compliance standards (GDPR, HIPAA, etc.). Collaboration with Stakeholders: Work closely with business stakeholders, including analysts, developers, and executives, to understand data requirements and ensure that the architecture supports analytics and reporting needs. Collaborate with DevOps and engineering teams to optimize database performance and support large-scale data processing pipelines. Technology Leadership: Guide the selection of data technologies, including databases (SQL/NoSQL), data processing frameworks (Hadoop, Spark), cloud platforms (Azure is a must), and analytics tools. Stay updated on emerging data management technologies, trends, and best practices, and assess their potential application within the organization. Data Quality & Security: Define data quality standards and implement processes to ensure the accuracy, completeness, and consistency of data across all systems. Establish protocols for data security, encryption, and backup/recovery to protect data assets and ensure business continuity. Mentorship & Leadership: Lead and mentor data engineers, data modelers, and other technical staff in best practices for data architecture and management. Provide strategic guidance on data-related projects and initiatives, ensuring that all efforts are aligned with the enterprise data strategy. Required Skills & Experience: Extensive Data Architecture Expertise: Over 7 years of experience in data architecture, data modeling, and database management. Proficiency in designing and implementing relational (SQL) and non-relational (NoSQL) database solutions. Strong experience with data integration tools (Azure Tools are a must + any other third party tools), ETL/ELT processes, and data pipelines. Advanced Knowledge of Data Platforms: Expertise in Azure cloud data platform is a must. Other platforms such as AWS (Redshift, S3), Azure (Data Lake, Synapse), and/or Google Cloud Platform (BigQuery, Dataproc) is a bonus. Experience with big data technologies (Hadoop, Spark) and distributed systems for large-scale data processing. Hands-on experience with data warehousing solutions and BI tools (e.g., Power BI, Tableau, Looker). Data Governance & Compliance: Strong understanding of data governance principles, data lineage, and data stewardship. Knowledge of industry standards and compliance requirements (e.g., GDPR, HIPAA, SOX) and the ability to architect solutions that meet these standards. Technical Leadership: Proven ability to lead data-driven projects, manage stakeholders, and drive data strategies across the enterprise. 
Strong programming skills in languages such as Python, SQL, R, or Scala. Certification: Azure Certified Solution Architect, Data Engineer, Data Scientist certifications are mandatory. Pre-Sales Responsibilities: Stakeholder Engagement: Work with product stakeholders to analyze functional and non-functional requirements, ensuring alignment with business objectives. Solution Development: Develop end-to-end solutions involving multiple products, ensuring security and performance benchmarks are established, achieved, and maintained. Proof of Concepts (POCs): Develop POCs to demonstrate the feasibility and benefits of proposed solutions. Client Communication: Communicate system requirements and solution architecture to clients and stakeholders, providing technical assistance and guidance throughout the pre-sales process. Technical Presentations: Prepare and deliver technical presentations to prospective clients, demonstrating how proposed solutions meet their needs and requirements. Additional Responsibilities: Stakeholder Collaboration: Engage with stakeholders to understand their requirements and translate them into effective technical solutions. Technology Leadership: Provide technical leadership and guidance to development teams, ensuring the use of best practices and innovative solutions. Integration Management: Oversee the integration of solutions with existing systems and third-party applications, ensuring seamless interoperability and data flow. Performance Optimization: Ensure solutions are optimized for performance, scalability, and security, addressing any technical challenges that arise. Quality Assurance: Establish and enforce quality assurance standards, conducting regular reviews and testing to ensure robustness and reliability. Documentation: Maintain comprehensive documentation of the architecture, design decisions, and technical specifications. Mentoring: Mentor fellow developers and team leads, fostering a collaborative and growth-oriented environment. Qualifications: Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Experience: Minimum of 7 years of experience in data architecture, with a focus on developing scalable and high-performance solutions. Technical Expertise: Proficient in architectural frameworks, cloud computing, database management, and web technologies. Analytical Thinking: Strong problem-solving skills, with the ability to analyze complex requirements and design scalable solutions. Leadership Skills: Demonstrated ability to lead and mentor technical teams, with excellent project management skills. Communication: Excellent verbal and written communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders. (ref:hirist.tech)
Posted 2 months ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.
About The Role
Role Description: Let’s do this. Let’s change the world. We are seeking a skilled Data Engineer to join our Enterprise Data RunOps Team. This role will focus on the development, support, and optimization of data pipelines and operational workflows that power our enterprise data teams, enabling seamless data access, integration, and governance across the organization. The ideal candidate will be a hands-on engineer with a deep understanding of modern data architectures, strong experience in cloud-native technologies, and a passion for delivering reliable, well-governed, and high-performing data infrastructure in a regulated biotech environment.
Roles & Responsibilities:
Design, build, and support data ingestion, transformation, and delivery pipelines across structured and unstructured sources within the enterprise data engineering environment.
Manage and monitor day-to-day operations of the data engineering environment, ensuring high availability, performance, and data integrity.
Collaborate with data architects, data governance, platform engineering, and business teams to support data integration use cases across R&D, Clinical, Regulatory, and Commercial functions.
Integrate data from laboratory systems, clinical platforms, regulatory systems, and third-party data sources into enterprise data repositories.
Implement and maintain metadata capture, data lineage, and data quality checks across pipelines to meet governance and compliance requirements.
Support real-time and batch data flows using technologies such as Databricks, Kafka, Delta Lake, or similar.
Work within GxP-aligned environments, ensuring compliance with data privacy, audit, and quality control standards.
Partner with data stewards and business analysts to support self-service data access, reporting, and analytics enablement.
Maintain operational documentation, runbooks, and process automation scripts for continuous improvement of data fabric operations.
Participate in incident resolution and root cause analysis, ensuring timely and effective remediation of data pipeline issues.
Create documentation, playbooks, and best practices for metadata ingestion, data lineage, and catalog usage.
Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions.
Must-Have Skills:
Build and maintain data pipelines to ingest and update metadata into enterprise data catalog platforms in biotech, life sciences, or pharma.
Hands-on experience in data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
Proficiency in workflow orchestration and performance tuning for big data processing.
Experience in data engineering, data operations, or related roles, with at least 2+ years in life sciences, biotech, or pharmaceutical environments.
Experience with cloud platforms (e.g., AWS, Azure, or GCP) for data pipeline and storage solutions.
Understanding of data governance frameworks, metadata management, and data lineage tracking.
Strong problem-solving skills, attention to detail, and ability to manage multiple priorities in a dynamic environment.
Effective communication and collaboration skills to work across technical and business stakeholders.
Strong problem-solving and analytical skills.
Excellent communication and teamwork skills.
Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.
Good-to-Have Skills:
Data engineering experience in the biotechnology or pharma industry.
Experience in writing APIs to make data available to consumers.
Experience with SQL/NoSQL databases and vector databases for large language models.
Experience with data modeling and performance tuning for both OLAP and OLTP databases.
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
Education and Professional Certifications
Master’s degree and 3 to 4+ years of Computer Science, IT or related field experience
Bachelor’s degree and 5 to 8+ years of Computer Science, IT or related field experience
AWS Certified Data Engineer preferred
Databricks Certificate preferred
Scaled Agile SAFe certification preferred
Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Ability to learn quickly, be organized and detail oriented.
Strong presentation and public speaking skills.
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
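As a rough illustration of the pipeline work this posting describes (batch ingestion, lineage metadata, basic quality rules), here is a minimal PySpark sketch. It assumes a Spark environment such as Databricks; the paths, column names, and source-system label are hypothetical, and on Databricks the write would typically target a Delta table rather than plain Parquet.

```python
# Illustrative PySpark batch ingestion: read a raw extract, standardize it,
# stamp basic lineage metadata, and write a partitioned output dataset.
# Paths and column names are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clinical_ingest_example").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("/landing/clinical/lab_results/2025-06-01/*.csv"))

curated = (raw
           .withColumn("ingest_ts", F.current_timestamp())       # lineage: when loaded
           .withColumn("source_system", F.lit("lab_system_x"))   # lineage: where from
           .dropDuplicates(["sample_id", "test_code"])           # simple data-quality rule
           .filter(F.col("result_value").isNotNull()))

# On Databricks this would usually be .format("delta").saveAsTable(...);
# plain Parquet keeps the sketch portable.
(curated.write
 .mode("overwrite")
 .partitionBy("source_system")
 .parquet("/curated/clinical/lab_results"))
```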
Posted 2 months ago
8.0 - 13.0 years
10 - 15 Lacs
Bengaluru
Work from Office
What you'll do
DocuSign is seeking a talented and results-oriented Data Engineer to focus on delivering trusted data to the business. As a member of the Global Data Analytics (GDA) Team, the Data Engineer leverages a variety of technologies to design, develop and deliver new features in addition to loading, transforming and preparing data sets of all shapes and sizes for teams around the world. During a typical day, the Engineer will spend time developing new features to analyze data, develop solutions and load tested data sets into the Snowflake Enterprise Data Warehouse. The ideal candidate will demonstrate a positive can-do attitude, a passion for learning and growing, and the drive to work hard and get the job done in a timely fashion. This individual contributor position provides plenty of room to grow -- a mix of challenging assignments, a chance to work with a world-class team, and the opportunity to use innovative technologies such as AWS, Snowflake, dbt, Airflow and Matillion. This position is an individual contributor role reporting to the Manager, Data Engineering.
Responsibility
Design, develop and maintain scalable and efficient data pipelines
Analyze and develop data quality and validation procedures
Work with stakeholders to understand the data requirements and provide solutions
Troubleshoot and resolve data issues in a timely manner
Learn and leverage available AI tools for increased developer productivity
Collaborate with cross-functional teams to ingest data from various sources
Evaluate and improve data architecture and processes continuously
Own, monitor, and improve solutions to ensure SLAs are met
Develop and maintain documentation for data infrastructure and processes
Execute projects using Agile Scrum methodologies and be a team player
Job Designation
Hybrid: Employee divides their time between in-office and remote work. Access to an office location is required. (Frequency: Minimum 2 days per week; may vary by team but will be a weekly in-office expectation.)
Positions at Docusign are assigned a job designation of either In Office, Hybrid or Remote and are specific to the role/job. Preferred job designations are not guaranteed when changing positions within Docusign. Docusign reserves the right to change a position's job designation depending on business needs and as permitted by local law.
What you bring
Basic
Bachelor's Degree in Computer Science, Data Analytics, Information Systems, etc.
Experience developing data pipelines in one of the following languages: Python or Java
8+ years of dimensional and relational data modeling experience
Excellent SQL and database management skills
Preferred
8+ years in data warehouse engineering (OLAP): Snowflake, BigQuery, Teradata
8+ years with transactional databases (OLTP): Oracle, SQL Server, MySQL
8+ years with big data, Hadoop, Data Lake, Spark in a cloud environment (AWS)
8+ years with commercial ETL tools: dbt, Matillion, etc.
8+ years delivering ETL solutions from source systems, databases, APIs, flat files, JSON
Experience developing Entity Relationship Diagrams with Erwin, SQLDBM, or equivalent
Experience working with job scheduling and monitoring systems (Airflow, Datadog, AWS SNS)
Familiarity with Gen AI tools like Git Copilot and dbt copilot. Good understanding of Gen AI application frameworks.
Knowledge on any agentic platforms
Experience building BI dashboards with tools like Tableau
Experience in the financial domain, master data management (MDM), sales and marketing, accounts payable, accounts receivable, invoicing
Experience managing work assignments using tools like Jira and Confluence
Experience with Scrum/Agile methodologies
Ability to work independently and as part of a team
Excellent analytical, problem-solving and communication skills
Life at Docusign
Working here
Docusign is committed to building trust and making the world more agreeable for our employees, customers and the communities in which we live and work. You can count on us to listen, be honest, and try our best to do what's right, every day. At Docusign, everything is equal. We each have a responsibility to ensure every team member has an equal opportunity to succeed, to be heard, to exchange ideas openly, to build lasting relationships, and to do the work of their life. Best of all, you will be able to feel deep pride in the work you do, because your contribution helps us make the world better than we found it. And for that, you'll be loved by us, our customers, and the world in which we live.
Accommodation
Docusign is committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures; contact us for assistance.
Applicant and Candidate Privacy Notice
#LI-Hybrid #LI-SA4
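To give a concrete flavour of the orchestration stack named in this posting (Airflow alongside dbt and Snowflake), below is a minimal sketch of a daily Airflow DAG, assuming Airflow 2.x. The dag_id, file paths, dbt project location, and model selector are all hypothetical.

```python
# Illustrative Airflow 2.x DAG: land a daily extract, then run dbt models.
# Names and paths are hypothetical examples only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.operators.bash import BashOperator


def extract_orders(ds, **_):
    # Placeholder extract step; in practice this would call an API or copy
    # files to the stage that the warehouse load reads from.
    print(f"extracting orders for {ds}")


with DAG(
    dag_id="orders_daily_example",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics --select orders",
    )
    extract >> transform
```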
Posted 2 months ago
5.0 - 10.0 years
10 - 15 Lacs
Bengaluru
Work from Office
What you'll do
Docusign is seeking a talented and results-oriented Data Engineer to focus on delivering trusted data to the business. As a member of the Global Data Analytics (GDA) Team, the Data Engineer leverages a variety of technologies to design, develop and deliver new features in addition to loading, transforming and preparing data sets of all shapes and sizes for teams around the world. During a typical day, the Engineer will spend time developing new features to analyze data, develop solutions and load tested data sets into the Snowflake Enterprise Data Warehouse. The ideal candidate will demonstrate a positive can-do attitude, a passion for learning and growing, and the drive to work hard and get the job done in a timely fashion. This individual contributor position provides plenty of room to grow -- a mix of challenging assignments, a chance to work with a world-class team, and the opportunity to use innovative technologies such as AWS, Snowflake, dbt, Airflow and Matillion. This position is an individual contributor role reporting to the Manager, Data Engineering.
Responsibility
Design, develop and maintain scalable and efficient data pipelines
Analyze and develop data quality and validation procedures
Work with stakeholders to understand the data requirements and provide solutions
Troubleshoot and resolve data issues in a timely manner
Learn and leverage available AI tools for increased developer productivity
Collaborate with cross-functional teams to ingest data from various sources
Evaluate and improve data architecture and processes continuously
Own, monitor, and improve solutions to ensure SLAs are met
Develop and maintain documentation for data infrastructure and processes
Execute projects using Agile Scrum methodologies and be a team player
Job Designation
Hybrid: Employee divides their time between in-office and remote work. Access to an office location is required. (Frequency: Minimum 2 days per week; may vary by team but will be a weekly in-office expectation.)
Positions at Docusign are assigned a job designation of either In Office, Hybrid or Remote and are specific to the role/job. Preferred job designations are not guaranteed when changing positions within Docusign. Docusign reserves the right to change a position's job designation depending on business needs and as permitted by local law.
What you bring
Basic
Bachelor's Degree in Computer Science, Data Analytics, Information Systems, etc.
Experience developing data pipelines in one of the following languages: Python or Java
5+ years of dimensional and relational data modeling experience
Excellent SQL and database management skills
Preferred
5+ years in data warehouse engineering (OLAP): Snowflake, BigQuery, Teradata, Redshift
5+ years with transactional databases (OLTP): Oracle, SQL Server, MySQL
5+ years with big data, Hadoop, Data Lake, Spark in a cloud environment (AWS)
5+ years with commercial ETL tools: dbt, Matillion, etc.
5+ years delivering ETL solutions from source systems, databases, APIs, flat files, JSON
Experience developing Entity Relationship Diagrams with Erwin, SQLDBM, or equivalent
Experience working with job scheduling and monitoring systems (Airflow, Datadog, AWS SNS)
Familiarity with Gen AI tools like Git Copilot and dbt copilot. Good understanding of Gen AI application frameworks.
Knowledge on any agentic platforms
Experience building BI dashboards with tools like Tableau
Experience in the financial domain, sales and marketing, accounts payable, accounts receivable, invoicing
Experience managing work assignments using tools like Jira and Confluence
Experience with Scrum/Agile methodologies
Ability to work independently and as part of a team
Excellent analytical, problem-solving and communication skills
Life at Docusign
Working here
Docusign is committed to building trust and making the world more agreeable for our employees, customers and the communities in which we live and work. You can count on us to listen, be honest, and try our best to do what's right, every day. At Docusign, everything is equal. We each have a responsibility to ensure every team member has an equal opportunity to succeed, to be heard, to exchange ideas openly, to build lasting relationships, and to do the work of their life. Best of all, you will be able to feel deep pride in the work you do, because your contribution helps us make the world better than we found it. And for that, you'll be loved by us, our customers, and the world in which we live.
Accommodation
Docusign is committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures; contact us for assistance.
Applicant and Candidate Privacy Notice
#LI-Hybrid #LI-SA4
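As a small illustration of the "data quality and validation procedures" this role calls for, here is a standalone Python sketch of a pre-load quality gate; the file name, key column, and thresholds are hypothetical.

```python
# Illustrative data-quality gate of the kind run before loading a warehouse
# table: required fields present, no duplicate keys, row count above a floor.
import csv
from collections import Counter

REQUIRED = ["order_id", "order_date", "amount"]  # hypothetical required columns
MIN_ROWS = 1


def validate(path: str) -> list[str]:
    errors = []
    with open(path, newline="") as fh:
        rows = list(csv.DictReader(fh))
    if len(rows) < MIN_ROWS:
        errors.append(f"row count {len(rows)} below minimum {MIN_ROWS}")
    for col in REQUIRED:
        missing = sum(1 for r in rows if not (r.get(col) or "").strip())
        if missing:
            errors.append(f"{missing} rows missing required column '{col}'")
    dupes = [k for k, n in Counter(r.get("order_id") for r in rows).items() if n > 1]
    if dupes:
        errors.append(f"duplicate order_id values: {dupes[:5]}")
    return errors


if __name__ == "__main__":
    problems = validate("orders_extract.csv")  # hypothetical extract file
    print("PASS" if not problems else "FAIL: " + "; ".join(problems))
```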
Posted 2 months ago
2.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Job Description
Senior Associate, Full Stack Engineer
At BNY, our culture empowers you to grow and succeed. As a leading global financial services company at the center of the world’s financial system, we touch nearly 20% of the world’s investible assets. Every day around the globe, our 50,000+ employees bring the power of their perspective to the table to create solutions with our clients that benefit businesses, communities and people everywhere. We continue to be a leader in the industry, awarded as a top home for innovators and for creating an inclusive workplace. Through our unique ideas and talents, together we help make money work for the world. This is what it’s all about.
We’re seeking a future team member for the role of Senior Associate, Full Stack Engineer to join our Compliance Engineering team. This role is in Pune, MH – HYBRID.
In this role, you’ll make an impact in the following ways:
Overall 2-6 years of experience with ETL, databases, data warehouses, etc.
In-depth technical knowledge as a Pentaho ETL developer; comfortable working with large internal and external data sets.
Experience with OLAP, OLTP, data warehousing and data model concepts.
Good experience with Vertica, Oracle, Denodo and similar databases.
Experienced in design, development, and implementation of large-scale projects in financial industries using data warehousing ETL tools (Pentaho).
Experience in creating ETL transformations and jobs using the Pentaho Kettle Spoon designer and Pentaho Data Integration Designer, and scheduling them.
Proficient in writing SQL statements, complex stored procedures, dynamic SQL queries, batches, scripts, functions, triggers, views, cursors and query optimization.
Excellent data analysis skills.
Working knowledge of source control tools such as GitLab.
Good analytical skills.
Good understanding of PDI architecture.
Good experience in Splunk is a plus.
To be successful in this role, we’re seeking the following:
Graduates of bachelor’s degree programs in business, a related discipline, or equivalent work experience.
Relevant domain expertise in the alternative investment services domain or the capital markets and financial services domain is required.
At BNY, our culture speaks for itself. Here are a few of our awards:
America’s Most Innovative Companies, Fortune, 2024
World’s Most Admired Companies, Fortune, 2024
Human Rights Campaign Foundation, Corporate Equality Index, 100% score, 2023-2024
Best Places to Work for Disability Inclusion, Disability: IN – 100% score, 2023-2024
“Most Just Companies”, Just Capital and CNBC, 2024
Dow Jones Sustainability Indices, Top performing company for Sustainability, 2024
Bloomberg’s Gender Equality Index (GEI), 2023
Our Benefits And Rewards
BNY offers highly competitive compensation, benefits, and wellbeing programs rooted in a strong culture of excellence and our pay-for-performance philosophy. We provide access to flexible global resources and tools for your life’s journey. Focus on your health, foster your personal resilience, and reach your financial goals as a valued member of our team, along with generous paid leaves, including paid volunteer time, that can support you and your family through moments that matter.
BNY is an Equal Employment Opportunity/Affirmative Action Employer - Underrepresented racial and ethnic groups/Females/Individuals with Disabilities/Protected Veterans.
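For illustration, much of the ETL logic this role describes (incremental loads written as transformations or stored procedures) reduces to idempotent upserts. The sketch below uses Python with SQLite so it runs anywhere; the table and columns are hypothetical, and on Oracle or Vertica the equivalent would normally be a MERGE statement.

```python
# Illustrative idempotent upsert: the kind of incremental-load logic an ETL
# transformation or stored procedure typically implements. SQLite's
# ON CONFLICT clause stands in for MERGE; names are hypothetical examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE dim_account (
    account_id   TEXT PRIMARY KEY,
    account_name TEXT,
    status       TEXT,
    last_updated TEXT
)""")

incoming = [
    ("A001", "Alpha Fund", "ACTIVE", "2025-06-01"),
    ("A002", "Beta Fund", "CLOSED", "2025-06-01"),
]

upsert_sql = """
INSERT INTO dim_account (account_id, account_name, status, last_updated)
VALUES (?, ?, ?, ?)
ON CONFLICT(account_id) DO UPDATE SET
    account_name = excluded.account_name,
    status       = excluded.status,
    last_updated = excluded.last_updated
"""

# Running the same batch twice leaves the table unchanged, which is what
# makes the load safe to re-run after a failure.
conn.executemany(upsert_sql, incoming)
conn.executemany(upsert_sql, incoming)

print(conn.execute("SELECT * FROM dim_account ORDER BY account_id").fetchall())
```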
Posted 2 months ago
4.0 - 6.0 years
3 - 9 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-216342 LOCATION: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: May. 30, 2025 CATEGORY: Information Systems Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lay within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Senior Data Engineer What you will do Let’s do this. Let’s change the world. In this vital role We are looking for highly motivated expert Senior Data Engineer who can own the design & development of complex data pipelines, solutions and frameworks. The ideal candidate will be responsible to design, develop, and optimize data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role prefers deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management. Roles & Responsibilities: Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric. Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture. Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency. Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance. Ensure data security, compliance, and role-based access control (RBAC) across data environments. Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets. Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring. Implement data virtualization techniques to provide seamless access to data across multiple storage systems. Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals. Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architecture. What we expect of you We are all different, yet we all use our unique contributions to serve patients. 
Basic Qualifications:
Master’s degree and 4 to 6 years of Computer Science, IT or related field experience OR Bachelor’s degree and 6 to 8 years of Computer Science, IT or related field experience
AWS Certified Data Engineer preferred
Databricks Certificate preferred
Scaled Agile SAFe certification preferred
Preferred Qualifications:
Must-Have Skills:
Hands-on experience in data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
Proficiency in workflow orchestration and performance tuning for big data processing.
Strong understanding of AWS services
Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures.
Ability to quickly learn, adapt and apply new technologies
Strong problem-solving and analytical skills
Excellent communication and collaboration skills
Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.
Good-to-Have Skills:
Deep expertise in the Biotech & Pharma industries
Experience in writing APIs to make data available to consumers
Experience with SQL/NoSQL databases and vector databases for large language models
Experience with data modeling and performance tuning for both OLAP and OLTP databases
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Ability to learn quickly, be organized and detail oriented.
Strong presentation and public speaking skills.
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Make a lasting impact with the Amgen team: careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
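As a brief illustration of the optimization work described in this posting (partitioning, caching, and join strategy on Spark), here is a hedged PySpark sketch; dataset paths, column names, and the output layout are hypothetical.

```python
# Illustrative PySpark tuning patterns: broadcast a small dimension to avoid
# a shuffle join, cache a reused intermediate DataFrame, and align output
# partitioning with common query filters. Names and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning_example").getOrCreate()

events = spark.read.parquet("/data/events")   # large fact-like dataset
sites = spark.read.parquet("/data/sites")     # small lookup dimension

# Broadcast join: ship the small table to every executor instead of shuffling both sides.
enriched = events.join(broadcast(sites), "site_id")

# Cache when the same intermediate result feeds several downstream aggregates.
enriched.cache()
daily = enriched.groupBy("event_date").agg(F.count("*").alias("event_count"))
by_site = enriched.groupBy("site_id").agg(F.sum("duration_ms").alias("total_ms"))

# Partition the output by the column most queries filter on.
(daily.repartition("event_date")
      .write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("/curated/events_daily"))

by_site.show(5)
```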
Posted 2 months ago
5.0 years
8 - 10 Lacs
Thiruvananthapuram
On-site
5 - 7 Years 1 Opening Kochi, Trivandrum Role description Role Proficiency: Provide expertise on data analysis techniques using software tools. Under supervision streamline business processes. Outcomes: Design and manage the reporting environment; which include data sources security and metadata. Provide technical expertise on data storage structures data mining and data cleansing. Support the data warehouse in identifying and revising reporting requirements. Support initiatives for data integrity and normalization. Assess tests and implement new or upgraded software. Assist with strategic decisions on new systems. Generate reports from single or multiple systems. Troubleshoot the reporting database environment and associated reports. Identify and recommend new ways to streamline business processes Illustrate data graphically and translate complex findings into written text. Locate results to help clients make better decisions. Solicit feedback from clients and build solutions based on feedback. Train end users on new reports and dashboards. Set FAST goals and provide feedback on FAST goals of repartees Measures of Outcomes: Quality - number of review comments on codes written Data consistency and data quality. Number of medium to large custom application data models designed and implemented Illustrates data graphically; translates complex findings into written text. Number of results located to help clients make informed decisions. Number of business processes changed due to vital analysis. Number of Business Intelligent Dashboards developed Number of productivity standards defined for project Number of mandatory trainings completed Outputs Expected: Determine Specific Data needs: Work with departmental managers to outline the specific data needs for each business method analysis project Critical business insights: Mines the business’s database in search of critical business insights; communicates findings to relevant departments. Code: Creates efficient and reusable SQL code meant for the improvement manipulation and analysis of data. Creates efficient and reusable code. Follows coding best practices. Create/Validate Data Models: Builds statistical models; diagnoses validates and improves the performance of these models over time. Predictive analytics: Seeks to determine likely outcomes by detecting tendencies in descriptive and diagnostic analysis Prescriptive analytics: Attempts to identify what business action to take Code Versioning: Organize and manage the changes and revisions to code. Use a version control tool for example git bitbucket. etc. Create Reports: Create reports depicting the trends and behaviours from analyzed data Document: Create documentation for worked performed. Additionally perform peer reviews of documentation of others' work Manage knowledge: Consume and contribute to project related documents share point libraries and client universities Status Reporting: Report status of tasks assigned Comply with project related reporting standards and processes Skill Examples: Analytical Skills: Ability to work with large amounts of data: facts figures and number crunching. Communication Skills: Communicate effectively with a diverse population at various organization levels with the right level of detail. Critical Thinking: Data Analysts must review numbers trends and data to come up with original conclusions based on the findings. 
Presentation Skills - facilitates reports and oral presentations to senior colleagues Strong meeting facilitation skills as well as presentation skills. Attention to Detail: Vigilant in the analysis to determine accurate conclusions. Mathematical Skills to estimate numerical data. Work in a team environment Proactively ask for and offer help Knowledge Examples: Knowledge Examples Database languages such as SQL Programming language such as R or Python Analytical tools and languages such as SAS & Mahout. Proficiency in MATLAB. Data visualization software such as Tableau or Qlik. Proficient in mathematics and calculations. Efficiently with spreadsheet tools such as Microsoft Excel or Google Sheets DBMS Operating Systems and software platforms Knowledge regarding customer domain and sub domain where problem is solved Additional Comments: • Over 6+ years of experience in developing BI applications utilizing SQL server/ SF/ GCP/ PostgreSQL, BI stack, Power BI, and Tableau. • Practical understanding of the Data modelling (Dimensional & Relational) concepts like Star-Schema Modelling, Snowflake Schema Modelling, Fact and Dimension tables. • Ability to translate the business requirements into workable functional and non-functional requirements. • Capable of taking ownership and communicating with C Suite executives & Stakeholders. • Extensive database programming experience in writing T-SQL, User Defined Functions, Triggers, Views, Temporary Tables Constraints, and Indexes using various DDL and DML commands. • Experienced in creating SSAS based OLAP Cubes and writing complex DAX. • Ability to work with external tools like Tabular Editor and DAX Studio. • Understand complex and customize Stored Procedures and Queries for implementing business logic and process in backend, for data extraction. • Hands on experience in Incremental refresh, RLS, Parameterization, Dataflows and Gateways. • Experience in Design, development of Business Intelligence Solutions using SSRS and Power BI • Experience in optimization of PBI reports implementing Mixed and Direct Query modes. Skills Power Bi,Power Tools,Data Analysis,Fabric About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
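To make the trend-reporting expectations above concrete, here is a small, illustrative pandas sketch that computes monthly totals and month-over-month change; the input columns and values are hypothetical.

```python
# Illustrative trend summary: monthly revenue and month-over-month change.
import pandas as pd

sales = pd.DataFrame({
    "order_date": pd.to_datetime(["2025-01-15", "2025-01-20", "2025-02-10", "2025-03-05"]),
    "region": ["North", "South", "North", "North"],
    "amount": [1200.0, 800.0, 1500.0, 900.0],
})

monthly = (sales
           .set_index("order_date")
           .groupby(pd.Grouper(freq="MS"))["amount"]   # "MS" = month start buckets
           .sum()
           .rename("revenue")
           .to_frame())
monthly["mom_change_pct"] = monthly["revenue"].pct_change().mul(100).round(1)

print(monthly)
```

The same aggregation would typically be reproduced as a SQL view or a Power BI measure for dashboard consumption.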
Posted 2 months ago
3.0 years
0 Lacs
Noida
On-site
Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! The Opportunity Business Intelligence Specialist will work closely with Business analysts, understand the design specifications and translate the requirements into technical model, dashboards, reports and applications. BI Specialist will be required to work directly with business user and cater to their ad-hoc requests from time to time. What you'll Do Collaborate with the Business users to elicit business requirements pertaining to Applications and Business Reporting, Dashboards, Ad-hoc analysis etc. System integration of heterogeneous data sources and working on technologies used in the design, development, testing, deployment, and operations of DW & BI solutions Create technical documents, architecture designs and data flow diagrams Help to deliver scalable solutions on the MSBI platforms Implement source code versioning, standard methodology and processes for ensuring data and code quality Collaborate with business partners, application developers and the technical team. What you need to succeed At least 3 years of experience in SSIS, SSAS, Data Warehousing, Data Analysis, and Business Intelligence Technical Skills 3-6 years of Advanced proficiency in Data Warehousing tools and technologies, including databases, SSIS, and SSAS. In-depth understanding of Data Warehousing principles, Business Intelligence methodologies, and Dimensional Modeling techniques. Hands-on experience in designing, developing, and maintaining ETL processes Building and optimizing databases , OLAP schemas , and public objects (Attributes, Facts, Metrics, etc.) Strong expertise in Performance tuning and query optimization for high-efficiency operations Familiar with cloud platforms such as Azure and AWS Familiar with Python or PySpark and Databricks (Optional) Experience in creating interactive dashboards using Power BI Soft Skills Strong problem-solving and analytical abilities Quick learner with the ability to understand diverse business domains and performance indicators Excellent communication and presentation skills Education : Bachelor’s degree in Computer Science, Information Technology, or an equivalent technical discipline. Adobe is proud to be an Equal Employment Opportunity and affirmative action employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015. 
Adobe values a free and open marketplace for all employees and has policies in place to ensure that we do not enter into illegal agreements with other companies to not recruit or hire each other’s employees.
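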
Posted 2 months ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for a Data Modeller / Data Modeler to lead data architecture efforts across enterprise domains such as Sales, Procurement, Finance, Logistics, R&D, and Advanced Planning Systems (SAP/Oracle). The role involves designing scalable and reusable data models, building data lake foundations, and collaborating with cross-functional teams to deliver robust end-to-end data solutions. Key Responsibilities Work with business/product teams to understand processes and translate into technical specifications. Design logical and physical data models based on Medallion Architecture, EDW, or Kimball methodologies. Source the correct grain of data from true source systems or existing DWHs. Create and manage reusable intermediary data models and physical views for reporting/consumption. Understand and implement Data Governance, Data Quality, and Data Observability practices. Develop business process maps, user journey maps, and data flow/integration diagrams. Design integration workflows using APIs, FTP/SFTP, web services, etc. Support large-scale implementation programs involving multiple projects over extended periods. Coordinate with data engineers, product owners, data modelers, governance teams, and project stakeholders. Technical Skills Minimum 5+ years in data-focused projects (migration, upgradation, lakehouse/DWH builds). Strong expertise in Data Modelling – Logical, Physical, Dimensional, and Vault modeling. Experience with enterprise data domains: Sales, Finance, Procurement, Supply Chain, Logistics, R&D. Tools: Erwin or similar data modeling tools. Deep understanding of OLTP and OLAP systems. Familiar with Kimball methodology, Medallion architecture, and modern Data Lakehouse patterns. Knowledge of Bronze, Silver, and Gold layer architecture in cloud platforms. Ability to read existing data dictionaries, table structures, and normalize data tables effectively. Cloud, DevOps & Integration Familiarity with cloud data platforms (AWS, Azure, GCP) and DevOps/DataOps best practices. Experience with Agile methodologies and participation in Scrum ceremonies. Understand end-to-end integration needs and methods (API, FTP, SFTP, web services). Preferred Experience Background in Retail, CPG, or Supply Chain domains is a strong plus. Experience with data governance frameworks, quality tools, and metadata management platforms. Skills: ftp/sftp,physical data models,data modelling,devops,data modeler,data observability,physical data modeling,cloud platforms,apis,erwin,data lakehouse,vault modeling,dimensional modeling,web services,data modeling,data governance,architecture,data quality,retail,cpg,kimball methodology,medallion architecture,olap,supply chain,logical data models,logical data modeling,integration workflows,online transaction processing (oltp) Show more Show less
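For illustration, the Bronze/Silver/Gold layering mentioned above usually starts with a cleansing step between the raw and curated layers. The sketch below shows one possible Bronze-to-Silver step in PySpark; paths, schema, and the business key are hypothetical.

```python
# Illustrative Bronze -> Silver step in a medallion layout: read raw (bronze)
# records as-is, then standardize types, derive the partition column,
# deduplicate on the business key, and write the cleaned (silver) table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_example").getOrCreate()

bronze = spark.read.json("/lake/bronze/sales_orders")   # raw, schema-on-read

silver = (bronze
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .withColumn("order_date", F.to_date("order_ts"))
          .withColumn("net_amount", F.col("net_amount").cast("decimal(18,2)"))
          .dropDuplicates(["order_id"])                  # grain: one row per order
          .filter(F.col("order_id").isNotNull()))

(silver.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("/lake/silver/sales_orders"))
```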
Posted 2 months ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a Data Modeller / Data Modeler to lead data architecture efforts across enterprise domains such as Sales, Procurement, Finance, Logistics, R&D, and Advanced Planning Systems (SAP/Oracle). The role involves designing scalable and reusable data models, building data lake foundations, and collaborating with cross-functional teams to deliver robust end-to-end data solutions. Key Responsibilities Work with business/product teams to understand processes and translate into technical specifications. Design logical and physical data models based on Medallion Architecture, EDW, or Kimball methodologies. Source the correct grain of data from true source systems or existing DWHs. Create and manage reusable intermediary data models and physical views for reporting/consumption. Understand and implement Data Governance, Data Quality, and Data Observability practices. Develop business process maps, user journey maps, and data flow/integration diagrams. Design integration workflows using APIs, FTP/SFTP, web services, etc. Support large-scale implementation programs involving multiple projects over extended periods. Coordinate with data engineers, product owners, data modelers, governance teams, and project stakeholders. Technical Skills Minimum 5+ years in data-focused projects (migration, upgradation, lakehouse/DWH builds). Strong expertise in Data Modelling – Logical, Physical, Dimensional, and Vault modeling. Experience with enterprise data domains: Sales, Finance, Procurement, Supply Chain, Logistics, R&D. Tools: Erwin or similar data modeling tools. Deep understanding of OLTP and OLAP systems. Familiar with Kimball methodology, Medallion architecture, and modern Data Lakehouse patterns. Knowledge of Bronze, Silver, and Gold layer architecture in cloud platforms. Ability to read existing data dictionaries, table structures, and normalize data tables effectively. Cloud, DevOps & Integration Familiarity with cloud data platforms (AWS, Azure, GCP) and DevOps/DataOps best practices. Experience with Agile methodologies and participation in Scrum ceremonies. Understand end-to-end integration needs and methods (API, FTP, SFTP, web services). Preferred Experience Background in Retail, CPG, or Supply Chain domains is a strong plus. Experience with data governance frameworks, quality tools, and metadata management platforms. Skills: ftp/sftp,physical data models,data modelling,devops,data modeler,data observability,physical data modeling,cloud platforms,apis,erwin,data lakehouse,vault modeling,dimensional modeling,web services,data modeling,data governance,architecture,data quality,retail,cpg,kimball methodology,medallion architecture,olap,supply chain,logical data models,logical data modeling,integration workflows,online transaction processing (oltp) Show more Show less
Posted 2 months ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
We are looking for a Data Modeller / Data Modeler to lead data architecture efforts across enterprise domains such as Sales, Procurement, Finance, Logistics, R&D, and Advanced Planning Systems (SAP/Oracle). The role involves designing scalable and reusable data models, building data lake foundations, and collaborating with cross-functional teams to deliver robust end-to-end data solutions. Key Responsibilities Work with business/product teams to understand processes and translate into technical specifications. Design logical and physical data models based on Medallion Architecture, EDW, or Kimball methodologies. Source the correct grain of data from true source systems or existing DWHs. Create and manage reusable intermediary data models and physical views for reporting/consumption. Understand and implement Data Governance, Data Quality, and Data Observability practices. Develop business process maps, user journey maps, and data flow/integration diagrams. Design integration workflows using APIs, FTP/SFTP, web services, etc. Support large-scale implementation programs involving multiple projects over extended periods. Coordinate with data engineers, product owners, data modelers, governance teams, and project stakeholders. Technical Skills Minimum 5+ years in data-focused projects (migration, upgradation, lakehouse/DWH builds). Strong expertise in Data Modelling – Logical, Physical, Dimensional, and Vault modeling. Experience with enterprise data domains: Sales, Finance, Procurement, Supply Chain, Logistics, R&D. Tools: Erwin or similar data modeling tools. Deep understanding of OLTP and OLAP systems. Familiar with Kimball methodology, Medallion architecture, and modern Data Lakehouse patterns. Knowledge of Bronze, Silver, and Gold layer architecture in cloud platforms. Ability to read existing data dictionaries, table structures, and normalize data tables effectively. Cloud, DevOps & Integration Familiarity with cloud data platforms (AWS, Azure, GCP) and DevOps/DataOps best practices. Experience with Agile methodologies and participation in Scrum ceremonies. Understand end-to-end integration needs and methods (API, FTP, SFTP, web services). Preferred Experience Background in Retail, CPG, or Supply Chain domains is a strong plus. Experience with data governance frameworks, quality tools, and metadata management platforms. Skills: ftp/sftp,physical data models,data modelling,devops,data modeler,data observability,physical data modeling,cloud platforms,apis,erwin,data lakehouse,vault modeling,dimensional modeling,web services,data modeling,data governance,architecture,data quality,retail,cpg,kimball methodology,medallion architecture,olap,supply chain,logical data models,logical data modeling,integration workflows,online transaction processing (oltp) Show more Show less
Posted 2 months ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking for a Data Modeller / Data Modeler to lead data architecture efforts across enterprise domains such as Sales, Procurement, Finance, Logistics, R&D, and Advanced Planning Systems (SAP/Oracle). The role involves designing scalable and reusable data models, building data lake foundations, and collaborating with cross-functional teams to deliver robust end-to-end data solutions. Key Responsibilities Work with business/product teams to understand processes and translate into technical specifications. Design logical and physical data models based on Medallion Architecture, EDW, or Kimball methodologies. Source the correct grain of data from true source systems or existing DWHs. Create and manage reusable intermediary data models and physical views for reporting/consumption. Understand and implement Data Governance, Data Quality, and Data Observability practices. Develop business process maps, user journey maps, and data flow/integration diagrams. Design integration workflows using APIs, FTP/SFTP, web services, etc. Support large-scale implementation programs involving multiple projects over extended periods. Coordinate with data engineers, product owners, data modelers, governance teams, and project stakeholders. Technical Skills Minimum 5+ years in data-focused projects (migration, upgradation, lakehouse/DWH builds). Strong expertise in Data Modelling – Logical, Physical, Dimensional, and Vault modeling. Experience with enterprise data domains: Sales, Finance, Procurement, Supply Chain, Logistics, R&D. Tools: Erwin or similar data modeling tools. Deep understanding of OLTP and OLAP systems. Familiar with Kimball methodology, Medallion architecture, and modern Data Lakehouse patterns. Knowledge of Bronze, Silver, and Gold layer architecture in cloud platforms. Ability to read existing data dictionaries, table structures, and normalize data tables effectively. Cloud, DevOps & Integration Familiarity with cloud data platforms (AWS, Azure, GCP) and DevOps/DataOps best practices. Experience with Agile methodologies and participation in Scrum ceremonies. Understand end-to-end integration needs and methods (API, FTP, SFTP, web services). Preferred Experience Background in Retail, CPG, or Supply Chain domains is a strong plus. Experience with data governance frameworks, quality tools, and metadata management platforms. Skills: ftp/sftp,physical data models,data modelling,devops,data modeler,data observability,physical data modeling,cloud platforms,apis,erwin,data lakehouse,vault modeling,dimensional modeling,web services,data modeling,data governance,architecture,data quality,retail,cpg,kimball methodology,medallion architecture,olap,supply chain,logical data models,logical data modeling,integration workflows,online transaction processing (oltp) Show more Show less
Posted 2 months ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Hello! You've landed on this page, which means you're interested in working with us. Let's take a sneak peek at what it's like to work at Innovaccer. Engineering at Innovaccer With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point we gather into valuable insights for our customers. Join us and be part of a team that's turning dreams of better healthcare into reality, one line of code at a time. Together, we’re shaping the future and making a meaningful impact on the world. About the Role We at Innovaccer are looking for a Software Development Engineer-II (Backend) to build the most amazing product experience. You’ll get to work with other engineers to build delightful feature experiences to understand and solve our customer’s pain points. A Day in the Life Building efficient and reusable applications and abstractions. Identify and communicate back-end best practices. Participate in the project life-cycle from pitch/prototyping through definition and design to build, integration, QA and delivery Analyze and improve the performance, scalability, stability, and security of the product Improve engineering standards, tooling, and processes What You Need 3+ years of experience with a start-up mentality and high willingness to learn Expert in Python and experience with any web framework (Django, FastAPI, Flask etc) Aggressive problem diagnosis and creative problem-solving skill Expert in Kubernetes and containerization Experience in RDBMS & NoSQL database such as Postgres, MongoDB, (any OLAP database is good to have) Experience in cloud service providers such as AWS or Azure. Experience in Kafka, RabbitMQ, or other queuing services is good to have. Working experience in BigData / Distributed Systems and Async Programming Bachelor's degree in Computer Science/Software Engineering. Preferred Skills Expert in Python and any web framework. Experience in working with Kubernetes and any cloud provider(s). Any SQL or NoSQL database. Working experience in distributed systems. Here’s What We Offer Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days. Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition. Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered. Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury. Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. *Noida office only Creche Facility for children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. *India offices Where and how we work Our Noida office is situated in a posh techspace, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team. Innovaccer is an equal-opportunity employer. 
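As a small illustration of the Python web-framework experience this role asks for, here is a minimal FastAPI sketch with one async endpoint; the resource model and data are hypothetical.

```python
# Minimal FastAPI sketch of a Python backend service; the resource model,
# endpoint, and in-memory data are hypothetical examples.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="cohort-service-example")


class Cohort(BaseModel):
    id: int
    name: str
    member_count: int


_FAKE_DB = {1: Cohort(id=1, name="diabetes-risk", member_count=4200)}


@app.get("/cohorts/{cohort_id}", response_model=Cohort)
async def get_cohort(cohort_id: int) -> Cohort:
    cohort = _FAKE_DB.get(cohort_id)
    if cohort is None:
        raise HTTPException(status_code=404, detail="cohort not found")
    return cohort

# Run locally with:  uvicorn app:app --reload   (assuming this file is app.py)
```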
We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.
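As an illustration of the Python web-framework experience the listing above asks for, here is a minimal FastAPI sketch. The resource name and fields are hypothetical and unrelated to Innovaccer's actual product APIs; a real backend would persist to an RDBMS or NoSQL store rather than an in-memory dict.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Patient(BaseModel):
    id: int
    name: str

# In-memory store for the sketch; swap for Postgres/MongoDB in a real service.
_DB: dict[int, Patient] = {}

@app.post("/patients")
def create_patient(patient: Patient) -> Patient:
    _DB[patient.id] = patient
    return patient

@app.get("/patients/{patient_id}")
def get_patient(patient_id: int) -> Patient:
    if patient_id not in _DB:
        raise HTTPException(status_code=404, detail="Patient not found")
    return _DB[patient_id]
```

Assuming the file is saved as app.py, it can be served locally with `uvicorn app:app --reload`.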
Posted 2 months ago
5.0 - 10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Title: Data Architect / Delivery Lead

Job Summary:
The Data Architect / Delivery Lead will provide technical expertise in the analysis, design, development, rollout, and maintenance of enterprise data models and solutions, utilizing both traditional and emerging technologies such as cloud, Hadoop, NoSQL, and real-time data processing. In addition to technical expertise, the role requires leadership in driving cross-functional teams, ensuring seamless project delivery, and fostering innovation within the team. The candidate must excel in managing data architecture projects while mentoring teams in data engineering practices, including PySpark, automation, and big data integration.

Essential Duties

Data Architecture Design and Development:
Design and develop conceptual, logical, and physical data models for enterprise-scale data lakes and data warehouse solutions, ensuring optimal performance and scalability.
Implement real-time and batch data integration solutions using modern tools and technologies such as PySpark, Hadoop, and cloud-based solutions (e.g., AWS, Azure, Google Cloud).
Utilize PySpark for distributed data processing, transforming and analyzing large datasets for improved data-driven decision-making.
Understand and apply modern data architecture philosophies such as Data Vault, Dimensional Modeling, and Data Lake design for building scalable and sustainable data solutions.

Leadership & Delivery Management:
Provide leadership in data architecture and engineering projects, ensuring the integration of modern technologies and best practices in data management and transformation.
Act as a trusted advisor, collaborating with business users, technical staff, and project managers to define requirements and deliver high-quality data solutions.
Lead and mentor a team of data engineers, ensuring the effective application of PySpark for data engineering tasks, and supporting continuous learning and improvement within the team.
Manage end-to-end delivery of data projects, including defining timelines, managing resources, and ensuring timely, high-quality delivery while adhering to project methodologies (e.g., Agile, Scrum).

Data Movement & Integration:
Provide expertise in data integration processes, including batch and real-time data processing using tools such as PySpark, Informatica PowerCenter, SSIS, MuleSoft, and DataStage.
Develop and optimize ETL/ELT pipelines, utilizing PySpark for efficient data processing and transformation at scale, particularly for big data environments (e.g., Hadoop ecosystems); an illustrative PySpark pipeline sketch follows this listing.
Oversee data migration efforts, ensuring high-quality and consistent data delivery while managing data transformation and cleansing processes.

Documentation & Communication:
Create comprehensive functional and technical documentation, including data integration architecture documentation, data models, data dictionaries, and testing plans.
Collaborate with business stakeholders and technical teams to ensure alignment and provide technical guidance on data-related decisions.
Prepare and present technical content and architectural decisions to senior management, ensuring clear communication of complex data concepts.

Skills and Experience:

Data Engineering Skills:
Extensive experience in PySpark for large-scale data processing, data transformation, and working with distributed systems.
Proficiency in modern data processing frameworks and technologies, including Hadoop, Spark, and Flink.
Expertise in cloud-based data engineering technologies and platforms such as AWS Glue, Azure Data Factory, or Google Cloud Dataflow.
Strong experience with data pipelines, ETL/ELT frameworks, and automation techniques using tools like Airflow, Apache NiFi, or dbt.
Expertise in working with big data technologies and frameworks for both structured and unstructured data.

Data Architecture and Modeling:
5-10 years of experience in enterprise data modeling, including hands-on experience with ERwin, ER/Studio, PowerDesigner, or similar tools.
Strong knowledge of relational databases (e.g., Oracle, SQL Server, Teradata) and NoSQL technologies (e.g., MongoDB, Cassandra).
In-depth understanding of data warehousing and data integration best practices, including dimensional modeling and working with OLTP systems and OLAP cubes.
Experience with real-time data architectures and cloud-based data lakes, leveraging AWS, Azure, or Google Cloud platforms.

Leadership & Delivery Skills:
3-5 years of management experience leading teams of data engineers and architects, ensuring alignment of team goals with organizational objectives.
Strong leadership qualities such as innovation, critical thinking, communication, time management, and the ability to collaborate effectively across teams and stakeholders.
Proven ability to act as a delivery lead for data projects, driving projects from concept to completion while managing resources, timelines, and deliverables.
Ability to mentor and coach team members in both technical and professional growth, fostering a culture of knowledge sharing and continuous improvement.

Other Essential Skills:
Strong knowledge of SQL and PL/SQL, and proficiency in scripting for data engineering tasks.
Ability to translate business requirements into technical solutions, ensuring that the data solutions support business strategies and objectives.
Hands-on experience with metadata management, data governance, and master data management (MDM) principles.
Familiarity with modern agile methodologies, such as Scrum or Kanban, to ensure iterative and successful project delivery.

Preferred Skills & Experience:
Cloud Technologies: Experience with cloud data platforms such as AWS Redshift, Google BigQuery, or Azure Synapse for building scalable data solutions.
Leadership: Demonstrated ability to build and lead cross-functional teams, drive innovation, and solve complex data problems.
Business Consulting: Consulting experience working with clients to deliver tailored data solutions, providing expert guidance on data architecture and data management practices.
Data Profiling and Analysis: Hands-on experience with data profiling tools and techniques to assess and improve the quality of enterprise data.
Real-Time Data Processing: Experience in real-time data integration and streaming technologies, such as Kafka and Kinesis.
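For illustration only (not part of the posting): a minimal PySpark batch ETL step of the kind described above, joining staged transactions to a conformed customer dimension and applying a simple data-quality gate before load. All paths, table names, and columns are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: staged transactions and a conformed customer dimension (hypothetical paths).
txn = spark.read.parquet("/stage/transactions")
dim_customer = spark.read.parquet("/warehouse/dim_customer")

# Transform: resolve the dimension's surrogate key and stamp the batch for lineage.
fact_sales = (
    txn.join(dim_customer.select("customer_id", "customer_sk"),
             on="customer_id", how="left")
       .select("customer_sk", "product_id", "quantity", "net_amount")
       .withColumn("load_ts", F.current_timestamp())
)

# Simple data-quality gate: fail the batch if any row misses its dimension lookup.
unresolved = fact_sales.filter(F.col("customer_sk").isNull()).count()
if unresolved > 0:
    raise ValueError(f"{unresolved} rows failed the customer dimension lookup")

# Load: append to the fact table.
fact_sales.write.mode("append").parquet("/warehouse/fact_sales")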
Posted 2 months ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

About The Role
Role Description: Let's do this. Let's change the world. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics (an illustrative metadata-driven ingestion sketch follows this listing). This role calls for deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets.
Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems.
Design and implement solutions to enable unified data access, governance, and interoperability across hybrid cloud environments.
Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms.
Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
Apply data quality, data validation, and verification frameworks.
Innovate, explore, and implement new tools and technologies to enhance efficient data processing.
Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
Collaborate and communicate effectively with product teams and other cross-functional teams to understand business requirements and translate them into technical solutions.

Must-Have Skills:
Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
Proficiency in workflow orchestration and performance tuning for big data processing.
Strong understanding of AWS services.
Ability to quickly learn, adapt, and apply new technologies.
Strong problem-solving and analytical skills.
Excellent communication and teamwork skills.
Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.
Good-to-Have Skills:
Data engineering experience in the biotechnology or pharma industry.
Experience in writing APIs to make data available to consumers.
Experience with SQL/NoSQL databases and vector databases for large language models.
Experience with data modeling and performance tuning for both OLAP and OLTP databases.
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications
Any degree and 6-8 years of experience.
AWS Certified Data Engineer preferred.
Databricks certification preferred.
Scaled Agile SAFe certification preferred.

Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Ability to learn quickly, be organized, and be detail-oriented.
Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
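For illustration only (not part of the posting): one common reading of the "metadata-driven" ingestion mentioned in the role description is a small configuration that describes each source so a single PySpark job can land them all. Source names, connection details, and paths below are hypothetical placeholders; in practice the metadata would live in a config table, catalog, or YAML file, and credentials would come from a secrets manager.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata-driven-ingest").getOrCreate()

# Hypothetical source metadata describing where each dataset comes from.
SOURCES = [
    {"name": "patients", "format": "jdbc",
     "options": {"url": "jdbc:postgresql://db-host/clinical",
                 "dbtable": "public.patients",
                 "user": "svc_user", "password": "***"}},
    {"name": "lab_events", "format": "json",
     "options": {"path": "/landing/lab_events/"}},
]

# One generic loop lands every configured source into the raw zone.
for src in SOURCES:
    df = spark.read.format(src["format"]).options(**src["options"]).load()
    df.write.mode("overwrite").parquet(f"/raw/{src['name']}")
```

Adding a new source then becomes a metadata change rather than new pipeline code, which is the usual motivation for this pattern.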
Posted 2 months ago