3.0 years
0 Lacs
India
Remote
At Mindrift, innovation meets opportunity. We believe in using the power of collective intelligence to ethically shape the future of AI.

What We Do
The Mindrift platform connects specialists with AI projects from major tech innovators. Our mission is to unlock the potential of Generative AI by tapping into real-world expertise from across the globe.

About The Role
GenAI models are improving very quickly, and one of our goals is to make them capable of addressing specialized questions and achieving complex reasoning skills. If you join the platform as an AI Tutor in Coding, you'll have the opportunity to collaborate on these projects. Although every project is unique, you might typically:
- Analyze and understand existing code in Python or C/C++
- Migrate logic to idiomatic, safe Rust while preserving functionality
- Adapt or port the test suite and ensure behavioral equivalence (a small equivalence-testing sketch follows this posting)
- Document migration steps and technical decisions

How To Get Started
Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring technology benefits everyone.

Requirements
- You have a Bachelor's or Master's degree in Software Development, Computer Science, or a related field.
- You have at least 3 years of professional experience with C/C++ and 1+ year of hands-on experience with Rust.
- You are experienced with FFI tools (bindgen, cxx) and unsafe Rust for C/C++ interoperability.
- You bring experience testing migrated code (unit/integration/fuzz tests).
- You demonstrate a solid understanding of systems programming (memory management, concurrency).
- You are skilled at refactoring legacy code and documenting migration steps.
- Prompt engineering experience is a strong plus.
- Your level of English is advanced (C1) or above.
- You are ready to learn new methods, able to switch between tasks and topics quickly, and sometimes work with challenging, complex guidelines.
- Our freelance role is fully remote, so you just need a laptop, an internet connection, time available, and enthusiasm to take on a challenge.

Benefits
Why this freelance opportunity might be a great fit for you:
- Take part in a part-time, remote, freelance project that fits around your primary professional or academic commitments.
- Work on advanced AI projects and gain valuable experience that enhances your portfolio.
- Influence how future AI models understand and communicate in your field of expertise.
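The "behavioral equivalence" step above amounts to differential testing: run the same inputs through the legacy C build and the Rust port and assert identical outputs. Below is a minimal sketch, assuming both implementations are built as shared libraries exporting the same C ABI; the library file names and the exported `checksum` symbol with its signature are illustrative assumptions, not part of the posting.

```python
# Hypothetical differential-test harness: library paths, the `checksum`
# symbol, and its signature are illustrative placeholders.
import ctypes
import random

def load(path: str):
    lib = ctypes.CDLL(path)
    lib.checksum.argtypes = [ctypes.POINTER(ctypes.c_uint8), ctypes.c_size_t]
    lib.checksum.restype = ctypes.c_uint64
    return lib

def call(lib, data: bytes) -> int:
    buf = (ctypes.c_uint8 * len(data)).from_buffer_copy(data)
    return lib.checksum(buf, len(data))

if __name__ == "__main__":
    legacy = load("./libchecksum_c.so")   # original C build
    ported = load("./libchecksum_rs.so")  # Rust cdylib exposing the same C ABI
    rng = random.Random(42)               # fixed seed for reproducible cases
    for _ in range(1000):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 4096)))
        assert call(legacy, data) == call(ported, data), "behavioral divergence"
    print("1000 randomized cases agree")
```

The same idea extends naturally to fuzzing: replace the random generator with a corpus-driven fuzzer and keep the assertion as the oracle.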
Posted 2 weeks ago
4.0 years
0 Lacs
Andhra Pradesh, India
Remote
At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. Those in cloud operations at PwC will focus on managing and optimising cloud infrastructure and services to enable seamless operations and high availability for clients. You will be responsible for monitoring, troubleshooting, and implementing industry-leading practices for cloud-based systems.

Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Respond effectively to the diverse perspectives, needs, and feelings of others.
- Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
- Use critical thinking to break down complex concepts.
- Understand the broader objectives of your project or role and how your work fits into the overall strategy.
- Develop a deeper understanding of the business context and how it is changing.
- Use reflection to develop self-awareness, enhance strengths and address development areas.
- Interpret data to inform insights and recommendations.
- Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

Azure Network Engineer – Senior Associate

Basic Qualifications, Job Requirements and Preferences:
- Minimum Degree Required: Bachelor's degree
- Minimum Years of Experience: 4 year(s)
- Certification(s) Required: Cloud Solution Architect certification in one of the major cloud technologies (AWS) – must have; a professional-level certification is a plus
- Focused Certification (at least one is a must): Cloud Security Certification, Cloud Network Certification, or Cloud Developer Certification

Preferred Qualifications
Preferred Knowledge/Skills: Demonstrates a thorough level of abilities with, and/or a proven record of success as both an individual contributor and a team member, identifying and addressing client needs by:
- Designing, implementing, and managing secure and scalable cloud network infrastructures using AWS services such as VPC, Route 53, Direct Connect, VPN, and EC2 (a brief provisioning sketch follows this posting).
- Ensuring the stability and integrity of in-house voice, data, video, and wireless network services.
- Developing and maintaining documentation related to network configuration, mapping, processes, and service records.
- Implementing and supporting firewalls, site-to-site VPNs, and remote-access VPNs.
- Monitoring network performance and troubleshooting issues as needed.
- Collaborating with executive management and department leaders to assess near- and long-term network capacity needs.
- Participating in managing network security solutions and performing server and security audits.
- Managing and supporting the dev-to-production cloud IaaS and platform to establish quality, performance, and availability of hosted services.
- Providing guidance and support for cloud technology practitioners (application development teams).
- Running and maintaining production services and working on high-volume, mission-critical systems.
- Providing on-call support for production cloud environments.
- Working hands-on with customers to develop, migrate, and debug service issues.
- Providing updated server/process documentation and, as appropriate, creating documentation where none may exist.
- Focusing on rapid identification and resolution of customer issues; answering questions and performing initial triage on problem reports.
- Providing first/second-level cloud environment support, working closely with application users to troubleshoot and resolve cloud-hosted application or system issues.
- Informing Technical Support management about any escalations or difficult situations that require their involvement.
- Providing cloud customers with an industry-leading customer experience when engaging Technical Support.
- Assisting in Tier 2 triage, troubleshooting, remediation, and escalation of tickets tied to the product support function.
- Training and supporting junior team members in resolving product support tickets, and identifying ways to optimize the product support function.
- Coordinating with Tier 3 support to establish and manage clear escalation guidelines for supported system components.
- Running database queries to look up and resolve issues.
- Demonstrating proven communication and collaboration skills to coordinate with developers and application teams to negotiate and schedule patching windows.
- Demonstrating experience in managing monthly Windows or Linux environment patching.
- Analyzing existing application portfolios and developing next-gen application architecture, transformation, and modernization roadmaps.
- Conducting cloud solutions architecture reviews/audits and creating review/audit reports.
- Leading implementation of the solution, from establishing project requirements and goals to solution go-live.
- Leading the client through the technical and organizational challenges of cloud transformation.
- Maintaining a strong understanding of industry trends and best practices, and creating thought leadership on cloud solutions and hybrid clouds.
- Serving as a cloud evangelist: consulting and providing technical guidance on cloud solution design, build, governance, security, operations, and cost-control best practices.

What You Have
- Proficiency in AWS services including VPC, EC2, Route 53, CloudFront, and other networking-related services.
- Expert knowledge of network protocols and services such as TCP/IP, DNS, DHCP, BGP, and OSPF.
- Experience with network automation using AWS CloudFormation, Terraform, or Ansible.
- Familiarity with AWS security features like Network Access Control Lists, Security Groups, AWS Shield, and AWS WAF.
- Working knowledge of current network hardware, protocols, and Internet standards.
- Experience with network monitoring and analysis software.
- Competence with testing tools and procedures for voice and data circuits.
- Experience designing, implementing, managing, and supporting enterprise-level IP networks.
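To make the VPC/security-group work above concrete, here is a minimal sketch using boto3. It is illustrative only, not a PwC deliverable: it assumes AWS credentials are already configured, and the region, CIDR ranges, and resource names are placeholders.

```python
# Minimal VPC provisioning sketch; region, CIDRs, and names are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Provision an isolated VPC with one subnet.
vpc_id = ec2.create_vpc(CidrBlock="10.20.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.20.1.0/24"
)["Subnet"]["SubnetId"]

# Security group allowing inbound HTTPS only.
sg_id = ec2.create_security_group(
    GroupName="web-tier-sg",
    Description="Allow inbound HTTPS",
    VpcId=vpc_id,
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
print(f"vpc={vpc_id} subnet={subnet_id} sg={sg_id}")
```

In practice a team like this would express the same resources declaratively in CloudFormation or Terraform, as the posting's automation bullet suggests; the imperative boto3 form is just the shortest way to show what gets created.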
Posted 2 weeks ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
At Mindrift, innovation meets opportunity. We believe in using the power of collective intelligence to ethically shape the future of AI.

What We Do
The Mindrift platform connects specialists with AI projects from major tech innovators. Our mission is to unlock the potential of Generative AI by tapping into real-world expertise from across the globe.

About The Role
GenAI models are improving very quickly, and one of our goals is to make them capable of addressing specialized questions and achieving complex reasoning skills. If you join the platform as an AI Tutor in Coding, you'll have the opportunity to collaborate on these projects. Although every project is unique, you might typically:
- Analyze and understand existing code in Python or C/C++
- Migrate logic to idiomatic, safe Rust while preserving functionality
- Adapt or port the test suite and ensure behavioral equivalence
- Document migration steps and technical decisions

How To Get Started
Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring technology benefits everyone.

Requirements
- You have a Bachelor's or Master's degree in Software Development, Computer Science, or a related field.
- You have at least 3 years of professional experience with C/C++ and 1+ year of hands-on experience with Rust.
- You are experienced with FFI tools (bindgen, cxx) and unsafe Rust for C/C++ interoperability.
- You bring experience testing migrated code (unit/integration/fuzz tests).
- You demonstrate a solid understanding of systems programming (memory management, concurrency).
- You are skilled at refactoring legacy code and documenting migration steps.
- Prompt engineering experience is a strong plus.
- Your level of English is advanced (C1) or above.
- You are ready to learn new methods, able to switch between tasks and topics quickly, and sometimes work with challenging, complex guidelines.
- Our freelance role is fully remote, so you just need a laptop, an internet connection, time available, and enthusiasm to take on a challenge.

Benefits
Why this freelance opportunity might be a great fit for you:
- Take part in a part-time, remote, freelance project that fits around your primary professional or academic commitments.
- Work on advanced AI projects and gain valuable experience that enhances your portfolio.
- Influence how future AI models understand and communicate in your field of expertise.
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Sr. Data Engineer – Veeva Integration Lead

What You Will Do
Let's do this. Let's change the world. In this vital role you will be responsible for developing and maintaining the overall data architecture and integration of Amgen's Veeva Vault Platform. This role involves defining the data integration vision, creating roadmaps, and ensuring that IT strategies align with business goals. You will work closely with collaborators to understand requirements, develop data integration blueprints, and ensure that solutions are scalable, secure, and aligned with enterprise standards. The role will be involved in defining Veeva Vault Platform data integration, guiding technology decisions, and ensuring that all implementations adhere to established architectural principles and to Veeva's and the industry's standard methodologies.

Responsibilities:
- Collaborate with the broader collaborator community on their data needs, including data quality, data access controls, and compliance with privacy and security regulations.
- Work with Enterprise MDM and Reference Data to implement standards and data reusability.
- Contribute to and support consistency with data governance principles.
- Maintain documentation on data definitions, data flows, common data models, data harmonization, etc.
- Partner with business teams to identify compliance requirements with data privacy, security, and regulatory policies for the assigned domains.
- Build strong relationships with key business leads and partners to ensure their needs are met.
- Be a key team member that assists in the design and development of the data pipeline for the Veeva Vault platform.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems (a brief extraction sketch follows this posting).
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
- Identify and resolve complex data-related challenges.
- Explore new tools and technologies that will help to improve ETL platform performance.
- Participate in sprint planning meetings and provide estimations on technical implementation.
- Work with data engineers on data quality assessment, data cleansing, and data analytics.
- Share and discuss findings with team members practicing the SAFe Agile delivery model.
- Automate and optimize the data pipeline and framework for an easier and more cost-effective development process.
- Advise and support project teams (project managers, architects, business analysts, and developers) on cloud platforms (AWS, Databricks preferred), tools, technology, and methodology related to designing, building, and maintaining scalable and efficient Data Lake and other Big Data solutions.
- Bring experience developing in an Agile development environment, and be comfortable with Agile terminology and ceremonies.
- Stay up to date with the latest data technologies and trends.

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree with 4-6 years of experience in Computer Science, IT or related field OR
- Bachelor's degree with 6-8 years of experience in Computer Science, IT or related field OR
- Diploma with 10-12 years of experience in Computer Science, IT or related field

Must-Have Skills:
- Solid understanding of architecting Veeva Vault platforms/products.
- Strong knowledge of Data Lake technologies such as Databricks.
- Experience in MuleSoft, Python scripting, and REST API script development.
- Extensive knowledge of enterprise architecture frameworks, technologies and methodologies.
- Experience with system integration and IT infrastructure.
- Experience with data, change, and technology governance processes at the platform level.
- Experience working in agile methodology, including Product Teams and Product Development models.
- Proficiency in designing scalable, secure, and cost-effective solutions.
- Collaborator and team management skills.
- Ability to lead and guide multiple teams to meet business needs and goals.

Good-to-Have Skills:
- Good knowledge of the global pharmaceutical industry.
- Understanding of GxP processes.
- Strong solution design and problem-solving skills.
- Solid understanding of technology, function, or platform.
- Experience in developing differentiated and deliverable solutions.
- Ability to analyze client requirements and translate them into solutions.
- Willingness to work late hours.

Professional Certifications:
- Veeva Vault Platform Administrator (mandatory)
- SAFe – DevOps Practitioner (preferred)
- SAFe for Teams (preferred)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated ability to function in a team setting
- Demonstrated presentation skills

Shift Information:
This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required by business needs.

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
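For a sense of the REST-based extraction work described above, here is a minimal sketch that pulls records from a Vault-style REST API into a staging file. It is a hedged illustration, not Amgen's pipeline: the vault domain, API version, credentials, and the VQL object and field names are all placeholders.

```python
# Minimal sketch of staging Vault records for a downstream pipeline; the
# domain, API version, credentials, and query are illustrative placeholders.
import csv
import requests

BASE = "https://example.veevavault.com/api/v24.1"  # placeholder vault + version

# Authenticate; a Vault-style API returns a session ID for subsequent calls.
auth = requests.post(f"{BASE}/auth",
                     data={"username": "svc_user", "password": "********"})
session_id = auth.json()["sessionId"]
headers = {"Authorization": session_id, "Accept": "application/json"}

# Query documents (object and field names are hypothetical).
resp = requests.get(f"{BASE}/query", headers=headers,
                    params={"q": "SELECT id, name__v, status__v FROM documents"})
resp.raise_for_status()
rows = resp.json()["data"]

# Land the result as CSV for a downstream ingest (e.g., into Databricks).
with open("vault_documents_stage.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name__v", "status__v"])
    writer.writeheader()
    writer.writerows({k: r.get(k) for k in writer.fieldnames} for r in rows)
print(f"staged {len(rows)} records")
```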
Posted 2 weeks ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Overview
At ReKnew, our mission is to empower enterprises to revitalize their core business and organization by positioning themselves for the new world of AI. We're a startup founded by seasoned practitioners, supported by expert advisors, and built on decades of experience in enterprise technology, data, analytics, AI, digital, and automation across diverse industries. We're actively seeking top talent to join us in this mission.

Job Description
We're seeking a highly skilled Senior Data Engineer with deep expertise in AWS-based data solutions. In this role, you'll be responsible for designing, building, and optimizing large-scale data pipelines and frameworks that power analytics and machine learning workloads. You'll lead the modernization of legacy systems by migrating workloads from platforms like Teradata to AWS-native big data environments such as EMR, Glue, and Redshift. A strong emphasis is placed on reusability, automation, observability, performance optimization, and managing schema evolution in dynamic data lake environments.

Key Responsibilities
- Migration & Modernization: Build reusable accelerators and frameworks to migrate data from legacy platforms (e.g., Teradata) to AWS-native architectures such as EMR, Glue, and Redshift.
- Data Pipeline Development: Design and implement robust ETL/ELT pipelines using Python, PySpark, and SQL on AWS big data platforms.
- Code Quality & Testing: Drive development standards with test-driven development (TDD), unit testing, and automated validation of data pipelines.
- Monitoring & Observability: Build operational tooling and dashboards for pipeline observability, including tracking key metrics like latency, throughput, data quality, and cost.
- Cloud-Native Engineering: Architect scalable, secure data workflows using AWS services such as Glue, Lambda, Step Functions, S3, and Athena.
- Collaboration: Partner with internal product teams, data scientists, and external stakeholders to clarify requirements and drive solutions aligned with business goals.
- Architecture & Integration: Work with enterprise architects to evolve data architecture while securely integrating AWS systems with on-premise or hybrid environments. This includes strategic adoption of data lake table formats like Delta Lake, Apache Iceberg, or Apache Hudi for schema management and ACID capabilities (a brief schema-evolution sketch follows this posting).
- ML Support & Experimentation: Enable data scientists to operationalize machine learning models by providing clean, well-governed datasets at scale.
- Documentation & Enablement: Document solutions thoroughly and provide technical guidance and knowledge sharing to internal engineering teams.
- Team Training & Mentoring: Act as a subject matter expert, providing guidance, training, and mentorship to junior and mid-level data engineers, fostering a culture of continuous learning and best practices within the team.

Qualifications
- Experience: 7+ years in technology roles, with at least 5+ years specifically in data engineering, software development, and distributed systems.
- Programming: Expert in Python and PySpark (Scala is a plus). Deep understanding of software engineering best practices.
- AWS Expertise: 3+ years of hands-on experience in the AWS data ecosystem. Proficient in AWS Glue, S3, Redshift, EMR, Athena, Step Functions, and Lambda. Experience with AWS Lake Formation and data cataloging tools is a plus. AWS Data Analytics or Solutions Architect certification is a strong plus.
- Big Data & MPP Systems: Strong grasp of distributed data processing. Experience with MPP data warehouses like Redshift, Snowflake, or Databricks on AWS. Hands-on experience with Delta Lake, Apache Iceberg, or Apache Hudi for building reliable data lakes with schema evolution, ACID transactions, and time travel capabilities.
- DevOps & Tooling: Experience with version control (e.g., GitHub/CodeCommit) and CI/CD tools (e.g., CodePipeline, Jenkins). Familiarity with containerization and deployment in Kubernetes or ECS.
- Data Quality & Governance: Experience with data profiling, data lineage, and relevant tools. Understanding of metadata management and data security best practices.
- Bonus: Experience supporting machine learning or data science workflows. Familiarity with BI tools such as QuickSight, PowerBI, or Tableau.
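The schema-evolution point above is easy to see with Delta Lake: appending a batch whose schema has gained a column normally fails, but the `mergeSchema` option lets the table evolve in place. A minimal sketch, assuming Spark with the delta-spark package installed; paths and columns are illustrative placeholders.

```python
# Minimal Delta Lake schema-evolution sketch; assumes the delta-spark
# package is on the classpath. Paths and columns are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-schema-evolution")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

path = "/tmp/demo/orders_delta"  # would be an s3:// path in practice

# Initial load: two columns.
spark.createDataFrame(
    [(1, "widget"), (2, "gadget")], ["order_id", "product"]
).write.format("delta").mode("overwrite").save(path)

# A later batch arrives with an extra column; mergeSchema lets the table
# evolve instead of failing on the mismatch.
spark.createDataFrame(
    [(3, "sprocket", 9.99)], ["order_id", "product", "price"]
).write.format("delta").mode("append").option("mergeSchema", "true").save(path)

# Existing rows surface the new column as null.
spark.read.format("delta").load(path).show()
```

Iceberg and Hudi offer comparable controls; the design question in a migration is whether evolution should be automatic (as here) or gated by an explicit schema-change review.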
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What You Will Do
About the role: You will play a key role as part of the Operations Generative AI (GenAI) Product team to deliver cutting-edge, innovative GenAI solutions across various Process Development functions (Drug Substance, Drug Product, Attribute Sciences & Combination Products) within Operations.

Role Description: The Sr Data Engineer for GenAI solutions across these Process Development functions is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions: working with large datasets, developing reports, supporting and implementing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member that assists in the design and development of the data pipeline.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Develop solutions for handling unstructured data in AI pipelines.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to standard processes for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help to improve ETL platform performance.
- Participate in sprint planning meetings and provide estimations on technical implementation.
- Collaborate and communicate effectively with product teams.

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree with 4-6 years of experience in Computer Science, IT or related field OR
- Bachelor's degree with 6-8 years of experience in Computer Science, IT or related field OR
- Diploma with 10-12 years of experience in Computer Science, IT or related field

Must-Have Skills:
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), including workflow orchestration and performance tuning of big data processing.
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
- Excellent problem-solving skills and the ability to work with large, complex datasets.
- Strong understanding of data governance frameworks, tools, and standard methodologies.
- Experience implementing Retrieval-Augmented Generation (RAG) pipelines, integrating retrieval mechanisms with language models (a minimal retrieval sketch follows this posting).
- Strong programming skills in Python and familiarity with deep learning frameworks such as PyTorch or TensorFlow.
- Experience in processing and leveraging unstructured data for GenAI applications.

Preferred Qualifications:
- Experience with ETL tools such as Apache Spark, and various Python packages for data processing and machine learning model development.
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Knowledge of Python/R and Databricks.
- Knowledge of vector databases, including implementation and optimization.

Professional Certifications:
- Certified Data Engineer / Data Analyst (preferred, on Databricks or cloud environments)
- Machine Learning Certification (preferred, on Databricks or cloud environments)
- SAFe for Teams certification (preferred)

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Equal opportunity statement
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
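The retrieval half of a RAG pipeline reduces to: embed documents, embed the query, rank by similarity, and pass the top hits to the model as prompt context. Below is a deliberately tiny sketch using a toy bag-of-words embedding and cosine similarity; a production pipeline would substitute a real embedding model and a vector database, and the example documents are invented.

```python
# Toy retrieval sketch for a RAG pipeline; embeddings here are bag-of-words
# counts, standing in for a real embedding model + vector database.
import numpy as np
from collections import Counter

DOCS = [
    "Drug product stability studies run at multiple temperature conditions.",
    "Attribute sciences characterizes critical quality attributes of molecules.",
    "Combination products pair a device with a drug substance.",
]

def embed(text: str, vocab: list[str]) -> np.ndarray:
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

vocab = sorted({w for d in DOCS for w in d.lower().split()})
matrix = np.stack([embed(d, vocab) for d in DOCS])

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query, vocab)
    # Cosine similarity of the query against every document vector.
    sims = matrix @ q / (np.linalg.norm(matrix, axis=1) * (np.linalg.norm(q) or 1.0))
    return [DOCS[i] for i in np.argsort(sims)[::-1][:k]]

# The retrieved passages would be inserted into the LLM prompt as context.
for hit in retrieve("quality attributes of a drug product"):
    print(hit)
```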
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What You Will Do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and driving data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member that assists in the design and development of the data pipeline.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems (a minimal PySpark ETL sketch follows this posting).
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate and communicate effectively with product teams.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to standard methodologies for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help to improve ETL platform performance.
- Participate in sprint planning meetings and provide estimations on technical implementation.

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Doctorate degree OR
- Master's degree and 4 to 6 years of Computer Science, IT or related field experience OR
- Bachelor's degree and 6 to 8 years of Computer Science, IT or related field experience OR
- Diploma and 10 to 12 years of Computer Science, IT or related field experience

Preferred Qualifications:

Functional Skills:

Must-Have Skills:
- Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience using Databricks for building ETL pipelines and handling big data processing.
- Experience with data warehousing platforms such as Amazon Redshift or Snowflake.
- Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL).
- Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets.
- Experience with software engineering best practices, including version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps.
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).

Good-to-Have Skills:
- Experience with cloud platforms such as AWS, particularly data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena).
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Understanding of machine learning pipelines and frameworks for ML/AI models.

Professional Certifications:
- AWS Certified Data Engineer (preferred)
- Databricks Certified (preferred)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated ability to function in a team setting
- Demonstrated presentation skills

Equal opportunity statement
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
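For concreteness, here is a minimal PySpark ETL sketch in the spirit of the responsibilities above. It is illustrative only: the input file, column names (including an assumed `order_date` partition column), and output location are placeholders (an S3 path in practice).

```python
# Minimal extract-transform-load sketch; paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV with a header row.
raw = spark.read.option("header", "true").csv("/tmp/raw/orders.csv")

# Transform: cast types, drop rows missing key fields, derive revenue.
clean = (
    raw.withColumn("quantity", F.col("quantity").cast("int"))
       .withColumn("unit_price", F.col("unit_price").cast("double"))
       .dropna(subset=["order_id", "quantity", "unit_price"])
       .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
)

# Load: write partitioned Parquet for downstream consumers.
clean.write.mode("overwrite").partitionBy("order_date").parquet("/tmp/curated/orders")
```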
Posted 2 weeks ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

About The Role
Role Description: The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member that assists in the design and development of the data pipeline.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems (a small data-quality gate sketch follows this posting).
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate and communicate effectively with product teams.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help to improve ETL platform performance.
- Participate in sprint planning meetings and provide estimations on technical implementation.

Basic Qualifications and Experience:
- Master's degree and 1 to 3 years of Computer Science, IT or related field experience OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT or related field experience OR
- Diploma and 7 to 9 years of Computer Science, IT or related field experience

Functional Skills:

Must-Have Skills:
- Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience using Databricks for building ETL pipelines and handling big data processing.
- Experience with data warehousing platforms such as Amazon Redshift or Snowflake.
- Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL).
- Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets.
- Experience with software engineering best practices, including version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps.

Good-to-Have Skills:
- Experience with cloud platforms such as AWS, particularly data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena).
- Experience with the Anaplan platform, including building, managing, and optimizing models and workflows, including scalable data integrations.
- Understanding of machine learning pipelines and frameworks for ML/AI models.

Professional Certifications:
- AWS Certified Data Engineer (preferred)
- Databricks Certified (preferred)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated ability to function in a team setting
- Demonstrated presentation skills

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
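One common pattern behind "ensure data quality" above is a validation gate: check a staged batch against explicit rules and refuse to load it if any rule fails. A minimal pandas sketch, assuming a pandas-based staging step; the file names, columns, and thresholds are illustrative placeholders.

```python
# Minimal data-quality gate sketch; names, columns, and the 5% null
# threshold are illustrative placeholders.
import pandas as pd

REQUIRED = ["record_id", "visit_date", "dose_mg"]

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of violations; an empty list means the batch may load."""
    problems = [f"missing column: {c}" for c in REQUIRED if c not in df.columns]
    if not problems:
        if df["record_id"].duplicated().any():
            problems.append("duplicate record_id values")
        if df["dose_mg"].lt(0).any():
            problems.append("negative dose_mg values")
        null_rate = df[REQUIRED].isna().mean().max()
        if null_rate > 0.05:  # fail the batch if any key column is >5% null
            problems.append(f"null rate {null_rate:.1%} exceeds 5% threshold")
    return problems

batch = pd.read_csv("staging_batch.csv", parse_dates=["visit_date"])
violations = validate(batch)
if violations:
    raise ValueError("quality gate failed: " + "; ".join(violations))
batch.to_parquet("curated/batch.parquet")  # load only after the gate passes
```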
Posted 2 weeks ago
1.0 - 3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What You Will Do
Role Description: The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing data pipelines, supporting and executing back-end web development, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member that assists in the design and development of data pipelines used for reports and/or back-end web application development (a small serving sketch follows this posting).
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate and communicate effectively with product teams.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help to improve ETL platform performance.
- Participate in sprint planning meetings and provide estimations on technical implementation.

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT or related field experience OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT or related field experience OR
- Diploma and 7 to 9 years of Computer Science, IT or related field experience

Must-Have Skills:
- Hands-on experience with data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
- Proficiency in workflow orchestration and performance tuning of big data processing.
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Strong understanding of AWS services.
- Excellent problem-solving skills and the ability to work with large, complex datasets.
- Strong understanding of data governance frameworks, tools, and best practices.

Preferred Qualifications:
- Data engineering experience in the biotechnology or pharma industry.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Professional Certifications:
- Certified Data Engineer (preferred, on Databricks or cloud environments)
- Certified SAFe Agilist (preferred)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated ability to function in a team setting
- Demonstrated presentation skills

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Equal opportunity statement
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
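Where this role differs from the others is the back-end web hand-off: a pipeline's curated output gets exposed to an application. A minimal sketch, assuming FastAPI and pyarrow are installed; the Parquet file, its columns, and the route are illustrative placeholders.

```python
# Minimal sketch of serving a pipeline's curated output to a web app;
# the file, columns, and route are illustrative placeholders.
import pandas as pd
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Load once at startup; a larger service would query a warehouse instead.
reports = pd.read_parquet("curated/daily_report.parquet")

@app.get("/reports/{region}")
def report_for_region(region: str) -> list[dict]:
    rows = reports[reports["region"] == region]
    if rows.empty:
        raise HTTPException(status_code=404, detail=f"no report for {region}")
    return rows.to_dict(orient="records")

# Run with: uvicorn app:app --reload   (then GET /reports/EMEA)
```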
Posted 2 weeks ago
3.0 years
0 Lacs
New Delhi, Delhi, India
Remote
At Mindrift, innovation meets opportunity. We believe in using the power of collective intelligence to ethically shape the future of AI.

What We Do
The Mindrift platform connects specialists with AI projects from major tech innovators. Our mission is to unlock the potential of Generative AI by tapping into real-world expertise from across the globe.

About The Role
GenAI models are improving very quickly, and one of our goals is to make them capable of addressing specialized questions and achieving complex reasoning skills. If you join the platform as an AI Tutor in Coding, you'll have the opportunity to collaborate on these projects. Although every project is unique, you might typically:
- Analyze and understand existing code in Python or C/C++
- Migrate logic to idiomatic, safe Rust while preserving functionality
- Adapt or port the test suite and ensure behavioral equivalence
- Document migration steps and technical decisions

How To Get Started
Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring technology benefits everyone.

Requirements
- You have a Bachelor's or Master's degree in Software Development, Computer Science, or a related field.
- You have at least 3 years of professional experience with Python and 1+ year of hands-on experience with Rust.
- You are experienced with PyO3/maturin for Python-Rust interoperability, as well as with automated testing (unit/integration) and benchmarking (a small sketch of this hand-off follows this posting).
- You bring knowledge of Docker, Kubernetes, and CI/CD for hybrid Python-Rust apps.
- You demonstrate a solid understanding of systems programming (memory management, concurrency).
- Prompt engineering experience is a strong plus.
- Your level of English is advanced (C1) or above.
- You are ready to learn new methods, able to switch between tasks and topics quickly, and sometimes work with challenging, complex guidelines.
- Our freelance role is fully remote, so you just need a laptop, an internet connection, time available, and enthusiasm to take on a challenge.

Benefits
Why this freelance opportunity might be a great fit for you:
- Take part in a part-time, remote, freelance project that fits around your primary professional or academic commitments.
- Work on advanced AI projects and gain valuable experience that enhances your portfolio.
- Influence how future AI models understand and communicate in your field of expertise.
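On the Python side, a PyO3/maturin workflow usually looks like this: prefer the Rust extension when it is installed, fall back to pure Python otherwise, and compare both results and timing. A minimal sketch; the module name `fib_rs` stands in for a hypothetical maturin-built extension (installed via `maturin develop`), not a real published package.

```python
# Minimal benchmark/equivalence harness for a hypothetical maturin-built
# extension `fib_rs`; falls back to pure Python when it is absent.
import time

def fib_py(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

try:
    from fib_rs import fib as fib_rust  # hypothetical Rust extension
except ImportError:
    fib_rust = None

def bench(fn, n=90, reps=100_000):
    start = time.perf_counter()
    for _ in range(reps):
        result = fn(n)
    return result, (time.perf_counter() - start) / reps

if __name__ == "__main__":
    py_val, py_t = bench(fib_py)
    print(f"python: {py_t * 1e6:.2f} us/call")
    if fib_rust is not None:
        rs_val, rs_t = bench(fib_rust)
        assert rs_val == py_val, "behavioral divergence between ports"
        print(f"rust:   {rs_t * 1e6:.2f} us/call ({py_t / rs_t:.1f}x speedup)")
```

The assert doubles as the equivalence check the role description asks for; in a real project it would live in the shared unit/integration test suite rather than the benchmark script.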
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
At Mindrift, innovation meets opportunity. We believe in using the power of collective intelligence to ethically shape the future of AI.

What We Do
The Mindrift platform connects specialists with AI projects from major tech innovators. Our mission is to unlock the potential of Generative AI by tapping into real-world expertise from across the globe.

About The Role
GenAI models are improving very quickly, and one of our goals is to make them capable of addressing specialized questions and achieving complex reasoning skills. If you join the platform as an AI Tutor in Coding, you'll have the opportunity to collaborate on these projects. Although every project is unique, you might typically:
- Analyze and understand existing code in Python or C/C++
- Migrate logic to idiomatic, safe Rust while preserving functionality
- Adapt or port the test suite and ensure behavioral equivalence
- Document migration steps and technical decisions

How To Get Started
Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring technology benefits everyone.

Requirements
- You have a Bachelor's or Master's degree in Software Development, Computer Science, or a related field.
- You have at least 3 years of professional experience with Python and 1+ year of hands-on experience with Rust.
- You are experienced with PyO3/maturin for Python-Rust interoperability, as well as with automated testing (unit/integration) and benchmarking.
- You bring knowledge of Docker, Kubernetes, and CI/CD for hybrid Python-Rust apps.
- You demonstrate a solid understanding of systems programming (memory management, concurrency).
- Prompt engineering experience is a strong plus.
- Your level of English is advanced (C1) or above.
- You are ready to learn new methods, able to switch between tasks and topics quickly, and sometimes work with challenging, complex guidelines.
- Our freelance role is fully remote, so you just need a laptop, an internet connection, time available, and enthusiasm to take on a challenge.

Benefits
Why this freelance opportunity might be a great fit for you:
- Take part in a part-time, remote, freelance project that fits around your primary professional or academic commitments.
- Work on advanced AI projects and gain valuable experience that enhances your portfolio.
- Influence how future AI models understand and communicate in your field of expertise.
Posted 2 weeks ago
5.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Greetings from TCS!

TCS is conducting an in-person interview drive for the Bangalore location.

Date of in-person interview: 07th June 2025
Reporting Time: 9:00 AM
In-person interview location: TCS - Bangalore (Bhuwalka)
Role: React JS/Next JS Developer
Exp Range: 5-10 Years
Venue: TATA Consultancy Services Ltd, Brigade Bhuwalka Icon, No-1 ITPL Main Road, Pattandur Agrahara, Whitefield, Bengaluru, Karnataka 560066

JD

Must Have:
- Good exposure and hands-on experience in React JS/Gatsby/Next JS.
- Able to work independently in building website components in a multi-vendor, multi-platform environment.
- Experience in web development with Content Management Systems using a headless architecture.
- Exposure to the Gatsby framework/Storybook for building UI components for website assembly.
- Agile project delivery experience, good communication skills, and client management.

Good to have:
- Knowledge of Drupal and Acquia Cloud setup.
- Knowledge of Gatsby Cloud website creation and rollout in a factory model.
- Good understanding of GraphQL, TypeScript, ES6, JavaScript.
- Exposure to Contentful CMS systems.

Responsibility:
- Senior developer who can independently contribute towards the development of frontend UI components in the Gatsby/Storybook framework.
- The project is a large-scale enhancement of an existing multi-channel platform into an omni-channel model. You will need to understand the existing setup, ideate the approach to migrate to a headless architecture, and build the UI components in React/Gatsby/Storybook/Next JS for new site assembly (a brief sketch of the headless content flow follows this posting).
- The associate will work on a multi-vendor scrum team in which the team jointly grooms the user stories and delivers the sprint goals.
- Documentation wherever necessary; derive best practices in the process.
- Define the interface design for external connections (example: user authentication).
- Lead proofs of concept on new technical requirements.
- The customer is based out of Germany and the US; flexibility to work in the US time zone on a need basis.

NB: No ex-TCSers will be considered. Only candidates who can attend the in-person drive at the TCS - Bangalore (Bhuwalka) office should apply. Only candidates with relevant experience will be considered for the role.
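"Headless" here means the CMS only exposes content over an API (typically GraphQL), and the decoupled frontend (Gatsby/Next JS) pulls and renders it, often at build time. A minimal sketch of that content fetch, assuming a CMS with a GraphQL endpoint; the URL, token, and schema are illustrative placeholders (a Gatsby build performs the equivalent query through its own GraphQL data layer).

```python
# Minimal headless-CMS content fetch; endpoint, token, and schema are
# illustrative placeholders.
import requests

ENDPOINT = "https://cms.example.com/graphql"  # placeholder headless CMS
QUERY = """
query {
  pages {
    slug
    title
    heroImageUrl
  }
}
"""

resp = requests.post(
    ENDPOINT,
    json={"query": QUERY},
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    timeout=30,
)
resp.raise_for_status()

# A decoupled frontend renders these records into UI components.
for page in resp.json()["data"]["pages"]:
    print(f"{page['slug']}: {page['title']}")
```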
Posted 2 weeks ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
At Mindrift, innovation meets opportunity. We believe in using the power of collective intelligence to ethically shape the future of AI. What We Do The Mindrift platform connects specialists with AI projects from major tech innovators. Our mission is to unlock the potential of Generative AI by tapping into real-world expertise from across the globe. About The Role GenAI models are improving very quickly, and one of our goals is to make them capable of addressing specialized questions and achieving complex reasoning skills. If you join the platform as an AI Tutor in Coding, you'll have the opportunity to collaborate on these projects. Although every project is unique, you might typically: Analyze and understand existing code in Python or C/C++ Migrate logic to idiomatic, safe Rust while preserving functionality Adapt or port the test suite and ensure behavioral equivalence Document migration steps and technical decisions How To Get Started Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring technology benefits everyone. Requirements You have a Bachelor's or Master's degree in Software Development, Computer Science, or other related fields. You have at least 3 years of professional experience with Python and 1+ year of hands-on experience with Rust. You are experienced with PyO3/maturin for Python-Rust interoperability, as well as with automated testing (unit/integration) and benchmarking. You bring knowledge of Docker, Kubernetes, and CI/CD for hybrid Python-Rust apps. You demonstrate a solid understanding of systems programming (memory management, concurrency). Prompt engineering experience is a strong plus. Your level of English is advanced (C1) or above. You are ready to learn new methods, able to switch between tasks and topics quickly, and sometimes work with challenging, complex guidelines. Our freelance role is fully remote, so you just need a laptop, an internet connection, available time, and enthusiasm to take on a challenge. Benefits Why might this freelance opportunity be a great fit for you? Take part in a part-time, remote, freelance project that fits around your primary professional or academic commitments. Work on advanced AI projects and gain valuable experience that enhances your portfolio. Influence how future AI models understand and communicate in your field of expertise.
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Details: Job Description Stefanini Group is a multinational company with a global presence in 41 countries and 44 languages, specializing in technological solutions. We believe in digital innovation and agility to transform businesses for a better future. Our diverse portfolio includes consulting, marketing, mobility, AI services, service desk, field service, and outsourcing solutions. We are seeking a skilled VMware Consultant to configure VMware environments on Google Cloud VMware Engine (GCVE) and migrate applications from the customer's on-premises environment to GCVE. The ideal candidate will have substantial experience in VMware virtualization technologies and a strong background in IT infrastructure and cloud (ideally Google Cloud Platform). You will be responsible for providing technical expertise, performing hands-on configuration and VM migration, troubleshooting and problem management, and delivering high-quality solutions that align with our clients' business needs. What You Will Be Responsible For Infrastructure Design and Implementation: Design, implement, and manage VMware vSphere, vCenter, and other VMware products and solutions to support virtualization requirements. Assess existing infrastructure, recommend best practices, and ensure that VMware solutions align with business goals. Project Planning and Execution: Participate in client discussions to gather requirements, create design documents, and assist in the development of project plans. Manage project timelines and deliverables, coordinating with client teams to ensure a smooth implementation process. System Monitoring and Optimization: Monitor virtualized environments to ensure optimal performance, availability, and security. Conduct regular health checks, performance tuning, and capacity planning of VMware solutions. Develop VMware runbooks for ongoing VMware operations on GCVE. Troubleshooting and Support: Diagnose and resolve VMware-related issues, working with clients to address escalated support cases. Provide expert guidance and support during critical incidents and outages. VMware Migration: Hands-on experience with VMware mass migration, preferably mass migrations from on-premises VMware to GCP or AWS. Demonstrated hands-on technical expertise migrating VMs using the following VMware technologies: vMotion, NSX, NSX-T, HCX, VMware SDDC, vROps. Documentation and Knowledge Sharing: Develop and maintain detailed technical documentation, including design specifications, operating procedures, and troubleshooting guides. Conduct training sessions for client IT staff as needed. Job Requirements
What You Will Bring To The Role Proven track record of enterprise IT consulting experience. Bachelor's degree in Computer Science, Information Technology, or a related field. VMware Certified Professional (VCP) or higher certification (VCAP, VCDX) is highly desirable. 5-8 years of hands-on experience in VMware environments, including: Active Directory, HCX, NSX, NSX-T, SRM, vMotion, vSphere, vCenter, vSAN, vROps, vRLI, VMware SDDC. Strong understanding of IT infrastructure components, including storage, networking, and security as they relate to virtualization. Experience with automation and scripting tools such as PowerCLI, Ansible, or Terraform. Familiarity with backup, recovery, and disaster recovery strategies within VMware environments. Excellent communication and client-facing skills. Proven troubleshooting skills and the ability to handle complex issues under pressure. Knowledge of cloud platforms (AWS, Azure, Google Cloud) and hybrid cloud architectures. Familiarity with Linux/Windows server administration and Active Directory integration with VMware. Migration experience covering HCX, ideally including on-premises-to-cloud migrations bringing production workloads across. Troubleshooting and problem-solving of production incidents together with workload teams. Desirable Skills: Experience deploying and managing public cloud migrations from VMware. Experience working in the financial services sector. Professional and/or Specialty level cloud certifications. Experience in migration of Oracle databases on VMware is highly desirable.
Posted 2 weeks ago
4.0 - 6.0 years
3 - 9 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-215866 LOCATION: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: May. 30, 2025 CATEGORY: Information Systems Join Amgen's Mission of Serving Patients At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Sr. Data Engineer – Veeva Integration Lead What you will do Let's do this. Let's change the world. In this vital role you will be responsible for developing and maintaining the overall data architecture and integration of Amgen's Veeva Vault Platform. This role involves defining the data integration vision, creating roadmaps, and ensuring that IT strategies align with business goals. The role involves working closely with collaborators to understand requirements, develop data integration blueprints, and ensure that solutions are scalable, secure, and aligned with enterprise standards. The role will be involved in defining the Veeva Vault Platform data integration, guiding technology decisions, and ensuring that all implementations adhere to established architectural principles and to Veeva's and the industry's standard methodologies. Collaborate with the broader collaborator community on their data needs, including data quality, data access controls, and compliance with privacy and security regulations. Work with Enterprise MDM and Reference Data to implement standards and data reusability. Contribute to and support consistency in data governance principles. Maintain documentation on data definitions, data flows, common data models, data harmonization, etc. Partner with business teams to identify compliance requirements for data privacy, security, and regulatory policies in the assigned domains. Build strong relationships with key business leads and partners to ensure their needs are met. Be a key team member that assists in the design and development of the data pipeline for the Veeva Vault platform. Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks. Identify and resolve complex data-related challenges. Explore new tools and technologies that will help to improve ETL platform performance. Participate in sprint planning meetings and provide estimations on technical implementation. Work with data engineers on data quality assessment, data cleansing, and data analytics. Share and discuss findings with team members practicing the SAFe Agile delivery model. Automate and optimize the data pipeline and framework for an easier and more cost-effective development process.
Advise and support project teams (project managers, architects, business analysts, and developers) on cloud platforms (AWS, Databricks preferred), tools, technology, and methodology related to designing and building scalable, efficient, and maintainable Data Lake and other Big Data solutions. Experience developing in an Agile development environment, and comfortable with Agile terminology and ceremonies. Stay up to date with the latest data technologies and trends. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Master's degree with 4 - 6 years of experience in Computer Science, IT or related field OR Bachelor's degree with 6 - 8 years of experience in Computer Science, IT or related field OR Diploma with 10 - 12 years of experience in Computer Science, IT or related field. Must-Have Skills: Solid understanding of architecting Veeva Vault Platforms/Products. Strong knowledge of Data Lake technologies such as Databricks. Experience in Mulesoft, Python scripting, and REST API script development. Extensive knowledge of enterprise architecture frameworks, technologies, and methodologies. Experience with system integration and IT infrastructure. Experience with data, change, and technology governance processes at the platform level. Experience working in agile methodology, including Product Teams and Product Development models. Proficiency in designing scalable, secure, and cost-effective solutions. Strong collaborator and team management skills. Ability to lead and guide multiple teams to meet business needs and goals. Good-to-Have Skills: Good knowledge of the global pharmaceutical industry. Understanding of GxP processes. Strong solution design and problem-solving skills. Solid understanding of technology, function, or platform. Experience in developing differentiated and deliverable solutions. Ability to analyze client requirements and translate them into solutions. Willingness to work late hours. Professional Certifications: Veeva Vault Platform Administrator (mandatory). SAFe – DevOps Practitioner (preferred). SAFe for Teams (preferred). Soft Skills: Excellent critical-thinking and problem-solving skills. Strong communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated awareness of presentation skills. Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Make a lasting impact with the Amgen team: careers.amgen.com. As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
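For context on the Python and REST API scripting this role calls for, below is a hedged sketch of authenticating to a Veeva Vault instance and running a VQL query with the requests library. The vault domain, API version, credentials, and query are placeholder assumptions; consult Veeva's Vault API documentation for the exact contract:

```python
# Hedged sketch: authenticate to a Veeva Vault instance and run a VQL query.
# Domain, API version, credentials, and query below are all placeholders.
import requests

VAULT = "https://myvault.veevavault.com"  # placeholder vault domain
API = f"{VAULT}/api/v24.1"                # API version is an assumption

def get_session_id(username: str, password: str) -> str:
    # Vault's auth endpoint returns a session ID used on later calls.
    resp = requests.post(f"{API}/auth", data={"username": username, "password": password})
    resp.raise_for_status()
    return resp.json()["sessionId"]

def run_vql(session_id: str, vql: str) -> list[dict]:
    # Submit a VQL query; the session ID goes in the Authorization header.
    resp = requests.post(
        f"{API}/query",
        headers={"Authorization": session_id, "Accept": "application/json"},
        data={"q": vql},
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

if __name__ == "__main__":
    sid = get_session_id("svc_user", "secret")  # placeholder credentials
    for record in run_vql(sid, "SELECT id, name__v FROM documents"):
        print(record)
```

In practice such scripts sit behind secret management, retries, and logging; the sketch only shows the request shape.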
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-215967 LOCATION: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: May. 30, 2025 CATEGORY: Information Systems Join Amgen's Mission of Serving Patients At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. What you will do About the role You will play a key role as part of the Operations Generative AI (GenAI) Product team to deliver cutting-edge, innovative GenAI solutions across various Process Development functions (Drug Substance, Drug Product, Attribute Sciences & Combination Products) in Operations. Role Description: The Sr Data Engineer for GenAI solutions across various Process Development functions (Drug Substance, Drug Product, Attribute Sciences & Combination Products) in Operations is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions, working with large datasets, developing reports, supporting and implementing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing. Be a key team member that assists in the design and development of the data pipeline. Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks. Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs. Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Develop solutions for handling unstructured data in AI pipelines. Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions. Identify and resolve complex data-related challenges. Adhere to standard processes for coding, testing, and designing reusable code/components. Explore new tools and technologies that will help to improve ETL platform performance.
Participate in sprint planning meetings and provide estimations on technical implementation. Collaborate and communicate effectively with product teams. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Master's degree with 4 - 6 years of experience in Computer Science, IT or related field OR Bachelor's degree with 6 - 8 years of experience in Computer Science, IT or related field OR Diploma with 10 - 12 years of experience in Computer Science, IT or related field. Must-Have Skills: Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning on big data processing. Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools. Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Excellent problem-solving skills and the ability to work with large, complex datasets. Strong understanding of data governance frameworks, tools, and standard methodologies. Experience in implementing Retrieval-Augmented Generation (RAG) pipelines, integrating retrieval mechanisms with language models. Strong programming skills in Python and familiarity with deep learning frameworks such as PyTorch or TensorFlow. Experience in processing and leveraging unstructured data for GenAI applications. Preferred Qualifications: Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development. Strong understanding of data modeling, data warehousing, and data integration concepts. Knowledge of Python/R, Databricks. Knowledge of vector databases, including implementation and optimization. Professional Certifications: Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments). Machine Learning Certification (preferred on Databricks or cloud environments). SAFe for Teams certification (preferred). Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Equal opportunity statement Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation. Make a lasting impact with the Amgen team:
careers.amgen.com. As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
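To make the ETL and data-quality responsibilities this role describes concrete, here is a minimal PySpark sketch of the extract-validate-transform-load pattern such pipelines often follow. All paths, column names, and the quality rule are illustrative assumptions, not details from this posting:

```python
# Minimal PySpark ETL sketch: extract, apply a data-quality rule, transform, load.
# All paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("genai-etl-sketch").getOrCreate()

# Extract: read raw records from a landing zone.
raw = spark.read.json("s3://landing-zone/process-dev/raw/")  # placeholder path

# Data quality: quarantine rows that fail a simple completeness rule.
valid = raw.filter(F.col("batch_id").isNotNull())
rejected = raw.filter(F.col("batch_id").isNull())
rejected.write.mode("append").parquet("s3://landing-zone/quarantine/")

# Transform: stamp processing metadata used for partitioning downstream.
curated = (
    valid
    .withColumn("processed_at", F.current_timestamp())
    .withColumn("load_date", F.current_date())
)

# Load: write partitioned, analytics-ready data.
curated.write.mode("append").partitionBy("load_date").parquet("s3://curated/process-dev/")
```

Real pipelines add schema enforcement, idempotent re-runs, and orchestration, but the skeleton stays the same.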
Posted 2 weeks ago
0 years
4 - 10 Lacs
Hyderābād
Remote
Hyderabad, India Chennai, India Job ID: R-1075600 Apply prior to the end date: June 25th, 2025 When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What you'll be doing… As a Solution Architect with in-depth knowledge of web and mobile application development, you will be expected to architect solutions for business projects, work with enterprise architects to align application and system architecture to enterprise strategy, and deliver individually and/or with the help of a team. You need to have a passion for learning and for educating fellow associates/subordinates, guiding them to follow best practices. You will be the principal consultant to the team that develops, maintains, and enhances service delivery and management for VBG on ServiceNow. Architecting/developing solutions for new user-facing and agent-facing experiences and features using Service Portal, Mobile, Virtual Agent, and other self-service channels in ServiceNow. Strong knowledge of CSM, ITSM, CMDB, CSDM, ITOM, and SecOps. Engaging with Enterprise Architects on HLAs and defining new solutions that adhere to VBG's NorthStar strategy. Design and develop workflows, sub-flows, and business rules to orchestrate the various service assurance flows. Planning and overseeing releases and deployment towards the various MVPs that are identified. Monitoring operational metrics and taking actions to keep availability and reliability within the SLAs. Optimizing flows and components for maximum performance and efficiency. Understand the VBG service delivery ecosystem and identify how to migrate all applicable capabilities into ServiceNow efficiently. Guiding the team on best practices for efficient and streamlined delivery of software to production. Guiding teams on maintaining the security posture and code quality of applications, keeping tech debt in check. Identifying chronic production issues and customer pain points by evaluating feedback, and monitoring the NPS to maintain it above the required threshold. Working with Quality Assurance, UAT, and Production Support teams to support releases, troubleshoot progression/regression issues, perform integration and E2E testing, and implement deliverables as per the targeted timelines. Continuously ramp up on domain knowledge in eCommerce, Sales, Self-service, Billing, Reporting, etc., and familiarize yourself not only with the VBG ecosystem but with other areas of Verizon's business as well. Work with infrastructure teams to implement DevOps capabilities that help streamline the CI/CD process. Leverage innovative technologies to build proofs of concept that help improve customer experiences, reduce pain points in the current experience, and delight customers. Where you’ll be working… In this hybrid role, you will have a defined work location that includes work from home and assigned office days set by your manager. What we're looking for… You view technology through a lens of making things better and more effective.
Understanding and building continual improvements to the digital value chain is something you thrive on. You enjoy the process of solving complex issues while empowering the team around you to do the same. People count on you to have strong domain experience in eCommerce application tools, digital self-service, billing, reporting, digital delivery methods and all aspects of production operational excellence. You’ll need to have: Bachelor’s degree or four or more years of work experience. Six or more years of relevant work experience. Experience developing and optimizing server architectures. Experience with ServiceNow, especially the ITSM and TSM modules, Virtual Agent, Service Portal, and mobile app development. Experience with the ServiceNow IT Operations workspace, data segregation, Employee Center, etc. Experience with the ELK stack, New Relic, IBM MQ, RabbitMQ, and Kafka. Excellent database skills, proficient in backends like Oracle, with strong working knowledge of SQL and PL/SQL. Knowledge and experience with SRE practice. Knowledge and experience with DevOps and automation. Even better if you have one or more of the following: A Master's degree. Experience with ServiceNow cognitive capabilities and the use of AI/ML. Experience with ServiceNow business intelligence and reporting features like Performance Analytics and dashboarding. Experience with systems design and project management. Experience with RPA and bots, and Integration Hub. Certifications in SRE, cloud technologies, DevOps, Agile. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
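As background on the kind of ServiceNow integration and monitoring work described in this role, below is a minimal sketch that queries open incidents through ServiceNow's standard Table API from Python. The instance URL, credentials, and fields are placeholders, and a production integration would typically use OAuth rather than basic auth:

```python
# Hedged sketch: query open incidents via the ServiceNow Table API.
# Instance, credentials, and query fields are placeholders.
import requests

INSTANCE = "https://example.service-now.com"  # placeholder instance

def fetch_open_incidents(user: str, password: str, limit: int = 10) -> list[dict]:
    # The Table API exposes records of a table (here: incident) as JSON.
    resp = requests.get(
        f"{INSTANCE}/api/now/table/incident",
        params={"sysparm_query": "active=true", "sysparm_limit": limit},
        auth=(user, password),
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()["result"]

for incident in fetch_open_incidents("svc_user", "secret"):  # placeholder creds
    print(incident["number"], incident.get("short_description", ""))
```

The same endpoint shape supports POST/PATCH for creating and updating records, which is how external systems typically feed service-assurance flows.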
Posted 2 weeks ago
1.0 - 3.0 years
3 - 9 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-216265 LOCATION: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: May. 30, 2025 CATEGORY: Information Systems Join Amgen's Mission of Serving Patients At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Data Engineer What you will do Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and performing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing. Be a crucial team member that assists in the design and development of the data pipeline. Build data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks. Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs. Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Collaborate and communicate effectively with product teams. Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions. Identify and resolve complex data-related challenges. Adhere to best practices for coding, testing, and designing reusable code/components. Explore new tools and technologies that will help to improve ETL platform performance. Participate in sprint planning meetings and provide estimations on technical implementation. What we expect of you We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Master's degree and 1 to 3 years of Computer Science, IT or related field experience OR Bachelor's degree and 3 to 5 years of Computer Science, IT or related field experience OR Diploma and 7 to 9 years of Computer Science, IT or related field experience. Preferred Qualifications: Must-Have Skills: Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning on big data processing. Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools. Excellent problem-solving skills and the ability to work with large, complex datasets. Solid understanding of data governance frameworks, tools, and best practices. Knowledge of data protection regulations and compliance requirements. Good-to-Have Skills: Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development. Good understanding of data modeling, data warehousing, and data integration concepts. Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms. Professional Certifications: Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments). Soft Skills: Excellent critical-thinking and problem-solving skills. Good communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Make a lasting impact with the Amgen team: careers.amgen.com. As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
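As an illustration of the data-quality enforcement this role describes (validation rules, constraints, integrity checks), here is a small pandas sketch; the column names and thresholds are assumptions made for the example, not details from this posting:

```python
# Small data-quality sketch: validate a batch before loading it downstream.
# Column names and rules are illustrative assumptions.
import pandas as pd

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable rule violations for a batch."""
    problems = []
    if df["record_id"].isna().any():
        problems.append("null record_id values found")
    if df["record_id"].duplicated().any():
        problems.append("duplicate record_id values found")
    if not df["amount"].between(0, 1_000_000).all():
        problems.append("amount outside expected range [0, 1,000,000]")
    return problems

# Usage: a toy batch that violates all three rules.
batch = pd.DataFrame(
    {"record_id": [1, 2, 2, None], "amount": [10.0, 250.5, -5.0, 99.0]}
)
for issue in validate_batch(batch):
    print("REJECTED:", issue)
```

In a real pipeline these checks gate the load step, with failing rows quarantined and reported rather than silently dropped.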
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderābād
On-site
Location Hyderabad, Telangana, India Category Technology Careers Job Id JREQ188875 Job Type Full time Hybrid Our Service Management function is transforming into a truly global, data and standards-driven organization, employing best-in-class tools and practices across all disciplines of Technology Operations. This will drive ever-greater stability and consistency of service across the technology estate as we drive towards optimal Customer and Employee experience. About the Role: Provides technical and procedural consistency within a team focused on the implementation, service delivery and support of products, systems & networks. Contributes to initiatives for driving down incident rates and working with first line operations (and operations engineers within specialist functional teams, working with 2nd line operations) to improve service recovery times. Adheres to the implementation process to ensure that all aspects of operability are delivered whilst ensuring that existing service levels are maintained or improved. Contributes to defining operational standards, procedures, and best practice. Operations Engineers within specialist functional teams may have compliance assurance responsibility. Works in close liaison with various operational, project, development and product teams as well as working within the Service Organization to ensure ongoing service delivery and support can be maintained to agreed service levels. About You: 3+ years as a Database Analyst/Engineer. Bachelor's degree preferred in the following subject areas: Computer Science, Information Technology, or related. Strong proficiency in the following skills is a must: T-SQL, integration, high availability solutions, query optimization. AWS experience (specifically designing, implementing, and/or supporting RDS and Aurora). On-call troubleshooting experience. Experience monitoring and acting on critical production issues (Datadog experience would be ideal). History of researching and resolving requests. Knowledge of reporting and query tools and practices. Knowledge of data management and data processing. Knowledge of database systems infrastructure and the underlying hardware. Ability to present ideas in user-friendly language. Self-motivated and directed, with keen attention to detail. Able to prioritize and execute tasks in a high-pressure environment. Experience working in a team-oriented and collaborative environment. Excellent interpersonal, written and oral communication skills. Support 24x7 Operations/Production SQL Server 2016+. Monitor and act on production issues. Enhance the SQL monitoring process. Review technical documentation for accuracy and completeness. Automation through scripting, SSIS package & SSRS report development. Offer expertise and guidance for HA/DR strategies utilizing AGs, Mirroring, Log Shipping, etc. When performance issues arise, determine the most effective way to increase performance, whether through server configuration changes or index/query changes. Support scheduled and hotfix release deployments. Assist in migrating databases and functionality to AWS and support the AWS environment. Work with additional teams to migrate customer databases between environments (on-prem to AWS, non-prod to prod, etc.). What’s in it For You? Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. 
We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law.
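To ground the monitoring and HA/DR duties this role lists, here is a hedged Python sketch that polls Always On availability-group replica health through standard SQL Server DMVs with pyodbc. The connection string is a placeholder and the alert action is a stub:

```python
# Hedged sketch: poll Always On availability-group replica health via DMVs.
# Connection string is a placeholder; wire the alert into your real channel.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=listener.example.internal;DATABASE=master;Trusted_Connection=yes;"
)

QUERY = """
SELECT ar.replica_server_name, hars.role_desc, hars.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states AS hars
JOIN sys.availability_replicas AS ar
  ON hars.replica_id = ar.replica_id;
"""

def check_ag_health() -> None:
    with pyodbc.connect(CONN_STR) as conn:
        for server, role, health in conn.execute(QUERY).fetchall():
            if health != "HEALTHY":
                # Placeholder alert hook: page the on-call DBA here.
                print(f"ALERT: {server} ({role}) reports {health}")

if __name__ == "__main__":
    check_ag_health()
```

A script like this would typically run on a schedule (SQL Agent, cron, or a monitoring platform such as Datadog) rather than ad hoc.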
Posted 2 weeks ago
5.0 years
0 Lacs
Vellore, Tamil Nadu, India
Remote
Experience: 5.00+ years Salary: Confidential (based on experience) Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full-time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Forbes Advisor) What do you need for this opportunity? Must have skills required: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOPs, Monitoring tools, Prometheus, ETL tools, Data warehouse, Pandas, Pyspark, AWS Lambda Forbes Advisor is Looking for: Job Description: Data Research - Database Engineer Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. Position Overview At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel. The Data Research Engineering Team is a brand-new team with the purpose of managing data from acquisition to presentation, collaborating with other teams while also operating independently. Their responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. They play a crucial role in enabling data-driven decision-making and meeting the organization's data needs. A typical day in the life of a Database Engineer/Developer will involve designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role. Responsibilities: Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely. Work with databases of varying scales, including small-scale databases, and databases involving big data processing. Work on data security and compliance, by implementing access controls, encryption, and compliance standards.
Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture. Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery. Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency. Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms. Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity. Monitor database health and identify and resolve issues. Collaborate with the full-stack web developer in the team to support the implementation of efficient data access and retrieval mechanisms. Implement data security measures to protect sensitive information and comply with relevant regulations. Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows. Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices. Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines. Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis. Use Python for tasks such as data manipulation, automation, and scripting. Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines. Assume accountability for achieving development milestones. Prioritize tasks to ensure timely delivery, in a fast-paced environment with rapidly changing priorities. Collaborate with and assist fellow members of the Data Research Engineering Team as required. Perform tasks with precision and build reliable systems. Leverage online resources effectively like StackOverflow, ChatGPT, Bard, etc., while considering their capabilities and limitations. Skills And Experience Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL. Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources. Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery. 
Eagerness to develop import workflows and scripts to automate data import processes. Knowledge of data security best practices, including access controls, encryption, and compliance standards. Strong problem-solving and analytical skills with attention to detail. Creative and critical thinking. Strong willingness to learn and expand knowledge in data engineering. Familiarity with Agile development methodologies is a plus. Experience with version control systems, such as Git, for collaborative development. Ability to thrive in a fast-paced environment with rapidly changing priorities. Ability to work collaboratively in a team environment. Good and effective communication skills. Comfortable with autonomy and ability to work independently. Perks: Day off on the 3rd Friday of every month (one long weekend each month). Monthly Wellness Reimbursement Program to promote health and well-being. Monthly Office Commutation Reimbursement Program. Paid paternity and maternity leave. How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload your updated resume. Step 3: Increase your chances of getting shortlisted and meeting the client for the interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
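To illustrate the import workflows this posting emphasizes, here is a minimal sketch that moves spreadsheet data into PostgreSQL with pandas and SQLAlchemy. The file path, table name, and connection URL are placeholder assumptions:

```python
# Minimal import-workflow sketch: load a spreadsheet export into PostgreSQL.
# File path, connection URL, and table name are placeholders.
import pandas as pd
from sqlalchemy import create_engine

def import_spreadsheet(csv_path: str, table: str, db_url: str) -> int:
    df = pd.read_csv(csv_path)

    # Basic consistency steps before loading: drop exact duplicates
    # and normalize column names to fit a relational schema.
    df = df.drop_duplicates()
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]

    engine = create_engine(db_url)
    df.to_sql(table, engine, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    rows = import_spreadsheet(
        "exports/products.csv",                                   # placeholder path
        "products",
        "postgresql+psycopg2://user:pass@localhost/marketplace",  # placeholder URL
    )
    print(f"loaded {rows} rows")
```

A production version would add validation rules, constraint-aware upserts, and logging, but the read-clean-load shape is the core of most import scripts.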
Posted 2 weeks ago
5.0 years
0 Lacs
Madurai, Tamil Nadu, India
Remote
Experience: 5.00+ years Salary: Confidential (based on experience) Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full-time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Forbes Advisor) What do you need for this opportunity? Must have skills required: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOPs, Monitoring tools, Prometheus, ETL tools, Data warehouse, Pandas, Pyspark, AWS Lambda Forbes Advisor is Looking for: Job Description: Data Research - Database Engineer Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. Position Overview At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel. The Data Research Engineering Team is a brand-new team with the purpose of managing data from acquisition to presentation, collaborating with other teams while also operating independently. Their responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. They play a crucial role in enabling data-driven decision-making and meeting the organization's data needs. A typical day in the life of a Database Engineer/Developer will involve designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role. Responsibilities: Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely. Work with databases of varying scales, including small-scale databases, and databases involving big data processing. Work on data security and compliance, by implementing access controls, encryption, and compliance standards.
Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture. Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery. Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency. Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms. Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity. Monitor database health and identify and resolve issues. Collaborate with the full-stack web developer in the team to support the implementation of efficient data access and retrieval mechanisms. Implement data security measures to protect sensitive information and comply with relevant regulations. Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows. Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices. Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines. Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis. Use Python for tasks such as data manipulation, automation, and scripting. Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines. Assume accountability for achieving development milestones. Prioritize tasks to ensure timely delivery, in a fast-paced environment with rapidly changing priorities. Collaborate with and assist fellow members of the Data Research Engineering Team as required. Perform tasks with precision and build reliable systems. Leverage online resources effectively like StackOverflow, ChatGPT, Bard, etc., while considering their capabilities and limitations. Skills And Experience Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL. Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources. Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery. 
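As a concrete illustration of the import workflows described above, here is a minimal Python sketch in the spirit of the posting's stack (Pandas, SQLAlchemy, PostgreSQL). It is a sketch under stated assumptions, not the team's actual pipeline: the connection string, file name, table, and column are hypothetical placeholders.

import pandas as pd
from sqlalchemy import create_engine, text

ENGINE_URL = "postgresql+psycopg2://user:password@localhost:5432/marketplace"  # assumed DSN

def import_spreadsheet(csv_path: str, table: str) -> int:
    """Load a spreadsheet export into PostgreSQL; return the number of rows written."""
    df = pd.read_csv(csv_path)
    df = df.drop_duplicates()  # basic consistency check before loading
    if df.empty:
        raise ValueError(f"{csv_path} produced no rows to import")
    engine = create_engine(ENGINE_URL)
    with engine.begin() as conn:  # single transaction: all rows land or none do
        df.to_sql(table, conn, if_exists="append", index=False)
    return len(df)

def add_index_and_explain(table: str, column: str) -> None:
    """Create a supporting index, then print the planner's strategy for a lookup."""
    engine = create_engine(ENGINE_URL)
    with engine.begin() as conn:
        # Identifiers are interpolated directly here only because they are
        # trusted constants in this sketch; never do this with user input.
        conn.execute(text(
            f"CREATE INDEX IF NOT EXISTS idx_{table}_{column} ON {table} ({column})"
        ))
        plan = conn.execute(
            text(f"EXPLAIN SELECT * FROM {table} WHERE {column} = :v"),
            {"v": "example"},
        )
        for row in plan:
            print(row[0])

if __name__ == "__main__":
    rows = import_spreadsheet("products.csv", "products")  # hypothetical file and table
    print(f"Imported {rows} rows")
    add_index_and_explain("products", "category")  # hypothetical column

The same pattern could extend to Google BigQuery by swapping the load step for the google-cloud-bigquery client or pandas-gbq, which is one reason the posting pairs Pandas experience with cloud-database knowledge.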
Skills And Experience

A Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
Skills in working with APIs for data ingestion or connecting third-party systems, which can streamline data acquisition processes.
Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting (a minimal exporter sketch follows this posting).
Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
Knowledge of SQL and an understanding of database design principles, normalization, and indexing.
Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
Eagerness to develop import workflows and scripts to automate data import processes.
Knowledge of data security best practices, including access controls, encryption, and compliance standards.
Strong problem-solving and analytical skills with attention to detail.
Creative and critical thinking.
Strong willingness to learn and expand knowledge in data engineering.
Familiarity with Agile development methodologies is a plus.
Experience with version control systems, such as Git, for collaborative development.
Ability to thrive in a fast-paced environment with rapidly changing priorities.
Ability to work collaboratively in a team environment.
Good and effective communication skills.
Comfort with autonomy and the ability to work independently.

Perks:

Day off on the 3rd Friday of every month (one long weekend each month)
Monthly Wellness Reimbursement Program to promote health and well-being
Monthly Office Commutation Reimbursement Program
Paid paternity and maternity leaves

How to apply for this opportunity?

Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:

Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
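For the real-time monitoring skills called out above, a custom Prometheus exporter can be surprisingly small. The sketch below exposes two PostgreSQL health gauges on an HTTP endpoint that a Prometheus server could scrape; the DSN, metric names, port, and 15-second interval are illustrative assumptions, not details from this posting.

import time

import psycopg2
from prometheus_client import Gauge, start_http_server

DSN = "dbname=marketplace user=monitor host=localhost"  # assumed connection details

active_connections = Gauge(
    "pg_active_connections", "Number of active PostgreSQL backends")
database_size_bytes = Gauge(
    "pg_database_size_bytes", "Size of the current database in bytes")

def collect() -> None:
    """Query PostgreSQL once and refresh the exported gauges."""
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM pg_stat_activity WHERE state = 'active'")
        active_connections.set(cur.fetchone()[0])
        cur.execute("SELECT pg_database_size(current_database())")
        database_size_bytes.set(cur.fetchone()[0])

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        collect()
        time.sleep(15)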
Posted 2 weeks ago
5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
Remote
Posted 2 weeks ago
5.0 years
0 Lacs
Faridabad, Haryana, India
Remote
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Posted 2 weeks ago
The job market for data migration professionals in India is currently thriving, with numerous opportunities available across industries. Whether you are just starting your career or looking to make a transition, data migration roles can offer a rewarding career path with room for growth.
India's major IT hubs are known for their booming technology sectors and have a high demand for data migration professionals.
The average salary range for data migration professionals in India varies by experience level. Entry-level positions can expect to earn around INR 3-5 lakhs per annum, while experienced professionals can command salaries upwards of INR 10-15 lakhs per annum.
A typical career path in the data migration field may involve starting as a Junior Developer, progressing to Senior Developer, and then moving into a Tech Lead role. With experience and expertise, one can advance further to roles such as Solution Architect or Project Manager.
In addition to data migration skills, professionals in this field are often expected to have knowledge of related areas such as cloud computing, database management, programming languages like Java or Python, and software development methodologies.
As you explore opportunities in India's data migration job market, remember to showcase your skills and experience confidently during interviews. Prepare thoroughly, stay updated on industry trends, and demonstrate your passion for data migration. Best of luck on your job search journey!