0 years
6 - 9 Lacs
Noida
On-site
Req ID: 299670

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Systems Integration Analyst to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).

Position General Duties and Tasks:
- Participate in research, design, implementation, and optimization of machine learning models
- Help AI product managers and business stakeholders understand the potential and limitations of AI when planning new products
- Apply understanding of Revenue Cycle Management processes such as claims filing and adjudication
- Build data ingest and data transformation platforms, with hands-on work in Python
- Identify transfer learning opportunities and new training datasets
- Build AI models from scratch and help product managers and stakeholders understand results
- Analyse the ML algorithms that could be used to solve a given problem and rank them by their probability of success
- Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world
- Verify data quality, and/or ensure it via data cleaning
- Supervise the data acquisition process if more data is needed
- Define validation strategies
- Define the pre-processing or feature engineering to be done on a given dataset
- Train models and tune their hyperparameters
- Analyse model errors and design strategies to overcome them
- Deploy models to production
- Create APIs and help business customers put the results of your AI models into operation

Education:
- Bachelor's degree in Computer Science or similar; Master's preferred.

Skills:
- Hands-on programming experience working on enterprise products
- Demonstrated proficiency in multiple programming languages with a strong foundation in a statistical platform such as Python, R, SAS, or MATLAB
- Knowledge of deep learning, machine learning, and artificial intelligence
- Experience building AI models using classification and clustering techniques
- Expertise in visualizing and manipulating big datasets
- Strong MS SQL skills
- Acumen to break a complex problem down into workable pieces and code a solution
- Excellent verbal and written communication skills
- Ability to work in, and help define, a fast-paced, team-focused environment
- Proven record of delivering and completing assigned projects and initiatives
- Ability to deploy large-scale solutions to an enterprise estate
- Strong interpersonal skills
- Understanding of Revenue Cycle Management processes such as claims filing and adjudication is a plus

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA is an equal opportunity employer and considers all applicants without regard to race, color, religion, citizenship, national origin, ancestry, age, sex, sexual orientation, gender identity, genetic information, physical or mental disability, veteran or marital status, or any other characteristic protected by law.
We are committed to creating a diverse and inclusive environment for all employees. If you need assistance or an accommodation due to a disability, please inform your recruiter so that we may connect you with the appropriate team.
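The training and tuning duties described in the posting above (defining a validation strategy, training models, tuning hyperparameters) can be sketched in miniature. This is an illustrative stand-in only: the dataset is a toy, and the hyperparameter being tuned is the k of a hand-rolled nearest-neighbour classifier, not any model named in the posting.

```python
# Toy grid search: pick the best k for a k-nearest-neighbour classifier
# using a held-out validation split. The (feature, label) pairs below are
# a hypothetical stand-in for real data.
from collections import Counter

train = [(1.0, 0), (1.2, 0), (0.8, 0), (5.0, 1), (5.3, 1), (4.7, 1)]
valid = [(1.1, 0), (5.1, 1), (0.9, 0), (4.9, 1)]

def predict(k, x):
    # Vote among the k training points nearest to x
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def accuracy(k):
    return sum(predict(k, x) == y for x, y in valid) / len(valid)

# Validation strategy: score each hyperparameter value on the held-out split,
# then keep the best-scoring one.
scores = {k: accuracy(k) for k in (1, 3, 5)}
best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])  # prints: 1 1.0
```

The same loop generalizes: swap in a real model, a real metric, and cross-validation in place of the single split.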
Posted 13 hours ago
5.0 years
0 Lacs
India
On-site
This posting is for one of our international clients.

About the Role
We’re creating a new certification: Inside Gemini: Gen AI Multimodal and Google Intelligence (Google DeepMind). This course is designed for technical learners who want to understand and apply the capabilities of Google’s Gemini models and DeepMind technologies to build powerful, multimodal AI applications. We’re looking for a Subject Matter Expert (SME) who can help shape this course from the ground up. You’ll work closely with a team of learning experience designers, writers, and other collaborators to ensure the course is technically accurate, industry-relevant, and instructionally sound.

Responsibilities
As the SME, you’ll partner with learning experience designers and content developers to:
- Translate real-world Gemini and DeepMind applications into accessible, hands-on learning for technical professionals.
- Guide the creation of labs and projects that allow learners to build pipelines for image-text fusion, deploy Gemini APIs, and experiment with DeepMind’s reinforcement learning libraries.
- Contribute technical depth across activities, from high-level course structure down to example code, diagrams, voiceover scripts, and data pipelines.
- Ensure all content reflects current, accurate usage of Google’s multimodal tools and services.
- Be available during U.S. business hours to support project milestones, reviews, and content feedback.

This role is an excellent fit for professionals with deep experience in AI/ML, Google Cloud, and a strong familiarity with multimodal systems and the DeepMind ecosystem.
Essential Tools & Platforms
A successful SME in this role will demonstrate fluency and hands-on experience with the following:

Google Cloud Platform (GCP)
- Vertex AI (particularly Gemini integration, model tuning, and multimodal deployment)
- Cloud Functions, Cloud Run (for inference endpoints)
- BigQuery and Cloud Storage (for handling large image-text datasets)
- AI Platform Notebooks or Colab Pro

Google DeepMind Technologies
- JAX and Haiku (for neural network modeling and research-grade experimentation)
- DeepMind Control Suite or DeepMind Lab (for reinforcement learning demonstrations)
- RLax or TF-Agents (for building and modifying RL pipelines)

AI/ML & Multimodal Tooling
- Gemini APIs and SDKs (image-text fusion, prompt engineering, output formatting)
- TensorFlow 2.x and PyTorch (for model interoperability)
- Label Studio, Cloud Vision API (for annotation and image-text preprocessing)

Data Science & MLOps
- DVC or MLflow (for dataset and model versioning)
- Apache Beam or Dataflow (for processing multimodal input streams)
- TensorBoard or Weights & Biases (for visualization)

Content Authoring & Collaboration
- GitHub or Cloud Source Repositories
- Google Docs, Sheets, Slides
- Screen recording tools like Loom or OBS Studio

Required skills and experience:
- Demonstrated hands-on experience building, deploying, and maintaining sophisticated AI-powered applications using Gemini APIs/SDKs within the Google Cloud ecosystem, especially in Firebase Studio and VS Code.
- Proficiency in designing and implementing agent-like application patterns, including multi-turn conversational flows, state management, and complex prompting strategies (e.g., Chain-of-Thought, few-shot, zero-shot).
- Experience integrating Gemini with Google Cloud services (Firestore, Cloud Functions, App Hosting) and external APIs for robust, production-ready solutions.
- Proven ability to engineer applications that process, integrate, and generate content across multiple modalities (text, images, audio, video, code) using Gemini’s native multimodal capabilities.
- Skilled in building and orchestrating pipelines for multimodal data handling, synchronization, and complex interaction patterns within application logic.
- Experience designing and implementing production-grade RAG systems, including integration with vector databases (e.g., Pinecone, ChromaDB) and engineering data pipelines for indexing and retrieval.
- Ability to manage agent state, memory, and persistence for multi-turn and long-running interactions.
- Proficiency leveraging AI-assisted coding features in Firebase Studio (chat, inline code, command execution) and using App Prototyping agents or frameworks like Genkit for rapid prototyping and structuring agentic logic.
- Strong command of modern development workflows, including Git/GitHub, code reviews, and collaborative development practices.
- Experience designing scalable, fault-tolerant deployment architectures for multimodal and agentic AI applications using Firebase App Hosting, Cloud Run, or similar serverless/cloud platforms.
- Advanced MLOps skills, including monitoring, logging, alerting, and versioning for generative AI systems and agents.
- Deep understanding of security best practices: prompt-injection mitigation (across modalities), secure API key management, authentication/authorization, and data privacy.
- Demonstrated ability to engineer for responsible AI, including bias detection, fairness, transparency, and implementation of safety mechanisms in agentic and multimodal applications.
- Experience addressing ethical challenges in the deployment and operation of advanced AI systems.
- Proven success designing, reviewing, and delivering advanced, project-based curriculum and hands-on labs for experienced software developers and engineers.
- Ability to translate complex engineering concepts (RAG, multimodal integration, agentic patterns, MLOps, security, responsible AI) into clear, actionable learning materials and real-world projects.
- 5+ years of professional experience in AI-powered application development, with a focus on generative and multimodal AI.
- Strong programming skills in Python and JavaScript/TypeScript; experience with modern frameworks and cloud-native development.
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, AI, or a related technical field.
- Ability to explain advanced technical concepts (e.g., fusion transformers, multimodal embeddings, RAG workflows) to learners in an accessible way.
- Strong programming experience in Python and experience deploying machine learning pipelines.
- Ability to work independently, take ownership of deliverables, and collaborate closely with designers and project managers.

Preferred:
- Experience with Google DeepMind tools (JAX, Haiku, RLax, DeepMind Control Suite/Lab) and reinforcement learning pipelines.
- Familiarity with open data formats (Delta, Parquet, Iceberg) and scalable data engineering practices.
- Prior contributions to open-source AI projects or technical community engagement.
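The RAG pattern named in the posting above (index documents, retrieve the most relevant ones, inject them into the prompt) can be sketched in miniature. This is a toy illustration: a plain in-memory list stands in for a real vector database such as Pinecone or ChromaDB, and a bag-of-words count stands in for a learned embedding model.

```python
# Minimal retrieval-augmented generation (RAG) sketch using only the
# standard library. Real systems would use an embedding model and a
# vector database instead of these stand-ins.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words term counts
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Indexing stage: store document "vectors" in an in-memory index
docs = [
    "Gemini supports multimodal input such as images and text",
    "Cloud Run hosts containerized inference endpoints",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1) -> list:
    # Retrieval stage: rank documents by similarity to the query
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

# Generation stage (simulated): the retrieved text is injected into the
# prompt that would be sent to the model.
context = retrieve("what handles multimodal images and text")[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
```

The production version replaces each stand-in piece independently, which is why the indexing and retrieval pipelines are called out as separate engineering concerns in the requirements.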
Posted 13 hours ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Dear Candidate,

Greetings from LTIMindtree! Your profile has been shortlisted for the technical round of interview. I hope you are having a great day.

Skills: Data Analyst
Location: Hyderabad, Pune, Mumbai, Kolkata, Bangalore, Chennai
Notice period: Immediate to 15 days

Please find below the job description for your reference:
- 5 to 8 years of experience in information technology and business analysis
- Data coverage analysis and identification of data gaps
- Understanding of product and channel hierarchies, data transformation, and aggregations
- Strong functional and technical knowledge of the retail industry (sales, online/offline, CRM)
- Good understanding of ETL, SQL Server, and BI tools
- An ability to align and influence stakeholders and build working relationships
- A confident and articulate communicator capable of inspiring strong collaboration
- Good knowledge of IT systems and processes
- Strong analytical, problem-solving, and project management skills
- Attention to detail and complex processes
- Business engagement and stakeholder management
- Partner with the business team to identify needs and analytics opportunities
- Supervise and guide vendor partners to develop and maintain a data warehouse platform and BI reporting
- Work with management to prioritize business and information needs
- Mine data from sources, then reorganize the data into the target format
- Perform data analyses between the L'Oréal database and business requirements
- Interpret data, analyze results using statistical techniques, and provide ongoing reports
- Identify the mapping and gaps, and provide transformation logic
- Research and verify the logic and relationships between datasets and KPIs
- Filter and clean data by reviewing reports and performance indicators to locate and correct code problems

If interested, kindly share your updated resume and fill in the link below:
https://forms.office.com/r/EdFKPCNVaA

We shall get back to you soon regarding the further steps.
Posted 19 hours ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Req ID: 299670

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Systems Integration Analyst to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).

Position General Duties and Tasks:
- Participate in research, design, implementation, and optimization of machine learning models
- Help AI product managers and business stakeholders understand the potential and limitations of AI when planning new products
- Apply understanding of Revenue Cycle Management processes such as claims filing and adjudication
- Build data ingest and data transformation platforms, with hands-on work in Python
- Identify transfer learning opportunities and new training datasets
- Build AI models from scratch and help product managers and stakeholders understand results
- Analyse the ML algorithms that could be used to solve a given problem and rank them by their probability of success
- Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world
- Verify data quality, and/or ensure it via data cleaning
- Supervise the data acquisition process if more data is needed
- Define validation strategies
- Define the pre-processing or feature engineering to be done on a given dataset
- Train models and tune their hyperparameters
- Analyse model errors and design strategies to overcome them
- Deploy models to production
- Create APIs and help business customers put the results of your AI models into operation

Education:
- Bachelor's degree in Computer Science or similar; Master's preferred.

Skills:
- Hands-on programming experience working on enterprise products
- Demonstrated proficiency in multiple programming languages with a strong foundation in a statistical platform such as Python, R, SAS, or MATLAB
- Knowledge of deep learning, machine learning, and artificial intelligence
- Experience building AI models using classification and clustering techniques
- Expertise in visualizing and manipulating big datasets
- Strong MS SQL skills
- Acumen to break a complex problem down into workable pieces and code a solution
- Excellent verbal and written communication skills
- Ability to work in, and help define, a fast-paced, team-focused environment
- Proven record of delivering and completing assigned projects and initiatives
- Ability to deploy large-scale solutions to an enterprise estate
- Strong interpersonal skills
- Understanding of Revenue Cycle Management processes such as claims filing and adjudication is a plus

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA is an equal opportunity employer and considers all applicants without regard to race, color, religion, citizenship, national origin, ancestry, age, sex, sexual orientation, gender identity, genetic information, physical or mental disability, veteran or marital status, or any other characteristic protected by law.
We are committed to creating a diverse and inclusive environment for all employees. If you need assistance or an accommodation due to a disability, please inform your recruiter so that we may connect you with the appropriate team.
Posted 20 hours ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position General Duties and Tasks:
- Participate in research, design, implementation, and optimization of machine learning models
- Help AI product managers and business stakeholders understand the potential and limitations of AI when planning new products
- Apply understanding of Revenue Cycle Management processes such as claims filing and adjudication
- Build data ingest and data transformation platforms, with hands-on work in Python
- Identify transfer learning opportunities and new training datasets
- Build AI models from scratch and help product managers and stakeholders understand results
- Analyse the ML algorithms that could be used to solve a given problem and rank them by their probability of success
- Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world
- Verify data quality, and/or ensure it via data cleaning
- Supervise the data acquisition process if more data is needed
- Define validation strategies
- Define the pre-processing or feature engineering to be done on a given dataset
- Train models and tune their hyperparameters
- Analyse model errors and design strategies to overcome them
- Deploy models to production
- Create APIs and help business customers put the results of your AI models into operation

Education:
- Bachelor's degree in Computer Science or similar.

Skills:
- Hands-on programming experience working on enterprise products
- Demonstrated proficiency in multiple programming languages with a strong foundation in a statistical platform such as Python, R, SAS, or MATLAB
- Knowledge of deep learning, machine learning, and artificial intelligence
- Experience building AI models using classification and clustering techniques
- Expertise in visualizing and manipulating big datasets
- Strong MS SQL skills
- Acumen to break a complex problem down into workable pieces and code a solution
- Excellent verbal and written communication skills
- Ability to work in, and help define, a fast-paced, team-focused environment
- Proven record of delivering and completing assigned projects and initiatives
- Ability to deploy large-scale solutions to an enterprise estate
- Strong interpersonal skills
- Understanding of Revenue Cycle Management processes such as claims filing and adjudication is a plus
Posted 20 hours ago
1.0 - 5.0 years
0 - 0 Lacs
Shāhdara
On-site
Job Title: Artificial Intelligence Engineer
Company: Humanoid Maker
Location: Delhi
Type: Full-Time
Experience: 1–5 years
Industry: Artificial Intelligence / Software Development / Robotics

About Us:
Humanoid Maker is a fast-growing innovator in AI, robotics, and automation solutions. We specialize in AI-powered software, robotic kits, refurbished IT hardware, and technology services that empower startups, businesses, and institutions across India. Our mission is to build intelligent systems that simplify lives and boost productivity.

Role Overview:
We are looking for a skilled and creative Artificial Intelligence Engineer to join our AI development team. This role involves building, training, and deploying machine learning models for various use cases, including voice synthesis, natural language processing, computer vision, and robotics integration.

Key Responsibilities:
- Develop, train, and fine-tune machine learning and deep learning models for real-world applications.
- Work on projects related to NLP, speech recognition, voice cloning, and robotic intelligence.
- Build APIs and tools to integrate AI features into applications and hardware systems.
- Optimize models for performance and deploy them in edge, desktop, or server environments.
- Collaborate with UI/UX and backend developers for full-stack AI system integration.
- Conduct data preprocessing, feature engineering, and dataset management.
- Continuously research and experiment with the latest AI advancements.

Key Skills & Requirements:
- Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or a related field.
- 1–5 years of hands-on experience developing AI/ML models.
- Strong knowledge of Python, PyTorch, TensorFlow, scikit-learn, etc.
- Experience with NLP libraries (e.g., Hugging Face Transformers, spaCy) and/or CV frameworks (OpenCV, YOLO).
- Familiarity with APIs, web frameworks (Flask/FastAPI), and databases (SQL or NoSQL).
- Bonus: experience in robotics integration or embedded AI (Arduino/Raspberry Pi).

What We Offer:
- Competitive salary based on experience.
- Opportunity to work on cutting-edge AI and robotics projects.
- Friendly and innovative team environment.
- Career growth in a rapidly expanding AI company.
- Access to high-performance computing tools and training resources.

Job Types: Full-time, Permanent
Pay: ₹10,000.00 - ₹50,000.00 per month
Benefits: Paid sick time
Supplemental Pay: Commission pay, overtime pay, performance bonus, yearly bonus
Education: Bachelor's (Required)
Experience: AI: 1 year (Required)
Language: Hindi (Required)
Work Location: In person
Posted 1 day ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About G2 - The Company
When you join G2, you’re joining the team that helps businesses reach their peak potential by powering decisions and strategies with trusted insights from real software users. G2 is the world's largest and most trusted software marketplace. More than 100 million people annually — including employees at all Fortune 500 companies — use G2 to make smarter software decisions based on authentic peer reviews. Thousands of software and services companies of all sizes partner with G2 to build their reputation and grow their business — including Salesforce, HubSpot, Zoom, and Adobe. To learn more about where you go for software, visit www.g2.com and follow us on LinkedIn.

As we continue on our growth journey, we are striving to be the most trusted data source in the age of AI for informing software buying decisions and go-to-market strategies. Does that sound exciting to you? Come join us as we try to reach our next PEAK!

About G2 - Our People
At G2, we have big goals, but we stay grounded in our PEAK (Performance + Entrepreneurship + Authenticity + Kindness) values. You’ll be part of a value-driven, growing global community that climbs PEAKs together. We cheer for each other’s successes, learn from our mistakes, and support and lean on one another during challenging times. With ambition and entrepreneurial spirit, we push each other to take on challenging work, which will help us all to grow and learn.

You will be part of a global, diverse team of smart, dedicated, and kind individuals, each with unique talents, aspirations, and life experiences. At the heart of our community and culture are our people-led ERGs, which celebrate and highlight the diverse identities of our global team. As an organization, we are intentional about our DEI and philanthropic work (like our G2 Gives program) because it encourages us all to be better people.

About The Role
G2 is looking for a Software Engineer to join our growing team!
You will be responsible for helping develop solutions with a strong emphasis on code design and quality. We enjoy quarterly weeks of creativity where engineers work to solve problems they see our customers have. If you wish to join a talented, passionate team whose kindness and authenticity will help you grow, then apply so we can start our conversation today! This position is based in Bengaluru and requires in-office attendance with a 5-day workweek.

In This Role, You Will
- Report to the Engineering Manager dedicated to the delivery team
- Develop a high-quality, stable, and well-tested web application
- Apply database skills against a large and growing dataset
- Create and improve full features in short development cycles, including effective frontend and backend code
- Work in close coordination with designers, product managers, and business stakeholders
- Track metrics and measurements alongside core features to help make informed decisions
- Balance development with collaborative meetings
- Use patterns of code decomposition to break down tasks into deliverable solutions
- Ensure quality releases by writing tests covering unit, integration, and functional requirements

Minimum Qualifications
- 3+ years of professional programming experience, ideally in a web application environment
- Proficient in Ruby and Ruby on Rails, with working knowledge of JavaScript
- Experience building and shipping products, not just as a hands-on implementer but as a collaborator who contributes ideas and helps shape the roadmap
- Comfort with evaluating and integrating AI into workflows, including understanding where AI adds value and where it doesn’t
- Familiarity with high-performing, agile development teams and best practices like CI/CD, code reviews, and feature flags
- Strong opinions on software architecture and development practices, grounded in real-world experience building and maintaining production systems

What Can Help Your Application Stand Out
- Exposure to building AI-first features (e.g., workflow automation, generative AI, intelligent UIs)
- Prior programming experience in a web environment
- Degree in Computer Science or a completed bootcamp
- Git-based version control
- Database skills such as SQL within PostgreSQL
- Experience working within a design system to ensure visual and interaction consistency
- Hotwire and Tailwind CSS experience is a bonus

Our Commitment to Inclusivity and Diversity
At G2, we are committed to creating an inclusive and diverse environment where people of every background can thrive and feel welcome. We consider applicants without regard to race, color, creed, religion, national origin, genetic information, gender identity or expression, sexual orientation, pregnancy, age, or marital, veteran, or physical or mental disability status. Learn more about our commitments here.

For job applicants in California, the United Kingdom, and the European Union, please review this applicant privacy notice before applying to this job.
Posted 1 day ago
0.0 - 1.0 years
0 Lacs
Shahdara, Delhi, Delhi
On-site
Job Title: Artificial Intelligence Engineer
Company: Humanoid Maker
Location: Delhi
Type: Full-Time
Experience: 1–5 years
Industry: Artificial Intelligence / Software Development / Robotics

About Us:
Humanoid Maker is a fast-growing innovator in AI, robotics, and automation solutions. We specialize in AI-powered software, robotic kits, refurbished IT hardware, and technology services that empower startups, businesses, and institutions across India. Our mission is to build intelligent systems that simplify lives and boost productivity.

Role Overview:
We are looking for a skilled and creative Artificial Intelligence Engineer to join our AI development team. This role involves building, training, and deploying machine learning models for various use cases, including voice synthesis, natural language processing, computer vision, and robotics integration.

Key Responsibilities:
- Develop, train, and fine-tune machine learning and deep learning models for real-world applications.
- Work on projects related to NLP, speech recognition, voice cloning, and robotic intelligence.
- Build APIs and tools to integrate AI features into applications and hardware systems.
- Optimize models for performance and deploy them in edge, desktop, or server environments.
- Collaborate with UI/UX and backend developers for full-stack AI system integration.
- Conduct data preprocessing, feature engineering, and dataset management.
- Continuously research and experiment with the latest AI advancements.

Key Skills & Requirements:
- Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or a related field.
- 1–5 years of hands-on experience developing AI/ML models.
- Strong knowledge of Python, PyTorch, TensorFlow, scikit-learn, etc.
- Experience with NLP libraries (e.g., Hugging Face Transformers, spaCy) and/or CV frameworks (OpenCV, YOLO).
- Familiarity with APIs, web frameworks (Flask/FastAPI), and databases (SQL or NoSQL).
- Bonus: experience in robotics integration or embedded AI (Arduino/Raspberry Pi).

What We Offer:
- Competitive salary based on experience.
- Opportunity to work on cutting-edge AI and robotics projects.
- Friendly and innovative team environment.
- Career growth in a rapidly expanding AI company.
- Access to high-performance computing tools and training resources.

Job Types: Full-time, Permanent
Pay: ₹10,000.00 - ₹50,000.00 per month
Benefits: Paid sick time
Supplemental Pay: Commission pay, overtime pay, performance bonus, yearly bonus
Education: Bachelor's (Required)
Experience: AI: 1 year (Required)
Language: Hindi (Required)
Work Location: In person
Posted 2 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Calling all innovators – find your future at Fiserv.

We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Advisor, Application Support

What a Successful File Monitoring Automation Using Python Involves
- Automation script design and development: design and maintain advanced Python scripts to deliver comprehensive insight into the file transmission component and its life cycle stages.
- Performance optimization: improve efficiency when handling large datasets using techniques such as optimized large-data manipulation and RDBMS data models.
- Advanced regex utilization: apply sophisticated regular expressions to accurately extract fields and map them onto the large dataset.
- File transmission monitoring automation: track and report on each stage of file transmission, continuously refining monitoring strategies for enhanced reliability and visibility.
- Cross-functional collaboration: work closely with various teams to integrate Python scripts with broader IT systems and workflows.
- Develop and maintain automation scripts using Python for testing, data validation, and system operations.
- Design and implement automation frameworks.
- Automate file transmission applications using Python and Selenium.
- Maintain automated workflows and troubleshoot issues in the context of file transmissions.
- Write reusable, scalable, and maintainable code with proper documentation.

What You Will Need To Have
- Education: Bachelor’s and/or Master’s degree in Information Technology, Computer Science, or a related field.
- Experience: minimum of 10 years in IT, with a focus on Python, SFTP tools, data integration, or technical support roles.
- Proficiency in Python programming.
- Experience with Selenium for automation.
- Familiarity with test automation frameworks like PyTest or Robot Framework.
- Understanding of REST APIs and tools like Postman or Python requests.
- Basic knowledge of Linux/Unix environments and shell scripting.
- Database skills: experience with relational databases and writing complex SQL queries with advanced joins.
- File transmission tools: hands-on experience with platforms like Sterling File Gateway, IBM Sterling, or other MFT solutions.
- Analytical thinking: proven problem-solving skills and the ability to troubleshoot technical issues effectively.
- Communication: strong verbal and written communication skills for collaboration with internal and external stakeholders.

What Would Be Great To Have (Optional)
- Tool experience: familiarity with tools such as Splunk, Dynatrace, Sterling File Gateway, or other file transfer tools.
- Linux: working knowledge of Linux and command-line operations.
- Secure file transfer protocols: hands-on experience with SFTP and tools like SFG, NDM, and MFT using SSH encryption.
- Task scheduling tools: experience with job scheduling platforms such as AutoSys, Control-M, or cron.

Thank you for considering employment with Fiserv. Please apply using your legal name, complete the step-by-step profile, and attach your resume (either is acceptable, both are preferable).

Our Commitment To Diversity And Inclusion
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note To Agencies
Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates.
Fiserv is not responsible for any fees associated with unsolicited resume submissions. Warning About Fake Job Posts Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
Posted 2 days ago
12.0 years
0 Lacs
Nandigama, Telangana, India
On-site
At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at https://www.jnj.com Job Function Data Analytics & Computational Sciences Job Sub Function Biostatistics Job Category Scientific/Technology All Job Posting Locations: Bangalore, Karnataka, India, Mumbai, India, PENJERLA, Telangana, India Job Description Position Summary: The Principal Programming Lead is a highly skilled Programmer with expert knowledge of programming languages, tools, complex data structures, and industry standards. The position requires proven technical and analytic abilities and strong capabilities in leading activities and programming teams in accordance with departmental processes and procedures. As a highly experienced Principal Programming Lead, they apply expert technical, scientific, and problem-solving skills, providing innovative and forward-thinking solutions to ensure operational efficiency across assigned projects while providing training, coaching, and mentoring to other programmers. The Principal Programming Lead position is accountable for the planning, oversight, and delivery of programming activities in support of one or more clinical projects, compounds, or submissions of high complexity and criticality. In this role, the Principal Programming Lead is responsible for making decisions and recommendations that impact the efficiency, timeliness, and quality of deliverables with a high degree of autonomy, and provides leadership, direction, and technical and project-specific guidance to programming teams.
In addition, this position may lead and contribute expert knowledge and technical skills to assigned delivery unit, departmental innovation, and process improvement projects. Principal Responsibilities Designs and develops efficient programs and technical solutions in support of highly complex/critical clinical research analysis and reporting activities, including urgent/on-demand analysis requests. Provides technical and project-specific guidance to programming team members to ensure high-quality and on-time deliverables in compliance with departmental processes. Coordinates and oversees programming team activities and may provide matrix leadership to one or more programming teams as needed. Shares knowledge and provides guidance and coaching to programmers in developing advanced technical and analytical abilities. Performs comprehensive review of, and provides input into, project requirements and documentation. Collaborates effectively with programming and cross-functional team members and counterparts to achieve project goals and independently manages escalations. As applicable, oversees programming activities outsourced to third-party vendors, adopting appropriate processes and best practices to ensure their performance meets the agreed-upon scope, timelines, and quality. Responsible for adoption of new processes & technology on assigned projects/programs in collaboration with departmental technical groups and programming portfolio leads. Contributes to and may lead departmental innovation and process improvement projects, and may contribute programming expertise to cross-functional projects/initiatives. May play the role of a Delivery Unit/Disease Area Expert. Ensures continued compliance of projects/programs with required company and departmental training, time reporting, and other business/operational processes as required for the position.
Clinical Programming Oversees the design, development, validation, management, and maintenance of clinical databases according to established standards. Responsible for implementation of data tabulation standards. Performs data cleaning by programming edit checks and data review listings, and data reporting by creating data visualizations and listings for medical monitoring and central monitoring. Statistical Programming Responsible for implementation of data and analysis standards, ensuring consistency in analysis dataset design across trials within a program. Principal Relationships The Principal Programming Lead reports into a people manager position within the Delivery Unit and is accountable to the Portfolio Lead for assigned activities and responsibilities. Functional contacts within IDAR include but are not limited to: Leaders and leads in Data Management and Central Monitoring, Programming Leads, Clinical Data Standards, Regulatory Medical Writing Leads, and system support organizations. Functional contacts within J&J Innovative Medicine (as collaborator or peer) include but are not limited to: Statistics, Clinical, Global Medical Safety, Project Management, Procurement, Finance, Legal, Global Privacy, Regulatory, Strategic Partnerships, Human Resources. External contacts include but are not limited to external partners, CRO management and vendor liaisons, industry peers, and working groups. Education And Experience Requirements Bachelor's degree (e.g., BS, BA) or equivalent professional experience is required, preferably in Computer Sciences, Mathematics, Data Science/Engineering, Public Health, or another relevant scientific field (or equivalent theoretical/technical depth). Advanced degrees preferred (e.g., Master's, PhD). Experience And Skills Required Approx. 12+ years of experience in the Pharmaceutical, CRO, or Biotech industry or a related field. In-depth knowledge of programming practices (including tools and processes).
Working knowledge of relevant regulatory guidelines (e.g., ICH-GCP, 21 CFR Part 11). Project, risk, and team management, and an established track record leading teams to successful outcomes. Excellent planning and coordination of project delivery. Established track record collaborating with multi-functional teams in a matrix environment and partnering with/managing stakeholders, customers, and vendors. Excellent communication, leadership, influencing, and decision-making skills, and demonstrated ability to foster team productivity and cohesiveness while adapting to rapidly changing organizations and business environments. Demonstrated experience managing the outsourcing or externalization of clinical programming activities in the clinical trials setting (e.g., working with CROs, academic institutions) is preferred. Expert CDISC Standards knowledge. Expert knowledge of relevant programming languages for data manipulation and reporting; may include SAS, R, Python, etc. Knowledge of SAS is required for a Clinical Programming role. Excellent written and verbal communication, influencing, and negotiation skills. Advanced knowledge of programming and industry-standard data structures, and a thorough understanding of the end-to-end clinical trial process and relevant clinical research concepts. Other Innovative thinking allows for optimal design and execution of programming development strategies. Development and implementation of business change/innovative ways of working.
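The "data cleaning by programming edit checks" work described in this posting can be illustrated with a minimal, hypothetical Python sketch. The field names and validation rules are invented; real studies would derive their checks from the protocol and the applicable data standards:

```python
# Minimal edit-check sketch: flag records that violate simple range and
# completeness rules, the way a data review listing would.
RULES = [
    ("AGE", lambda v: v is not None and 0 <= v <= 120, "age out of range or missing"),
    ("SEX", lambda v: v in ("M", "F"), "sex not coded M/F"),
]

def run_edit_checks(records):
    """Return a listing of (subject_id, field, message) for every rule violation."""
    findings = []
    for rec in records:
        for field, check, message in RULES:
            if not check(rec.get(field)):
                findings.append((rec["SUBJID"], field, message))
    return findings

if __name__ == "__main__":
    demo = [
        {"SUBJID": "001", "AGE": 34, "SEX": "F"},
        {"SUBJID": "002", "AGE": None, "SEX": "M"},
        {"SUBJID": "003", "AGE": 45, "SEX": "X"},
    ]
    for finding in run_edit_checks(demo):
        print(finding)
```

In practice such listings are produced in whatever language the programming team standardizes on (SAS, R, or Python per the posting) and routed to data management for query resolution.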
Posted 2 days ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
This job is with Johnson & Johnson, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly. At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at https://www.jnj.com Job Function Data Analytics & Computational Sciences Job Sub Function Biostatistics Job Category People Leader All Job Posting Locations: Bangalore, Karnataka, India, Hyderabad, Andhra Pradesh, India Job Description Integrated Data Analytics and Reporting (IDAR) Associate Director Portfolio Lead Clinical Programming* (*Title may vary based on Region or Country requirements) Position Summary The Associate Director Portfolio Lead Clinical Programming is a highly experienced individual with expert understanding of programming strategies, practices, methods, processes, technologies, industry standards, complex data structures, and analysis and reporting solutions. This position requires strong project and people leadership skills with the capability to effectively coordinate and oversee programming activities across teams in accordance with company and departmental processes and procedures. As a portfolio leader, this position is responsible for formulating the Programming strategy across a large portfolio of one or more programs within a Disease area and/or Delivery Unit, with accountability for operational oversight and effective planning and execution of programming activities for their assigned portfolio. 
This position interfaces with program-level Delivery Unit Leaders to provide regular status updates, identify and manage risks and issues, and ensure the appropriate use of escalation pathways to appropriate functional leaders as needed. This position provides functional area people and/or matrix leadership to departmental staff. The role is responsible for the recruitment, onboarding, performance management, and development of people and future skills and technical knowledge expertise within their reporting line, while building an inclusive and diverse working environment. The Associate Director Portfolio Lead Clinical Programming may also take on responsibilities of second-line management (i.e., manager of managers). The Associate Director Portfolio Lead Clinical Programming plays a critical role in the growth and development of C&SP and contributes to organizational effectiveness, transparency, and communication. Directly contributes to delivery of the J&J IM R&D portfolio through effective leadership and accountability of large or complex clinical development and strategic innovation programs and projects. In collaboration with senior departmental leadership, the Associate Director Portfolio Lead influences departmental effectiveness, acting as a change agent to shape, drive, and implement the departmental strategic vision. This position develops strong and productive working relationships with key stakeholders within IDAR in addition to broader partners, external suppliers, and/or industry groups. Principal Responsibilities As Project Leader: Drives the strategy, planning, execution, and completion of all programming activities and deliverables within assigned scope, ensuring quality, compliance standards, consistency, and efficiency. Proactively evaluates and manages resource demand, allocation, utilization, and delivery to meet current and future business needs. Ensures timely and effective maintenance of functional planning systems.
May include forecasting related to potential in-licensing and acquisitions. Independently and effectively manages issue escalations, adopting appropriate escalation pathways. Collaborates with cross-functional and external partners on programming-related deliverables for co-development programs and defining the data integration strategy of the assigned programs/projects. Ensures training compliance and development of appropriate job skills for assigned personnel. Contributes to the development of functional vendor contracts and oversees delivery in line with agreed milestones and scope of work, R&D business planning, and budget estimates. Serves as the primary point of contact for sourcing providers and is responsible for establishing a strategic partnership. Drives the enhancement of functional, technical, and/or scientific capabilities within C&SP and shares best practices. Leads programming-related aspects of regulatory agency inspections and J&J internal audits, ensuring real-time inspection readiness for all programming deliverables. Provides input to submission strategy to regulatory agencies and ensures all programming deliverables are complete and compliant. As People Leader Responsible for attracting and retaining top talent, proactively managing performance, and actively supporting talent development and succession planning. Ensures organizational effectiveness, transparency, and communication. Provides mentorship and coaching to programming team members. Ensures training compliance and development of appropriate job skills for assigned personnel. Oversees their work allocation, providing coaching and guidance as necessary. Responsible for local administration and decision making associated with the management of assigned personnel. As Matrix Leader Accountable for actively identifying opportunities, evaluating, and driving solutions to enhance efficiency and knowledge-sharing across programs, value streams, and the department.
Serves as departmental resource in areas of process and technical expertise. Stays current with industry trends and policies related to Programming. Leads departmental innovation and process improvement projects and as required, may contribute programming expertise to cross functional projects/initiatives. Provides strategic direction within Delivery Unit initiatives and projects. Serves as a programming expert and influencer on internal and external (industry) work groups. Clinical Programming Leader Oversees the design, development, validation, management, and maintenance of clinical databases according to established standards. Statistical Programming Leader Responsible for implementation of data and analysis standards ensuring consistency in analysis dataset design across trials within a program. Principal Relationships This role reports into a people manager position within the Delivery unit and is accountable to the Director of Programming for assigned activities and responsibilities. Functional contacts within IDAR include but are not limited to: Leaders and leads in Data Management and Central Monitoring, Programming Leads, Clinical Data Standards, Regulatory Medical Writing Leads, and system support organizations. Functional Contacts within J&J Innovative Medicine (as collaborator or peer) include but are not limited to: Statistics, Clinical, Global Medical Safety, Project Management, Procurement, Finance, Legal, Global Privacy, Regulatory, Strategic Partnerships, Human Resources. External contacts include but are not limited to external partners, CRO management and vendor liaisons, industry peers and working groups. Education And Experience Requirements Bachelor's degree (e.g., BS, BA) or equivalent professional experience is required, preferably in Computer Sciences, Mathematics, Data Science/Engineering, Public Health, or another relevant scientific field (or equivalent theoretical/technical depth). Advanced degrees preferred (e.g., Master, PhD). 
Experience And Skills Required Approx. 15+ years of experience in the Pharmaceutical, CRO, or Biotech industry or a related field. In-depth knowledge of programming practices (including tools and processes). In-depth knowledge of regulatory guidelines (e.g., ICH-GCP). Project, risk, and team management, and an established track record leading teams to successful outcomes. Excellent people management skills, including staff performance management and people development. Excellent planning and coordination of deliverables. Established track record collaborating with multi-functional teams in a matrix environment and partnering with/managing stakeholders, customers, and vendors. Excellent communication, leadership, influencing, and decision-making skills, and demonstrated ability to foster team productivity and cohesiveness while adapting to rapidly changing organizations and business environments. Excellent written and verbal communication skills. Demonstrated experience managing the outsourcing or externalization of programming activities in the clinical trials setting (e.g., working with CROs, academic institutions) is preferred. Expert CDISC Standards knowledge. Expert knowledge of data structures and relevant programming languages for data manipulation and reporting; may include SAS, R, Python, etc. Other Innovative thinking to allow for optimal design and execution of clinical and/or statistical development strategies. Development and implementation of business change/innovative ways of working.
Posted 2 days ago
3.0 years
0 Lacs
India
On-site
In accordance with the strategic editorial plan, this position is primarily responsible for maintaining Sage Data and supporting major data project initiatives. This position will work closely with key product stakeholders across the library editorial, product development, publishing technologies, and marketing/sales teams. About our Team: The Editorial Processing team at Sage is a dynamic and collaborative group dedicated to curating, maintaining, and enhancing high-quality digital resources for the academic community. We are passionate about data integrity, user experience, and delivering valuable insights through innovative data products like Sage Data. Working closely with stakeholders across editorial, technology, marketing, and product development, our team drives initiatives that ensure our resources meet the evolving needs of researchers, students, and librarians. We combine editorial excellence with technical acumen and project management skills, fostering an environment where detail-oriented, analytical, and creative professionals thrive. Joining our team means becoming part of a mission-driven culture that values precision, innovation, and collaboration, where every voice is heard and every contribution counts toward advancing knowledge and accessibility in the academic world. What is your team's key role in the business? Our team plays a vital role in ensuring the quality, accuracy, and consistency of published content across all Learning Resource platforms. We act as the bridge between content creation and publication, managing the end-to-end editorial workflow with precision and efficiency. Our team is responsible for reviewing, formatting, and processing submissions to meet editorial standards and publication guidelines. From initial manuscript handling to final approvals, we ensure each piece meets rigorous quality benchmarks.
With a strong focus on detail, timeliness, and consistency, the Editorial Processing Team supports the broader mission of delivering trusted, high-quality content to our audience. Our work may be behind the scenes, but it is foundational to the credibility and success of our publications. What other departments do you work closely with? Publishing Technologies / IT – to support content ingestion, interface functionality, and technical documentation. Product Development – to align editorial work with product strategy and feature enhancements. Sales and Marketing – to develop support materials and communicate product value to library customers and end users. Content Teams – to manage the ongoing acquisition, updating, and quality control of datasets. Customer Support / User Services – to ensure a seamless experience for users and address feedback or technical issues related to content. Key Accountabilities The essential job functions include, but are not limited to, the following for Sage data products: With the Content team, contribute to the content ingestion and update process for Sage data products. Create dataset metadata, ensuring accuracy and timeliness. Perform quality assurance checks on data content and content behavior on the Sage Data interface. Create and maintain technical documentation on the collection and ingest of Sage Data datasets from original sources. Contribute to development and maintenance of editorially created data product end-user support materials. Work with the Executive Editor to assist Sales and Marketing in creating necessary support materials. Contribute to decision making about product functionality and content acquisitions.
Skills, Qualifications & Experience Any combination equivalent to, but not limited to, the following: At least 3 years of publishing experience, preferably in developing digital resources for the academic library market, OR at least 3 years' experience in technical or digital services for a library, library consortium, archives, or museum. Proficient computer and database skills; competency in the Microsoft 365 suite of software. Language skills, reasoning ability, and analytical aptitude. Exceptional reading and comprehension skills, with an ability to distil and communicate dense information concisely in English. Detail-oriented with strong copyediting, proofreading, and quality assurance skills. Effective listening, verbal, and written communication skills. Comfortable with technology. Ability to foster effective relationships with marketing, IT, and product stakeholders. Ability to set and follow through on priorities. Ability to plan and manage multiple projects and effectively multi-task. Ability to effectively manage time to meet deadlines and work professionally under pressure. Ability to maintain confidentiality and work with diplomacy. Ability to reason and problem solve. Proficient analytical and mathematical skills. Effective public speaking and/or presenting to individuals and groups. Diversity, Equity, and Inclusion At Sage we are committed to building a diverse and inclusive team that is representative of all sections of society and to sustaining a culture that celebrates difference, encourages authenticity, and creates a deep sense of belonging. We welcome applications from all members of society irrespective of age, disability, sex or gender identity, sexual orientation, color, race, nationality, ethnic or national origin, religion or belief, as creating value through diversity is what makes us strong.
Posted 2 days ago
3.0 years
5 - 8 Lacs
Gurgaon
Remote
Job description About this role Want to elevate your career by being a part of the world's largest asset manager? Do you thrive in an environment that fosters positive relationships and recognizes stellar service? Are analyzing complex problems and identifying solutions your passion? Look no further. BlackRock is currently seeking a candidate to become part of our Global Investment Operations Data Engineering team. We recognize that strength comes from diversity, and will embrace your rare skills, eagerness, and passion while giving you the opportunity to grow professionally and as an individual. We know you want to feel valued every single day and be recognized for your contribution. At BlackRock, we strive to empower our employees and actively engage your involvement in our success. With over US $11.5 trillion of assets under management, we have an extraordinary responsibility: our technology and services empower millions of investors to save for retirement, pay for college, buy a home and improve their financial well-being. Come join our team and experience what it feels like to be part of an organization that makes a difference. Technology & Operations Technology & Operations (T&O) is responsible for the firm's worldwide operations across all asset classes and geographies. The operational functions are aligned with clients, products, fund structures and our third-party provider networks. Within T&O, Global Investment Operations (GIO) is responsible for the development of the firm's operating infrastructure to support BlackRock's investment businesses worldwide. GIO spans Trading & Market Documentation, Transaction Management, Collateral Management & Payments, Asset Servicing including Corporate Actions and Cash & Asset Operations, and Securities Lending Operations. GIO provides operational service to BlackRock's Portfolio Managers and Traders globally as well as industry leading service to our end clients.
GIO Engineering Working in close partnership with GIO business users and other technology teams throughout BlackRock, GIO Engineering is responsible for developing and providing data and software solutions that support GIO business processes globally. GIO Engineering solutions combine technology, data, and domain expertise to drive exception-based, function-agnostic, service-oriented workflows, data pipelines, and management dashboards. The Role – GIO Engineering Data Lead Work to date has been focused on building out robust data pipelines and lakes relevant to specific business functions, along with associated pools and Tableau / Power BI dashboards for internal BlackRock clients. The next stage in the project involves Azure / Snowflake integration and commercializing the offering so BlackRock's 150+ Aladdin clients can leverage the same curated data products and dashboards that are available internally. The successful candidate will contribute to the technical design and delivery of a curated line of data products, related pipelines, and visualizations in collaboration with SMEs across GIO, Technology and Operations, and the Aladdin business. Responsibilities Specifically, we expect the role to involve the following core responsibilities and would expect a successful candidate to be able to demonstrate the following (not in order of priority): Design, develop and maintain a data analytics infrastructure. Work with a project manager, or drive the project management of team deliverables. Work with subject matter experts and users to understand the business and their requirements.
Help determine the optimal dataset and structure to deliver on those user requirements. Work within a standard data/technology deployment workflow to ensure that all deliverables and enhancements are provided in a disciplined, repeatable, and robust manner. Work with the team lead to understand and help prioritize the team's queue of work. Automate periodic (daily/weekly/monthly/quarterly or other) reporting processes to minimize or eliminate the associated developer BAU activities. Leverage industry-standard and internal tooling whenever possible in order to reduce the amount of custom code that requires maintenance. Experience 3+ years of experience in writing ETL, data curation, and analytical jobs using Hadoop-based distributed computing technologies: Spark/PySpark, Hive, etc. 3+ years of experience working with large enterprise databases, preferably cloud-based databases/data warehouses such as Snowflake on an Azure or AWS set-up. Knowledge of and experience working with Data Science / Machine Learning / Gen AI frameworks in Python (e.g., Azure OpenAI, Meta, etc.). Knowledge of and experience building reporting and dashboards using BI tools: Tableau, MS Power BI, etc. Prior experience working with source code version management tools like GitHub, etc.
Prior experience working with and following Agile-based workflow paths and ticket-based development cycles. Prior experience setting up infrastructure and working on Big Data analytics. Strong analytical skills with the ability to collect, organize, analyse, and disseminate significant amounts of information with attention to detail and accuracy. Experience working with SMEs/Business Analysts, and working with stakeholders for sign-off. Our benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.
This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law. Job Requisition # R254094
Posted 2 days ago
12.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at https://www.jnj.com Job Function Data Analytics & Computational Sciences Job Sub Function Biostatistics Job Category Scientific/Technology All Job Posting Locations: Bangalore, Karnataka, India, Mumbai, India, PENJERLA, Telangana, India Job Description Position Summary: The Principal Programming Lead is a highly skilled Programmer with expert knowledge of programming languages, tools, complex data structures, and industry standards. The position requires proven technical and analytic abilities and strong capabilities in leading activities and programming teams in accordance with departmental processes and procedures. As a highly experienced Principal Programming Lead, they apply expert technical, scientific, and problem-solving skills, providing innovative and forward-thinking solutions to ensure operational efficiency across assigned projects while providing training, coaching, and mentoring to other programmers. The Principal Programming Lead position is accountable for the planning, oversight, and delivery of programming activities in support of one or more clinical projects, compounds, or submissions of high complexity and criticality. In this role, the Principal Programming Lead is responsible for making decisions and recommendations that impact the efficiency, timeliness, and quality of deliverables with a high degree of autonomy, and provides leadership, direction, and technical and project-specific guidance to programming teams.
In addition, this position may lead and contribute expert knowledge and technical skills to assigned delivery unit, departmental innovation, and process improvement projects.
Principal Responsibilities
Designs and develops efficient programs and technical solutions in support of highly complex/critical clinical research analysis and reporting activities, including urgent/on-demand analysis requests.
Provides technical and project-specific guidance to programming team members to ensure high quality and on-time deliverables in compliance with departmental processes.
Coordinates and oversees programming team activities and may provide matrix leadership to one or more programming teams as needed.
Shares knowledge and provides guidance and coaching to programmers in developing advanced technical and analytical abilities.
Performs comprehensive review of, and provides input into, project requirements and documentation.
Collaborates effectively with programming and cross-functional team members and counterparts to achieve project goals, and independently manages escalations.
As applicable, oversees programming activities outsourced to third-party vendors, adopting appropriate processes and best practices to ensure their performance meets the agreed-upon scope, timelines, and quality.
Responsible for adoption of new processes and technology on assigned projects/programs in collaboration with departmental technical groups and programming portfolio leads.
Contributes to and may lead departmental innovation and process improvement projects, and may contribute programming expertise to cross-functional projects/initiatives.
May play the role of a Delivery Unit/Disease Area Expert.
Ensures continued compliance of projects/programs, and completes required company and departmental training, time reporting, and other business/operational processes as required for the position.
Clinical Programming
Oversees the design, development, validation, management, and maintenance of clinical databases according to established standards. Responsible for implementation of data tabulation standards. Performs data cleaning by programming edit checks and data review listings, and performs data reporting by creating data visualizations and listings for medical monitoring and central monitoring.
Statistical Programming
Responsible for implementation of data and analysis standards, ensuring consistency in analysis dataset design across trials within a program.
Principal Relationships
The Principal Programming Lead reports into a people manager position within the Delivery Unit and is accountable to the Portfolio Lead for assigned activities and responsibilities. Functional contacts within IDAR include but are not limited to: Leaders and leads in Data Management and Central Monitoring, Programming Leads, Clinical Data Standards, Regulatory Medical Writing Leads, and system support organizations. Functional contacts within J&J Innovative Medicine (as collaborator or peer) include but are not limited to: Statistics, Clinical, Global Medical Safety, Project Management, Procurement, Finance, Legal, Global Privacy, Regulatory, Strategic Partnerships, and Human Resources. External contacts include but are not limited to external partners, CRO management and vendor liaisons, industry peers, and working groups.
Education and Experience Requirements
Bachelor's degree (e.g., BS, BA) or equivalent professional experience is required, preferably in Computer Science, Mathematics, Data Science/Engineering, Public Health, or another relevant scientific field (or equivalent theoretical/technical depth). Advanced degrees (e.g., Master's, PhD) are preferred.
Experience and Skills Required
Approx. 12+ years of experience in the pharmaceutical, CRO, or biotech industry or a related field. In-depth knowledge of programming practices (including tools and processes).
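The edit-check programming described above can be sketched in a few lines of Python. Everything here (field names, ranges, rules) is a hypothetical illustration, not an actual J&J standard; real clinical edit checks would typically run against SDTM-structured datasets in SAS or R.

```python
# Minimal sketch of programmed edit checks for clinical data review.
# Field names, ranges, and rules are hypothetical examples only.

def run_edit_checks(records):
    """Return a listing of records that fail basic range/consistency checks."""
    findings = []
    for rec in records:
        subj = rec.get("subject_id", "UNKNOWN")
        # Range check: systolic blood pressure outside a plausible window
        sbp = rec.get("sbp")
        if sbp is not None and not (60 <= sbp <= 250):
            findings.append((subj, "SBP out of range", sbp))
        # Consistency check: end date must not precede start date
        # (ISO-8601 strings compare correctly as text)
        start, end = rec.get("start_date"), rec.get("end_date")
        if start and end and end < start:
            findings.append((subj, "End date before start date", (start, end)))
    return findings

records = [
    {"subject_id": "S001", "sbp": 120, "start_date": "2024-01-01", "end_date": "2024-02-01"},
    {"subject_id": "S002", "sbp": 300, "start_date": "2024-01-05", "end_date": "2024-01-02"},
]
print(run_edit_checks(records))
```

The failing record is emitted as a review listing that a data manager could action, which is the shape of the data cleaning workflow the posting describes.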
Working knowledge of relevant regulatory guidelines (e.g., ICH-GCP, 21 CFR Part 11). Project, risk, and team management skills, and an established track record of leading teams to successful outcomes. Excellent planning and coordination of project delivery. Established track record of collaborating with multi-functional teams in a matrix environment and partnering with/managing stakeholders, customers, and vendors. Excellent communication, leadership, influencing, and decision-making skills, and a demonstrated ability to foster team productivity and cohesiveness while adapting to rapidly changing organizations and business environments. Demonstrated experience managing the outsourcing or externalization of clinical programming activities in the clinical trials setting (e.g., working with CROs, academic institutions) is preferred. Expert knowledge of CDISC standards. Expert knowledge of relevant programming languages for data manipulation and reporting; may include SAS, R, Python, etc. Knowledge of SAS is required for a Clinical Programming role. Excellent written and verbal communication, influencing, and negotiation skills. Advanced knowledge of programming and industry-standard data structures, and a thorough understanding of the end-to-end clinical trial process and relevant clinical research concepts.
Other
Innovative thinking that allows for optimal design and execution of programming development strategies. Development and implementation of business change/innovative ways of working.
Posted 2 days ago
5.0 - 7.0 years
0 Lacs
India
On-site
About the role: As a Data Engineer, you will be instrumental in managing our extensive soil carbon dataset and creating robust data systems. You are expected to be involved in the full project lifecycle, from planning and design, through development, and onto maintenance, including pipelines and dashboards. You’ll interact with Product Managers, Project Managers, Business Development, and Operations teams to understand business demands and translate them into technical solutions. Your goal is to provide an organisation-wide source of truth for various downstream activities while also working towards improving and modernising our current platform.
Key responsibilities:
Design, develop, and maintain scalable data pipelines to process soil carbon and agricultural data
Create and optimise database schemas and queries
Implement data quality controls and validation processes
Adapt existing data flows and schemas to new products and services under development
Required qualifications:
BS/B.Tech in Computer Science or equivalent practical experience, with 5-7 years as a Data Engineer or in a similar role
Strong SQL skills and experience optimising complex queries
Proficiency with relational databases, preferably MySQL
Experience building data pipelines, transformations, and dashboards
Ability to troubleshoot and fix performance and data issues across the database
Experience with AWS services (especially Glue, S3, RDS)
Exposure to the big data ecosystem – Snowflake/Redshift/Tableau/Looker
Python programming skills
Excellent written and verbal communication skills in English
An ideal candidate would also have:
A high degree of attention to detail to uncover data discrepancies and fix them
Familiarity with geospatial data
Experience with scientific or environmental datasets
Some understanding of the agritech or environmental sustainability sectors
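As a rough illustration of the data quality controls and validation processes this role involves, a pipeline stage might split incoming records into accepted and rejected sets before loading. The column names and thresholds below are invented for the sketch, not taken from any actual schema.

```python
# Sketch of a data-quality validation step of the kind a soil-data pipeline
# might run before loading. Column names and ranges are hypothetical.

def validate_rows(rows, required=("field_id", "sample_date", "carbon_pct")):
    """Split rows into (valid, rejected), rejecting missing or out-of-range values."""
    valid, rejected = [], []
    for row in rows:
        if any(row.get(col) in (None, "") for col in required):
            rejected.append((row, "missing required field"))
        elif not (0.0 <= row["carbon_pct"] <= 100.0):
            rejected.append((row, "carbon_pct out of range"))
        else:
            valid.append(row)
    return valid, rejected

rows = [
    {"field_id": "F1", "sample_date": "2024-03-01", "carbon_pct": 2.4},
    {"field_id": "F2", "sample_date": "", "carbon_pct": 1.9},
    {"field_id": "F3", "sample_date": "2024-03-02", "carbon_pct": 120.0},
]
valid, rejected = validate_rows(rows)
print(len(valid), len(rejected))  # 1 valid, 2 rejected
```

Rejected rows would typically be routed to a quarantine table with the reason attached, so discrepancies can be investigated rather than silently dropped.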
Posted 2 days ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
About This Role
Want to elevate your career by being a part of the world's largest asset manager? Do you thrive in an environment that fosters positive relationships and recognizes stellar service? Are analyzing complex problems and identifying solutions your passion? Look no further. BlackRock is currently seeking a candidate to become part of our Global Investment Operations Data Engineering team. We recognize that strength comes from diversity, and will embrace your rare skills, eagerness, and passion while giving you the opportunity to grow professionally and as an individual. We know you want to feel valued every single day and be recognized for your contribution. At BlackRock, we strive to empower our employees and actively engage your involvement in our success. With over US$11.5 trillion of assets under management, we have an extraordinary responsibility: our technology and services empower millions of investors to save for retirement, pay for college, buy a home and improve their financial well-being. Come join our team and experience what it feels like to be part of an organization that makes a difference.
Technology & Operations
Technology & Operations (T&O) is responsible for the firm's worldwide operations across all asset classes and geographies. The operational functions are aligned with clients, products, fund structures and our third-party provider networks. Within T&O, Global Investment Operations (GIO) is responsible for the development of the firm's operating infrastructure to support BlackRock's investment businesses worldwide. GIO spans Trading & Market Documentation, Transaction Management, Collateral Management & Payments, Asset Servicing including Corporate Actions and Cash & Asset Operations, and Securities Lending Operations. GIO provides operational service to BlackRock's Portfolio Managers and Traders globally as well as industry leading service to our end clients.
GIO Engineering
Working in close partnership with GIO business users and other technology teams throughout BlackRock, GIO Engineering is responsible for developing and providing data and software solutions that support GIO business processes globally. GIO Engineering solutions combine technology, data, and domain expertise to drive exception-based, function-agnostic, service-oriented workflows, data pipelines, and management dashboards.
The Role – GIO Engineering Data Lead
Work to date has been focused on building out robust data pipelines and lakes relevant to specific business functions, along with associated pools and Tableau / Power BI dashboards for internal BlackRock clients. The next stage in the project involves Azure / Snowflake integration and commercializing the offering so BlackRock’s 150+ Aladdin clients can leverage the same curated data products and dashboards that are available internally. The successful candidate will contribute to the technical design and delivery of a curated line of data products, related pipelines, and visualizations in collaboration with SMEs across GIO, Technology and Operations, and the Aladdin business.
Responsibilities
Specifically, we expect the role to involve the following core responsibilities, and we would expect a successful candidate to be able to demonstrate the following (not in order of priority):
Design, develop and maintain a data analytics infrastructure
Work with a project manager or drive the project management of team deliverables
Work with subject matter experts and users to understand the business and their requirements.
Help determine the optimal dataset and structure to deliver on those user requirements
Work within a standard data/technology deployment workflow to ensure that all deliverables and enhancements are provided in a disciplined, repeatable, and robust manner
Work with the team lead to understand and help prioritize the team’s queue of work
Automate periodic (daily/weekly/monthly/quarterly or other) reporting processes to minimize/eliminate associated developer BAU activities
Leverage industry-standard and internal tooling whenever possible in order to reduce the amount of custom code that requires maintenance
Experience
3+ years of experience writing ETL, data curation, and analytical jobs using Hadoop-based distributed computing technologies: Spark/PySpark, Hive, etc.
3+ years of knowledge and experience working with large enterprise databases, preferably cloud-based databases/data warehouses such as Snowflake on an Azure or AWS set-up
Knowledge and experience working with Data Science/Machine Learning/Gen AI frameworks in Python (e.g., Azure/OpenAI, Meta, etc.)
Knowledge and experience building reporting and dashboards using BI tools: Tableau, MS Power BI, etc.
Prior experience working with source code version management tools such as GitHub
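The data curation and ETL work listed above can be sketched, at toy scale, in plain Python. In practice this aggregation would run as a Spark/PySpark job (a `df.groupBy(...).agg(...)`) over a distributed dataset; the record fields here are hypothetical.

```python
# Stdlib sketch of the kind of curation an ETL job performs: group raw trade
# records by desk and aggregate notional into a dashboard-ready summary.
from collections import defaultdict

def curate(trades):
    """Aggregate notional per desk and return a sorted, curated summary."""
    totals = defaultdict(float)
    for t in trades:
        totals[t["desk"]] += t["notional"]
    return sorted(totals.items())

trades = [
    {"desk": "rates", "notional": 1_000_000.0},
    {"desk": "credit", "notional": 250_000.0},
    {"desk": "rates", "notional": 500_000.0},
]
print(curate(trades))  # [('credit', 250000.0), ('rates', 1500000.0)]
```

The same shape of logic, expressed in Spark, parallelises across partitions, which is the point of the Hadoop-based technologies the posting names.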
Prior experience working with and following Agile-based workflow paths and ticket-based development cycles
Prior experience setting up infrastructure and working on big data analytics
Strong analytical skills with the ability to collect, organize, analyse, and disseminate significant amounts of information with attention to detail and accuracy
Experience working with SMEs/Business Analysts and working with stakeholders for sign-off
Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.
Our hybrid work model
BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.
About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.
This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
Posted 3 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us as a Data Engineering Lead
This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences
You’ll be simplifying the bank through developing innovative data-driven solutions, inspiring to be commercially successful through insight, and keeping our customers’ and the bank’s data safe and secure
Participating actively in the data engineering community, you’ll deliver opportunities to support our strategic direction while building your network across the bank
We’re recruiting for multiple roles across a range of levels, up to and including experienced managers
What you'll do
We’ll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing and data transformation. You’ll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering, leading a team of data engineers.
We’ll Also Expect You To Be
Working with Data Scientists and Analytics Labs to translate analytical model code to well tested production ready code
Helping to define common coding standards and model monitoring performance best practices
Owning and delivering the automation of data engineering pipelines through the removal of manual stages
Developing comprehensive knowledge of the bank’s data structures and metrics, advocating change where needed for product development
Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight
Leading and delivering data engineering strategies to build a scalable data architecture and customer feature rich dataset for data scientists
Leading and developing solutions for streaming data ingestion and transformations in line with streaming strategy
The skills you'll need
To be successful in this role, you’ll need to be an expert level programmer and data engineer with a qualification in Computer Science or Software Engineering. You’ll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large scale data.
We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure
You’ll Also Demonstrate
Knowledge of core computer science concepts such as common data structures and algorithms, profiling or optimisation
An understanding of machine-learning, information retrieval or recommendation systems
Good working knowledge of CI/CD tools
Knowledge of programming languages in data engineering such as Python or PySpark, SQL, Java, and Scala
An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow
Knowledge of messaging, event or streaming technology such as Apache Kafka
Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
Extensive experience using RDBMS, ETL pipelines, Python, Hadoop and SQL
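Streaming data ingestion and transformation, as referenced in this role, can be illustrated with a generator pipeline: events flow through parse and filter stages without materialising the whole stream. A production system would consume from a broker such as Apache Kafka; the event format here is an invented example.

```python
# Sketch of streaming-style ingestion and transformation with generators.
# Each event passes through parse -> filter lazily, one at a time.

def parse(lines):
    """Turn raw CSV-ish event lines into dicts as they arrive."""
    for line in lines:
        ts, account, amount = line.split(",")
        yield {"ts": ts, "account": account, "amount": float(amount)}

def large_only(events, threshold=100.0):
    """Pass through only events at or above the threshold."""
    yield from (e for e in events if e["amount"] >= threshold)

raw = ["2024-06-01T10:00,acc1,250.0", "2024-06-01T10:01,acc2,10.0"]
flagged = list(large_only(parse(raw)))
print(flagged)  # only the 250.0 event survives
```

Because each stage is lazy, the same topology scales from a list of two strings to an unbounded topic: swap `raw` for a consumer loop and the transformation code is unchanged.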
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join us as a Data Engineering Lead
This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences
You’ll be simplifying the bank through developing innovative data-driven solutions, inspiring to be commercially successful through insight, and keeping our customers’ and the bank’s data safe and secure
Participating actively in the data engineering community, you’ll deliver opportunities to support our strategic direction while building your network across the bank
We’re recruiting for multiple roles across a range of levels, up to and including experienced managers
What you'll do
We’ll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing and data transformation. You’ll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering, leading a team of data engineers.
We’ll Also Expect You To Be
Working with Data Scientists and Analytics Labs to translate analytical model code to well tested production ready code
Helping to define common coding standards and model monitoring performance best practices
Owning and delivering the automation of data engineering pipelines through the removal of manual stages
Developing comprehensive knowledge of the bank’s data structures and metrics, advocating change where needed for product development
Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight
Leading and delivering data engineering strategies to build a scalable data architecture and customer feature rich dataset for data scientists
Leading and developing solutions for streaming data ingestion and transformations in line with streaming strategy
The skills you'll need
To be successful in this role, you’ll need to be an expert level programmer and data engineer with a qualification in Computer Science or Software Engineering. You’ll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large scale data.
We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure
You’ll Also Demonstrate
Knowledge of core computer science concepts such as common data structures and algorithms, profiling or optimisation
An understanding of machine-learning, information retrieval or recommendation systems
Good working knowledge of CI/CD tools
Knowledge of programming languages in data engineering such as Python or PySpark, SQL, Java, and Scala
An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow
Knowledge of messaging, event or streaming technology such as Apache Kafka
Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
Extensive experience using RDBMS, ETL pipelines, Python, Hadoop and SQL
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position: Azure Data Engineer
Location: Pune
Mandatory Skills: Azure Databricks, PySpark
Experience: 5 to 9 years
Notice Period: 0 to 30 days / immediate joiner / serving notice period
Must-have experience:
Strong design and data solutioning skills
Hands-on PySpark experience with complex transformations and large dataset handling
Good command of and hands-on experience in Python, including the following concepts, packages, and tools:
Object-oriented and functional programming
NumPy, Pandas, Matplotlib, requests, pytest
Jupyter, PyCharm and IDLE
Conda and virtual environments
Working experience with Hive, HBase or similar is a must
Azure skills:
Working experience in Azure Data Lake, Azure Data Factory, Azure Databricks, Azure SQL Databases is a must
Azure DevOps
Azure AD integration, service principal, pass-thru login, etc.
Networking – VNet, private links, service connections, etc.
Integrations – Event Grid, Service Bus, etc.
Database skills:
Oracle, Postgres, SQL Server – experience with any one database
Oracle PL/SQL or T-SQL experience
Data modelling
Thank you
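Since the posting calls out Python alongside pytest, here is a minimal sketch of the kind of small, testable transformation such work involves. The function, field names, and data are illustrative only.

```python
# Illustrative transformation written so pytest could exercise it:
# for each sensor, keep the reading with the highest value.

def max_per_sensor(readings):
    """Reduce (sensor, value) pairs to the maximum value per sensor."""
    best = {}
    for sensor, value in readings:
        if sensor not in best or value > best[sensor]:
            best[sensor] = value
    return best

def test_max_per_sensor():  # pytest would collect this by its test_ prefix
    readings = [("t1", 20.5), ("t2", 19.0), ("t1", 22.1)]
    assert max_per_sensor(readings) == {"t1": 22.1, "t2": 19.0}

test_max_per_sensor()
print("ok")
```

In a Databricks/PySpark setting the same logic would be a `groupBy("sensor").max("value")`, but keeping core transforms as plain, unit-testable functions is what makes the pytest requirement meaningful.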
Posted 3 days ago
10.0 years
2 - 6 Lacs
Hyderābād
Remote
Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.
Job Description: This is an individual contributor role responsible for driving portfolio analytics, portfolio health checks and long-term portfolio revenue projections for our internal pipeline assets. The right candidate should have extensive experience in presenting portfolio assessments to senior leadership, identifying potential gaps and associated interventions needed, and hands-on expertise in building Excel-based capabilities from scratch, with exposure to SQL/VBA/Python and other coding/reporting platforms.
Key Responsibilities:
Portfolio Health Analysis: Conduct competitor concentration analysis by therapeutic indication using external datasets. Benchmark success rates versus modeled PTRS and conduct internal portfolio assessment vs. external perspective.
Identify gaps, articulate findings for senior leadership for actionability, and help junior members understand the broader context
Asset Favorability Framework: Thought partner in developing and maintaining a leadership dashboard capturing portfolio ranking across key variables of interest
Long-Term Financial Planning (LTFP): Conduct early asset modeling for LTFP, including discrete models for early-stage assets, and conduct scenario analytics to derive the range of possibilities given market dynamics
Project and People Management: Operate as a project manager alongside the individual contributor role, managing competing priorities and work allocation, ensuring on-time delivery of projects, providing oversight and feedback to analysts, and participating in talent planning and year-end reviews for the associates aligned to the team
Collaborate with cross-functional teams to gather and analyze relevant data, market trends and historical performance
Provide training, guidance and mentorship to junior analysts and team members as required
Provide significant input into, and communicate diplomatically regarding, performance reviews, promotions and compensation actions for team members
Strive to create standards for dataset usage through central repositories and cross-team collaboration
Skills and competencies:
Strong analytical skills and experience in conducting portfolio analytics for a pharmaceutical MNC
SME in therapeutic area assessments, the financial planning process, and commercial forecasting for early-stage assets
Strong verbal/written skills, with the ability to effectively communicate with senior leadership
Strong project management and interpersonal skills, with the ability to lead diverse teams and manage a heavy workload
Strong creative problem-solving skills and business acumen, with the ability to identify key findings from disparate data sources to provide recommendations
Ability to work in a matrix organization
Experience:
We welcome a bachelor's or master's degree (MBA preferred; quantitative area)
10+ years of pharmaceutical commercial analytics or forecasting experience
Experience operating successfully in a complex organizational environment
Experience interacting with senior management; understanding, anticipating, and fulfilling their insight/analytical information requirements
Hands-on expertise in pharmaceutical forecasting, portfolio and commercial analytics, deep understanding of therapeutic areas, expertise in modeling platforms, advanced Excel & VBA, data manipulation software and visualization tools (e.g., Tableau, Python, SQL, Power BI, etc.)
Expertise in a few of the datasets (Visible Alpha, ProSight, Evaluate Pharma, DRG, Biomedtracker, IQVIA, Pharmaprojects, AlphaSense) will be a plus
If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.
Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.
On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role: Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility.
For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function. BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement. BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/ Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
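Long-term revenue projections of the kind this posting describes commonly combine an unadjusted forecast with a probability of technical and regulatory success (PTRS). The sketch below shows that arithmetic with invented numbers; it is not BMS data or methodology.

```python
# Hedged sketch of a risk-adjusted portfolio revenue projection:
# expected revenue per year = sum over assets of PTRS x unadjusted forecast.
# All PTRS values and forecasts are invented illustrations.

def risk_adjusted(forecasts, ptrs):
    """forecasts: {asset: [yearly revenue]}; ptrs: {asset: prob. of success}."""
    years = max(len(series) for series in forecasts.values())
    totals = [0.0] * years
    for asset, series in forecasts.items():
        p = ptrs[asset]
        for i, rev in enumerate(series):
            totals[i] += p * rev
    return totals

forecasts = {"asset_a": [100.0, 200.0, 300.0], "asset_b": [50.0, 75.0, 100.0]}
ptrs = {"asset_a": 0.4, "asset_b": 0.9}
print(risk_adjusted(forecasts, ptrs))
```

Scenario analytics then amounts to re-running the same calculation under alternative PTRS or forecast assumptions to derive a range of outcomes, which is typically where Excel/VBA models are replaced or supplemented by Python.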
Posted 3 days ago
0 years
3 - 7 Lacs
Hyderābād
On-site
Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Principal Consultant-Power BI Developer!
Responsibilities:
• Working within a team to identify, design and implement a reporting/dashboarding user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices
• Gathering query data from tables of the industry cognitive model/data lake and building data models with BI tools
• Applying requisite business logic using data transformation and DAX
• Understanding of Power BI data modelling and various in-built functions
• Knowledge of report sharing through Workspace/App, access management, dataset scheduling and Enterprise Gateway
• Understanding of static and dynamic row-level security
• Ability to create wireframes based on user stories and business requirements
• Basic understanding of ETL and data warehousing concepts
• Conceptualizing and developing industry-specific insights in the form of dashboards/reports/analytical web applications to deliver Pilots/Solutions following best practices
Qualifications we seek in you!
Minimum Qualifications
Graduate
Why join Genpact?
Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation
Make an impact – Drive change for global enterprises and solve business challenges that matter
Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities
Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job: Principal Consultant. Primary Location: India-Hyderabad. Schedule: Full-time. Education Level: Bachelor's / Graduation / Equivalent. Job Posting: Jun 13, 2025, 5:48:11 AM. Unposting Date: Ongoing. Master Skills List: Digital. Job Category: Full Time
Posted 3 days ago
3.0 years
0 Lacs
Dehradun, Uttarakhand, India
On-site
In accordance with the strategic editorial plan, this position is primarily responsible for maintaining Sage Data and supporting major data project initiatives. This position will work closely with key product stakeholders across the library editorial, product development, publishing technologies, and marketing/sales teams. About Our Team The Editorial Processing team at Sage is a dynamic and collaborative group dedicated to curating, maintaining, and enhancing high-quality digital resources for the academic community. We are passionate about data integrity, user experience, and delivering valuable insights through innovative data products like Sage Data. Working closely with stakeholders across editorial, technology, marketing, and product development, our team drives initiatives that ensure our resources meet the evolving needs of researchers, students, and librarians. We combine editorial excellence with technical acumen and project management skills, fostering an environment where detail-oriented, analytical, and creative professionals thrive. Joining our team means becoming part of a mission-driven culture that values precision, innovation, and collaboration, where every voice is heard and every contribution counts toward advancing knowledge and accessibility in the academic world. What is your team’s key role in the business? Our team plays a vital role in ensuring the quality, accuracy, and consistency of published content across all Learning Resource platforms. We act as the bridge between content creation and publication, managing the end-to-end editorial workflow with precision and efficiency. Our team is responsible for reviewing, formatting, and processing submissions to meet editorial standards and publication guidelines. From initial manuscript handling to final approvals, we ensure each piece meets rigorous quality benchmarks.
With a strong focus on detail, timeliness, and consistency, the Editorial Processing Team supports the broader mission of delivering trusted, high-quality content to our audience. Our work may be behind the scenes, but it is foundational to the credibility and success of our publications. What other departments do you work closely with? Publishing Technologies / IT – to support content ingestion, interface functionality, and technical documentation. Product Development – to align editorial work with product strategy and feature enhancements. Sales and Marketing – to develop support materials and communicate product value to library customers and end users. Content Teams – to manage the ongoing acquisition, updating, and quality control of datasets. Customer Support / User Services – to ensure a seamless experience for users and address feedback or technical issues related to content. Key Accountabilities The essential job functions include, but are not limited to, the following for Sage data products: With the Content team, contribute to the content ingestion and update process for Sage data products. Create dataset metadata, ensuring accuracy and timeliness. Perform quality assurance checks on data content and content behavior on the Sage Data interface. Create and maintain technical documentation on the collection and ingest of Sage Data datasets from original sources. Contribute to the development and maintenance of editorially created end-user support materials for data products. Work with the Executive Editor to assist Sales and Marketing in creating necessary support materials. Contribute to decision-making about product functionality and content acquisitions.
Skills, Qualifications & Experience Any combination equivalent to, but not limited to, the following: At least 3 years of publishing experience, preferably in developing digital resources for the academic library market, OR at least 3 years' experience in technical or digital services for a library, library consortium, archives, or museum. Proficient computer and database skills; competency in the Microsoft 365 suite of software. Language skills, reasoning ability, and analytical aptitude. Exceptional reading and comprehension skills, with an ability to distil and communicate dense information concisely in English. Detail oriented with strong copyediting, proofreading, and quality assurance skills. Effective listening, verbal, and written communication skills. Comfortable with technology. Ability to foster effective relationships with marketing, IT, and product stakeholders. Ability to set and follow through on priorities. Ability to plan and manage multiple projects and effectively multi-task. Ability to effectively manage time to meet deadlines and work professionally under pressure. Ability to maintain confidentiality and work with diplomacy. Ability to reason and problem solve. Proficient analytical and mathematical skills. Effective public speaking and/or presenting to individuals and groups. Diversity, Equity, and Inclusion At Sage we are committed to building a diverse and inclusive team that is representative of all sections of society and to sustaining a culture that celebrates difference, encourages authenticity, and creates a deep sense of belonging. We welcome applications from all members of society irrespective of age, disability, sex or gender identity, sexual orientation, color, race, nationality, ethnic or national origin, religion or belief, as creating value through diversity is what makes us strong. Sage is a global academic publisher of books, journals, and library resources with a growing range of technologies to enable discovery, access, and engagement.
Our mission is building bridges to knowledge — supporting the development of ideas through the research process to scholarship that is certified, taught, and applied. Sage is committed to the full inclusion of all qualified applicants. Accommodations will be made for any part of the interview process.
Posted 3 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description Job Objective: To contribute strong problem-solving skills and process orientation to projects independently, and to help the team with required training and mentoring of new joiners. Designation: Senior Business Analyst Job Location: Bangalore/Chennai Type of employment: Permanent Roles & Responsibilities: Provide insights through data analysis and visualization for small to large datasets Translate business questions into analytical problems to develop business rules, process flows, and methodologies for analysis Summarize the analysis using basic statistical methods Work with the team to execute ad-hoc/regular reporting projects Requirements: 2+ years of professional experience, preferably in the pharma/life sciences domain. Hands-on experience in data analytics. Technical Skills: Must – Advanced Excel, SQL or another database query language Good to have: Data analysis tools: Python or PySpark Data visualization tools: Tableau, Qlik, or Power BI Automation tools: VBA or Google Apps Script No/low-code platforms: Dataiku Domain and Data skillset: Good to have – knowledge of one or more of the below datasets: Patient-level data analytics (RWD Data Lake, Optum/DRG Claims, IQVIA APLD, IQVIA LAAD, Symphony claims, Komodo claims) Field/Sales analytics (ad-hoc field analytics, field operations: sizing, structuring, targeting & segmentation) Marketing analytics (market assessment, forecasting, competitive intelligence) Experience in analyzing IQVIA/IMS data (patient insights, MIDAS) Additional Skills: Ability to work independently across teams Passion for solving challenging analytical problems Ability to assess a problem quickly, qualitatively, and quantitatively Ability to work productively with team members and to identify and resolve tough issues collaboratively Excellent communication skills Company Description: Trinity is a life science consulting firm, founded in 1996.
We are a trusted strategic partner that provides evidence-based solutions for life science corporations around the world. With over 25 years of experience, we are committed to solving our clients' most challenging problems through exceptional levels of service, powerful tools, and data-driven insights. Globally we have 12 offices and 270+ life sciences customers, with 1,200+ employees worldwide. We opened our India office in 2017 and now have around 350+ employees in India, a team we are keen to keep growing. Qualifications: B.E. graduates preferred.
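The "translate a business question into an analytical problem" responsibility above can be made concrete with a small sketch. Assuming a hypothetical patient-level claims table (the schema and rows are illustrative inventions, not any of the IQVIA, Symphony, or Komodo datasets named in the listing), a typical ad-hoc request — "how many distinct patients filled a prescription each year?" — reduces to a single SQL aggregation:

```python
import sqlite3

# Hypothetical patient-level claims rows: (patient_id, drug, fill_year).
rows = [
    ("P1", "drugA", 2022), ("P1", "drugA", 2023),
    ("P2", "drugA", 2023), ("P3", "drugB", 2023),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE claims (patient_id TEXT, drug TEXT, fill_year INT)")
con.executemany("INSERT INTO claims VALUES (?, ?, ?)", rows)

# Business question -> analytical rule: count each patient at most
# once per year, regardless of how many claims they filed.
result = con.execute(
    "SELECT fill_year, COUNT(DISTINCT patient_id) AS patients "
    "FROM claims GROUP BY fill_year ORDER BY fill_year"
).fetchall()

print(result)  # [(2022, 1), (2023, 3)]
```

The key translation step is the COUNT(DISTINCT ...): the business phrase "how many patients" becomes a de-duplication rule, not a raw row count.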
Posted 3 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description Job Objective: To contribute strong problem-solving skills and process orientation to projects independently, and to help the team with required training and mentoring of new joiners. Designation: Senior Business Analyst Job Location: Bangalore/Chennai Type of employment: Permanent Roles & Responsibilities: Provide insights through data analysis and visualization for small to large datasets Translate business questions into analytical problems to develop business rules, process flows, and methodologies for analysis Summarize the analysis using basic statistical methods Work with the team to execute ad-hoc/regular reporting projects Requirements: 2+ years of professional experience is required Must be well versed in MS Excel, Word, and PowerPoint Technical skills: Must – experience working with a database query language (SQL) Good to have – Python, VBA, any visualization tool (Tableau, Qlik, Power BI), etc. Hands-on experience in data analytics. Dataset Knowledge: Must – Patient-level data analytics (IQVIA LAAD, Symphony claims, Komodo claims) Good to have – Patient-level data analytics (RWD Data Lake, Optum/DRG Claims), IQVIA sales data analytics Additional Skills: Ability to work independently across teams Passion for solving challenging analytical problems Ability to assess a problem quickly, qualitatively, and quantitatively Ability to work productively with team members and to identify and resolve tough issues collaboratively Good communication skills Company Description: Trinity is a life science consulting firm, founded in 1996. We are a trusted strategic partner that provides evidence-based solutions for life science corporations around the world. With over 25 years of experience, we are committed to solving our clients' most challenging problems through exceptional levels of service, powerful tools, and data-driven insights.
Globally we have 12 offices and 270+ life sciences customers, with 1,200+ employees worldwide. We opened our India office in 2017 and now have around 350+ employees in India, a team we are keen to keep growing. Qualifications: B.E. graduates preferred.
Posted 3 days ago
0 years
0 Lacs
Nagpur, Maharashtra, India
On-site
Position: AI/ML Engineer Location: Nagpur (work from office, strictly) Salary: Negotiable based on current package We are looking for an expert in machine learning to help us extract value from our data. You will lead all the processes from data collection, cleaning, and preprocessing to training models and deploying them to production. The ideal candidate will be passionate about artificial intelligence and stay up to date with the latest developments in the field. Roles and Responsibilities: Understanding business objectives and developing models that help to achieve them, along with metrics to track their progress Managing available resources such as hardware, data, and personnel so that deadlines are met Analyzing the ML algorithms that could be used to solve a given problem and ranking them by their probability of success Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world Verifying data quality, and/or ensuring it via data cleaning Supervising the data acquisition process if more data is needed Finding available datasets online that could be used for training Defining validation strategies Defining the preprocessing or feature engineering to be done on a given dataset Defining data augmentation pipelines Training models and tuning their hyperparameters Analyzing the errors of the model and designing strategies to overcome them Deploying models to production Skills: Proficiency with a deep learning framework such as TensorFlow or Keras Proficiency with Python and basic libraries for machine learning such as scikit-learn and pandas Expertise in visualizing and manipulating big datasets Proficiency with OpenCV Familiarity with Linux Ability to select hardware to run an ML model with the required latency
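Several of the responsibilities above — defining a validation strategy, training models, and tuning hyperparameters — fit together in one short scikit-learn sketch. This is a minimal illustration on a bundled toy dataset; the estimator, parameter grid, and splits are assumptions for the example, not the employer's actual stack:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Validation strategy: hold out a test set, then cross-validate on the rest.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Hyperparameter tuning: rank candidate settings by cross-validated score,
# which mirrors "ranking algorithms by their probability of success".
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=3,
)
grid.fit(X_tr, y_tr)

# Error analysis starts from held-out performance, not training accuracy.
print(grid.best_params_, round(grid.score(X_te, y_te), 2))
```

In a production setting the same pattern scales up: the grid becomes a larger search space (or a randomized/Bayesian search), and the held-out score feeds the error-analysis and deployment decisions listed above.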
Posted 3 days ago