
3678 Redshift Jobs - Page 18

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Mumbai, Maharashtra

On-site

DESCRIPTION
At Amazon Ads, we sit at the intersection of Advertising, Media and eCommerce. With millions of customers visiting us every day to find, discover, and buy products, we believe that advertising, when done well, can enhance the value of the customer experience and generate a positive ROI for our advertising partners. We strive to make advertising relevant so that customers welcome it - across Amazon's ecosystem of mobile and desktop websites, proprietary devices, and the Amazon Advertising business. If you're interested in innovative advertising solutions with a relentless focus on the customer, you've come to the right place!

As a Business Analyst on the Amazon Ads team, you will be responsible for analyzing advertising performance data, providing insights to drive business decisions, and supporting the growth of Amazon's advertising business in India. We are seeking an experienced and highly skilled Reporting & Automation Specialist to lead our data analytics and reporting efforts. This role will be responsible for overseeing complex data flows, developing advanced reporting solutions, and driving data-driven decision-making across Business, Finance, and Leadership teams. The ideal candidate will have a deep understanding of business intelligence tools, advanced SQL skills, and the ability to translate complex data into actionable insights.

Key job responsibilities
- Lead the development and implementation of sophisticated reporting solutions, integrating advertising data from MADS, Hercules, and Spektr with retail platform datasets to provide comprehensive business intelligence.
- Design and deliver high-impact reports and dashboards for Business, Finance, and Leadership teams, ensuring data accuracy, relevance, and alignment with strategic objectives.
- Serve as the senior point of contact for complex reporting-related queries, providing expert guidance and insights to stakeholders across the organization.
- Drive continuous improvement initiatives to optimize reporting processes, including the implementation of advanced automation techniques and cutting-edge BI tools.
- Lead the development of complex SQL queries and data models to support in-depth analysis and insight generation for business teams.
- Architect and implement sophisticated reporting and analytics solutions using Amazon QuickSight, Excel macros, and other advanced BI tools.
- Collaborate with cross-functional teams to elevate the overall data analytics capabilities of the organization.

BASIC QUALIFICATIONS
- 5+ years of experience with Excel (including VBA, pivot tables, array functions, power pivots, etc.) and data visualization tools such as Tableau
- Bachelor's degree or equivalent
- Experience defining requirements and using data and metrics to draw business insights
- Experience with Excel and SQL
- Proven track record of implementing large-scale process improvements through automation and advanced analytics
- Expert-level proficiency in SQL, including experience with complex queries and data modeling
- Demonstrated ability to manage multiple high-priority reporting cycles and projects simultaneously
- Exceptional attention to detail and ability to maintain accuracy when working with large, complex datasets

PREFERRED QUALIFICATIONS
- Advanced certifications in relevant BI tools (e.g., Amazon QuickSight, Tableau, Power BI)
- Experience with cloud-based data warehousing solutions (e.g., Amazon Redshift, Snowflake)
- Proficiency in programming languages such as Python or R for data analysis and automation
- Knowledge of machine learning and predictive analytics techniques
- Experience working in e-commerce or digital advertising industries
- Strong presentation skills with the ability to communicate complex data insights to both technical and non-technical audiences
- Track record of driving data-driven decision-making at senior leadership levels

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
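To make the "complex SQL queries" requirement above concrete, here is a minimal, hypothetical sketch of a week-over-week campaign spend report run against Redshift through its PostgreSQL-compatible interface with psycopg2. The table, columns, and connection details are invented for illustration and do not come from the posting.

```python
# Illustrative only: table/column names (ad_spend_daily, campaign_id, spend,
# spend_date) and the connection string are assumptions, not Amazon's schema.
import psycopg2

QUERY = """
SELECT
    campaign_id,
    DATE_TRUNC('week', spend_date) AS week,
    SUM(spend) AS weekly_spend,
    LAG(SUM(spend)) OVER (
        PARTITION BY campaign_id
        ORDER BY DATE_TRUNC('week', spend_date)
    ) AS prev_week_spend
FROM ad_spend_daily
GROUP BY campaign_id, DATE_TRUNC('week', spend_date)
ORDER BY campaign_id, week;
"""

def fetch_weekly_spend(dsn: str):
    """Run the report query and return all result rows."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(QUERY)
            return cur.fetchall()

if __name__ == "__main__":
    # Redshift speaks the PostgreSQL wire protocol on port 5439.
    rows = fetch_weekly_spend(
        "host=example-cluster.redshift.amazonaws.com port=5439 "
        "dbname=analytics user=report_user password=***"
    )
    for row in rows[:5]:
        print(row)
```

The window function places each campaign's prior-week spend alongside the current week, which is the typical building block for the week-over-week dashboards such a role produces.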

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services, and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.

Job Description
We are seeking a skilled Data Analyst to join our team. The ideal candidate will have over 5 years of relevant experience with hands-on expertise in SQL programming and creating insightful visualizations. Knowledge of AWS Redshift or similar databases is essential.

Key Responsibilities:
- Analyse complex data sets to identify trends, patterns, and insights.
- Develop and implement statistical and machine learning models to solve business problems.
- Write efficient and optimized SQL queries for data extraction and manipulation.
- Create interactive and informative visualizations to present data insights.
- Collaborate with cross-functional teams to understand business requirements and provide analytical solutions.
- Maintain and optimize existing data pipelines and workflows.
- Communicate complex data findings clearly and effectively to stakeholders.

Required Skills:
- 5+ years of relevant experience in data analysis.
- Hands-on experience with SQL programming.
- Proven experience in building statistical and machine learning models.
- Proficiency in building visualizations using tools like Tableau, Power BI, or similar.
- Knowledge of AWS Redshift or similar database technologies.
- Experience working with Databricks.

Preferred Skills (not mandatory):
- Familiarity with R / Python (nice to have)

Other Requirements:
- Excellent communication skills.
- Strong business analysis capabilities.
- High attention to detail.
- Ability to work effectively as a team player or independently as an Individual Contributor (IC).

Qualifications: Graduate
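As a hedged illustration of the analyse-then-visualize loop this posting describes, the sketch below aggregates a query result to a monthly trend and charts it with pandas and matplotlib; the column names and sample data are invented, not from the posting.

```python
# Hypothetical example: 'order_date' and 'revenue' are made-up columns.
import pandas as pd
import matplotlib.pyplot as plt

def plot_monthly_revenue(df: pd.DataFrame) -> None:
    """Aggregate daily revenue to calendar months and plot the trend."""
    monthly = (
        df.assign(month=df["order_date"].dt.to_period("M").dt.to_timestamp())
          .groupby("month", as_index=False)["revenue"]
          .sum()
    )
    monthly.plot(x="month", y="revenue", kind="line", title="Monthly revenue")
    plt.tight_layout()
    plt.show()

if __name__ == "__main__":
    data = pd.DataFrame({
        "order_date": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-02-10"]),
        "revenue": [120.0, 90.5, 210.0],
    })
    plot_monthly_revenue(data)
```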

Posted 1 week ago

Apply

4.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

Remote

About The Company
Armada is an edge computing startup that provides computing infrastructure to remote areas where connectivity and cloud infrastructure are limited, as well as areas where data needs to be processed locally for real-time analytics and AI at the edge. We're looking to bring on the most brilliant minds to help further our mission of bridging the digital divide with advanced technology infrastructure that can be rapidly deployed anywhere.

About The Role
We are looking for a detail-oriented and technically skilled BI Engineer to design, build, and maintain robust data pipelines and visualization tools that empower data-driven decision-making across the organization. The ideal candidate will work closely with stakeholders to translate business needs into actionable insights by developing and optimizing BI solutions.

Location: This role is office-based at our Trivandrum, Kerala office.

What You'll Do (Key Responsibilities)
- Design, develop, and maintain scalable ETL (Extract, Transform, Load) pipelines to support data integration from multiple sources.
- Build and optimize data models and data warehouses for business reporting and analysis.
- Develop dashboards, reports, and data visualizations using BI tools (e.g., Power BI, Tableau, Looker).
- Collaborate with data analysts, data scientists, and business stakeholders to understand reporting needs and deliver effective solutions.
- Ensure data accuracy, consistency, and integrity across reporting systems.
- Perform data validation, cleansing, and transformation as necessary.
- Identify opportunities to automate processes and improve reporting efficiency.
- Monitor BI tools and infrastructure performance, and troubleshoot issues as needed.
- Stay up to date with emerging BI technologies and best practices.

Required Qualifications
- Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field.
- 2–4 years of experience as a BI Engineer, Data Engineer, or similar role.
- Proficiency in SQL and experience with data modeling and data warehousing (e.g., Snowflake, Redshift, BigQuery).
- Experience with BI and data visualization tools (e.g., Power BI, Tableau, Qlik, Looker).
- Strong understanding of ETL processes and data pipeline design.
- Excellent problem-solving skills and attention to detail.

Preferred
- Experience with Python, R, or other scripting languages for data manipulation.
- Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud Platform).
- Knowledge of version control (e.g., Git) and CI/CD practices.
- Experience with APIs, data governance, and data cataloging tools.

Compensation
We offer a competitive base salary along with equity options, providing an opportunity to share in the success and growth of Armada.

You're a Great Fit if You're
- A go-getter with a growth mindset. You're intellectually curious, have strong business acumen, and actively seek opportunities to build relevant skills and knowledge.
- A detail-oriented problem-solver. You can independently gather information, solve problems efficiently, and deliver results with a "get-it-done" attitude.
- Someone who thrives in a fast-paced environment. You're energized by an entrepreneurial spirit, capable of working quickly, and excited to contribute to a growing company.
- A collaborative team player. You focus on business success and are motivated by team accomplishment rather than personal agenda.
- Highly organized and results-driven. Strong prioritization skills and a dedicated work ethic are essential for you.

Equal Opportunity Statement
At Armada, we are committed to fostering a work environment where everyone is given equal opportunities to thrive. As an equal opportunity employer, we strictly prohibit discrimination or harassment based on race, color, gender, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other characteristic protected by law. This policy applies to all employment decisions, including hiring, promotions, and compensation. Our hiring is guided by qualifications, merit, and the business needs at the time.
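To ground the ETL responsibility above, here is a minimal, self-contained extract-transform-load sketch in Python. It is illustrative only: the file, table, and column names are assumptions, and a production pipeline would load a warehouse such as Snowflake, Redshift, or BigQuery rather than SQLite.

```python
# A toy ETL pipeline: extract from CSV, transform with pandas, load to SQLite.
# All names here (fact_events, 'id') are hypothetical.
import sqlite3

import pandas as pd

def run_etl(source_csv: str, db_path: str) -> int:
    # Extract: read raw records from the source file.
    raw = pd.read_csv(source_csv)
    # Transform: normalize column names and drop rows missing a key.
    clean = raw.rename(columns=str.lower).dropna(subset=["id"])
    # Load: write the cleaned data to the reporting store.
    with sqlite3.connect(db_path) as conn:
        clean.to_sql("fact_events", conn, if_exists="replace", index=False)
    return len(clean)
```

The same extract/transform/load shape carries over unchanged when the source becomes an API and the target a cloud warehouse.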

Posted 1 week ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Description
Want to join Earth's most customer-centric company? Do you like to dive deep to understand problems? Are you someone who likes to challenge the status quo? Do you strive to excel at the goals assigned to you? If yes, we have opportunities for you. Global Operations – Artificial Intelligence (GO-AI) at Amazon is looking to hire candidates who can excel in a fast-paced, dynamic environment. Are you somebody who likes to use and analyze big data to drive business decisions? Do you enjoy converting data into insights that will be used to enhance customer decisions worldwide for business leaders? Do you want to be part of the data team which measures the pulse of innovative machine-vision-based projects? If your answer is yes, join our team. GO-AI is looking for a motivated individual with strong skills and experience in resource utilization planning, process optimization, and execution of scalable and robust operational mechanisms to join the GO-AI Ops DnA team. In this position you will be responsible for supporting our sites to build solutions for the rapidly expanding GO-AI team. The role requires the ability to work with a variety of key stakeholders across job functions and multiple sites. We are looking for an entrepreneurial and analytical program manager who is passionate about their work, understands how to manage service levels across multiple skills/programs, and is willing to move fast and experiment often.

Key job responsibilities
- Maintain and refine straightforward ETL; write secure, stable, testable, maintainable code with minimal defects; and automate manual processes.
- Use one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, Power BI) and, as needed, statistical methods (e.g., t-test, chi-squared) to deliver actionable insights to stakeholders.
- Build and own small to mid-size BI solutions with high accuracy and on-time delivery, using data sets, queries, reports, dashboards, analyses, or components of larger solutions to answer straightforward business questions with data, incorporating business intelligence best practices, data management fundamentals, and analysis principles.
- Develop a good understanding of the relevant data lineage, including sources of data, how metrics are aggregated, and how the resulting business intelligence is consumed, interpreted, and acted upon by the business, so that the end product enables effective, data-driven business decisions.
- Take responsibility for the code, queries, reports, and analyses that are inherited or produced, and have analyses and code reviewed periodically.
- Partner effectively with peer BIEs and others on your team to troubleshoot, research root causes, and propose solutions, either taking ownership of their resolution or ensuring a clear hand-off to the right owner.

About The Team
The Global Operations – Artificial Intelligence (GO-AI) team is an initiative which remotely handles exceptions in Amazon Robotic Fulfillment Centers globally. GO-AI seeks to complement automated vision-based decision-making technologies by providing remote human support for the subset of tasks which require higher cognitive ability and cannot be processed through automated decision making with high confidence. This team provides end-to-end solutions through inbuilt competencies of Operations and strong central specialized teams to deliver programs at Amazon scale. It operates multiple programs, including Nike IDS, Proteus, Sparrow, and other new initiatives, in partnership with global technology and operations teams.

Basic Qualifications
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
- Experience with one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g., t-test, chi-squared)
- Experience with a scripting language (e.g., Python, Java, or R)
- Experience applying basic statistical methods (e.g., regression) to difficult business problems

Preferred Qualifications
- Master's degree or advanced technical degree
- Experience with statistical analysis and correlation analysis
- Knowledge of how to improve code quality and optimize BI processes (e.g., speed, cost, reliability)
- Excellence in technical communication with peers, partners, and non-technical cohorts

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 16 SEZ - H83
Job ID: A3041495
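The basic qualifications above name t-tests and chi-squared tests. A minimal illustration of both with scipy, on made-up sample data:

```python
# Purely illustrative numbers; scipy.stats supplies both tests.
from scipy import stats

# Two hypothetical samples, e.g. task-handling times from two workflows.
before = [12.1, 11.8, 12.6, 13.0, 12.2, 11.9]
after = [11.2, 11.5, 10.9, 11.8, 11.1, 11.4]

t_stat, p_value = stats.ttest_ind(before, after)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# Chi-squared test of independence on a hypothetical 2x2 outcome table.
observed = [[45, 55], [62, 38]]
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}")
```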

Posted 1 week ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Flutter Entertainment
Flutter Entertainment is the world's largest sports betting and iGaming operator, with 13.9 million average monthly players worldwide and annual revenue of $14Bn in 2024. We have a portfolio of iconic brands, including Paddy Power, Betfair, FanDuel, PokerStars, Junglee Games and Sportsbet. Flutter Entertainment is listed on both the New York Stock Exchange (NYSE) and the London Stock Exchange (LSE). In 2024, we were recognized in TIME's 100 Most Influential Companies under the 'Pioneers' category, a testament to our innovation and impact. Our ambition is to transform global gaming and betting to deliver long-term growth and a positive, sustainable future for our sector. Together, we are Changing the Game! Working at Flutter is a chance to work with a growing portfolio of brands across a range of opportunities. We will support you every step of the way to help you grow. Just like our brands, we ensure our people have everything they need to succeed.

Flutter Entertainment India
Our Hyderabad office, located in one of India's premier technology parks, is the Global Capability Center for Flutter Entertainment. A center of expertise and innovation, this hub is now home to over 1,000 talented colleagues working across Customer Service Operations, Data and Technology, Finance Operations, HR Operations, Procurement Operations, and other key enabling functions. We are committed to crafting impactful solutions for all our brands and divisions to power Flutter's incredible growth and global impact. With the scale of a leader and the mindset of a challenger, we're dedicated to creating a brighter future for our customers, colleagues, and communities.

Overview Of The Role
We are seeking a technically skilled Regulatory Data Analyst to join our dynamic Data & Analytics (ODA) department in Hyderabad, India. As a globally recognized and highly regulated brand, we are deeply committed to delivering accurate reporting and critical business insights that push the boundaries of our understanding through innovation. You'll be joining a team of exceptional data professionals with a strong command of analytical tools and statistical techniques. You'll help shape the future of online gaming by leveraging robust technical capabilities to ensure regulatory compliance, support risk management, and strengthen business operations through advanced data solutions. You will work with large, complex datasets: interrogating and manipulating data using advanced SQL and Python, building scalable dashboards, and developing automation pipelines. Beyond in-depth analysis, you'll create regulatory reports and visualizations for diverse audiences, and proactively identify areas to enhance efficiency and compliance through technical solutions.

KEY RESPONSIBILITIES
- Query data from various database environments (e.g., DB2, MS SQL Server, Azure) using advanced SQL techniques
- Perform data processing and statistical analysis using tools such as Python, R, and Excel
- Translate regulatory data requirements into structured analysis using robust scripting and automation
- Design and build interactive dashboards and reporting pipelines using Power BI, Tableau, or MicroStrategy to highlight key metrics and regulatory KPIs
- Develop compelling data visualizations and executive summaries to communicate complex insights clearly to technical and non-technical stakeholders alike
- Collaborate with global business stakeholders to interpret jurisdiction-specific regulations and provide technically sound, data-driven insights
- Recommend enhancements to regulatory processes through data modelling, root cause analysis, and applied statistical techniques (e.g., regression, hypothesis testing)
- Ensure data quality, governance, and lineage in all deliverables, applying technical rigor and precision

To Excel In This Role, You Will Need
- 2 to 4 years of relevant work experience as a Data Analyst or in a role focused on regulatory or compliance-based analytics
- Bachelor's degree in a quantitative or technical discipline (e.g., Mathematics, Statistics, Economics, or Computer Science)
- Proficiency in SQL with the ability to write and optimize complex queries from scratch
- Strong programming skills in Python (or R) for automation, data wrangling, and statistical analysis
- Experience using data visualization and BI tools (MicroStrategy, Tableau, Power BI) to create dynamic dashboards and visual narratives
- Knowledge of data warehousing environments such as Microsoft SQL Server or Amazon Redshift
- Ability to apply statistical methods such as time series analysis, regression, and causal inference to solve regulatory and business problems

Benefits We Offer
- Access to Learnerbly, Udemy, and a Self-Development Fund for upskilling.
- Career growth through Internal Mobility Programs.
- Comprehensive Health Insurance for you and dependents.
- Well-Being Fund and 24/7 Assistance Program for holistic wellness.
- Hybrid Model: 2 office days/week with flexible leave policies, including maternity, paternity, and sabbaticals.
- Free Meals, Cab Allowance, and a Home Office Setup Allowance.
- Employer PF Contribution, gratuity, Personal Accident & Life Insurance.
- Sharesave Plan to purchase discounted company shares.
- Volunteering Leave and Team Events to build connections.
- Recognition through the Kudos Platform and Referral Rewards.

Why Choose Us
Flutter is an equal-opportunity employer and values the unique perspectives and experiences that everyone brings. Our message to colleagues and stakeholders is clear: everyone is welcome, and every voice matters. We have ambitious growth plans and goals for the future. Here's an opportunity for you to play a pivotal role in shaping the future of Flutter Entertainment India.
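Among the statistical techniques this role lists is regression with hypothesis testing. A short, purely illustrative sketch using statsmodels on synthetic data; the variables are invented, not Flutter's:

```python
# Synthetic data; statsmodels OLS reports coefficient estimates and p-values.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=42)
stake = rng.uniform(10, 100, size=200)              # hypothetical predictor
payout = 0.9 * stake + rng.normal(0, 5, size=200)   # hypothetical response

X = sm.add_constant(stake)        # add an intercept column
model = sm.OLS(payout, X).fit()   # ordinary least squares fit
print(model.params)               # [intercept, slope]
print(model.pvalues)              # hypothesis test on each coefficient
```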

Posted 1 week ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

AWS Data Engineer

Responsibilities:
- Design and optimize scalable, cloud-native data pipelines on AWS.
- Build and manage ETL workflows using SQL, Python, and modern orchestration tools.
- Work with both structured and semi-structured data formats, including JSON, Parquet, and Avro.
- Collaborate with BI and analytics teams to deliver data models that power dashboards and reporting tools.
- Ensure pipeline performance, cost-efficiency, and data security within a governed AWS environment.
- Support query performance tuning, data transformations, and automation.
- Participate in real-time data streaming and event-driven architectures using AWS-native services.

What You'll Use:
- Strong knowledge of AWS Glue, Athena, Lambda, Step Functions
- Knowledge of Redshift, S3, EC2, EMR, Kinesis
- Hands-on experience in SQL (query tuning and scripting)
- Hands-on experience in Python, DBT, Airflow

Add-ons:
- CI/CD for data pipelines using CodePipeline / CodeBuild
- Experience with real-time/streaming architectures
- Strong problem-solving and cloud architecture skills
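Since the listing centers on AWS Glue, a skeleton of a Glue PySpark job may help orient readers. This follows Glue's standard job boilerplate; the S3 paths are placeholders, and the script only runs inside the Glue runtime:

```python
# Standard AWS Glue job skeleton: read JSON from S3, write Parquet back out.
# Bucket names are placeholders; requires the Glue runtime (awsglue library).
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read semi-structured JSON events from the raw zone.
events = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-raw-bucket/events/"]},
    format="json",
)

# Write them back as columnar Parquet for downstream Athena/Redshift queries.
glue_context.write_dynamic_frame.from_options(
    frame=events,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/events/"},
    format="parquet",
)
job.commit()
```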

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Maharashtra

On-site

As a Senior Digital Solutions Consultant at Worley, you will have the opportunity to work on the world's most complex projects and be part of a collaborative and inclusive team. Your role will be varied and challenging, allowing you to contribute to innovative solutions that drive sustainability in projects. Worley, a global professional services company specializing in energy, chemicals, and resources, is at the forefront of bridging the transition to more sustainable energy sources. By partnering with customers, Worley delivers integrated data-centric solutions throughout the lifecycle of assets, from consulting and engineering to decommissioning and remediation.

As part of your role, you will be responsible for developing and implementing data pipelines to ingest and collect data from various sources into a centralized data platform. You will optimize and troubleshoot AWS Glue jobs for performance and reliability, using Python and PySpark to handle large data volumes efficiently. Collaboration with data architects to design and implement data models that meet business requirements will be essential. Additionally, you will create and maintain ETL processes using Airflow, Python, and PySpark to move and transform data between systems. Monitoring data pipeline performance, managing and optimizing databases, and proficiency in Infrastructure as Code tools will also be key aspects of your responsibilities. Your expertise in event-driven, batch-based, and API-led data integrations will be valuable, along with proficiency in CI/CD pipelines.

To excel in this role, you should have over 5 years of experience developing integration projects in an agile or waterfall-based project environment. Proficiency in Python, PySpark, and SQL programming, plus hands-on experience with AWS services like Glue, Airflow, DynamoDB, Redshift, and S3 buckets, will be required. Familiarity with CI/CD pipelines, web service development, and a degree in Computer Science or related fields are desirable qualifications.

At Worley, we are committed to creating a values-inspired culture that fosters innovation, belonging, and connection. We believe in reskilling our workforce and supporting their transition to become experts in low carbon energy infrastructure and technology. Join us to unlock your potential, explore diverse opportunities, and be part of delivering sustainable change.
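Because the role pairs Airflow with Python/PySpark ETL, a minimal Airflow DAG sketch may be useful. The DAG id, schedule, and task body are illustrative assumptions (Airflow 2.x API):

```python
# A one-task daily DAG; replace the callable with real ingestion logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load() -> None:
    # Placeholder for the real step, e.g. S3 -> transform -> Redshift.
    print("moving data from source to target")

with DAG(
    dag_id="example_ingest_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # 'schedule_interval' on older Airflow
    catchup=False,
) as dag:
    ingest = PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```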

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What You Will Do
Let's do this. Let's change the world. In this vital role you will lead the engagement model between Amgen's Technology organization and Global Commercial Operations.
- Collaborate with R&D Business SMEs, Data Engineers, Data Scientists, and Product Managers to lead business analysis activities, ensuring alignment with engineering and product goals on the Data & AI Product Teams
- Become an R&D domain authority in Data & AI technology capabilities by researching, deploying, and sustaining features built according to Amgen's Quality System
- Lead the voice-of-the-customer assessment to define business processes and product needs
- Work with Product Managers and customers to define scope and value for new developments
- Collaborate with Engineering and Product Management to prioritize release scopes and refine the Product backlog
- Ensure non-functional requirements are included and prioritized in the Product and Release Backlogs
- Facilitate the breakdown of Epics into Features and sprint-sized User Stories, and participate in backlog reviews with the development team
- Clearly express features in User Stories/requirements so all team members and partners understand how they fit into the product backlog
- Ensure Acceptance Criteria and Definition of Done are well-defined
- Work closely with Business SMEs, Data Scientists, and ML Engineers to understand data product requirements, KPIs, etc.
- Analyze source systems and create the STTM (source-to-target mapping) documents
- Develop and implement effective product demonstrations for internal and external partners
- Maintain accurate documentation of configurations, processes, and changes
- Understand end-to-end data pipeline design and dataflow
- Apply knowledge of data structures to diagnose data issues for resolution by the data engineering team

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients. We are seeking a highly skilled and experienced Principal IS Business Analyst with a passion for innovation and a collaborative working style that partners effectively with business and technology leaders, with these qualifications.

Basic Qualifications:
- Master's degree with 8-12 years of experience in R&D Informatics, OR Bachelor's degree with 10-14 years of experience in R&D Informatics, OR Diploma with 14-18 years of experience in R&D Informatics
- Mandatory work experience acting as a business analyst in DWH, data product building, and BI & Analytics applications
- Experience analyzing the requirements of BI, AI & Analytics applications and working with Data Source SMEs and Data Owners to identify data sources and data flows
- Experience with writing user requirements and acceptance criteria
- Affinity for working in a DevOps environment and an Agile mindset
- Ability to work in a team environment, effectively interacting with others
- Ability to meet deadlines and schedules and be accountable

Preferred Qualifications:
Must-Have Skills
- Excellent problem-solving skills and a passion for solving complex challenges in AI-driven technologies
- Experience with Agile software development methodologies (Scrum)
- Superb communication skills and the ability to work with senior leadership with confidence and clarity
- Experience writing user requirements and acceptance criteria in agile project management systems such as JIRA
- Experience managing product features for PI planning and developing product roadmaps and user journeys

Good-to-Have Skills
- Demonstrated expertise in data and analytics and related technology concepts
- Understanding of data and analytics software systems strategy, governance, and infrastructure
- Familiarity with low-code, no-code test automation software
- Technical thought leadership
- Able to communicate technical or complex subject matter in business terms
- Jira Align experience
- Experience with DevOps, Continuous Integration, and Continuous Delivery methodology

Soft Skills
- Able to work under minimal supervision
- Excellent analytical and gap/fit assessment skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

Technical Skills
- Experience with cloud-based data technologies (e.g., Databricks, Redshift, S3 buckets) and AWS or similar cloud-based platforms
- Experience with design patterns, data structures, and test-driven development
- Knowledge of NLP techniques for text analysis and sentiment analysis

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be responsible for fetching and transforming data from various systems, conducting in-depth analyses to identify gaps, opportunities, and insights, and providing recommendations that support strategic business decisions. Your key responsibilities will include data extraction and transformation, data analysis and insight generation, visualization and reporting, collaboration with cross-functional teams, and building strong working relationships with external stakeholders. You will report to the VP of Business Growth and work closely with clients.

To excel in this role, you should have proficiency in SQL for data querying and Python for data manipulation and transformation. Experience with data engineering tools such as Spark and Kafka, as well as orchestration tools like Apache NiFi and Apache Airflow, will be essential for ETL processes and workflow automation. Expertise in data visualization tools such as Tableau and Power BI, along with strong analytical skills including statistical techniques, will be crucial.

In addition to technical skills, you should possess soft skills such as flexibility, excellent communication, business acumen, and the ability to work independently as well as within a team. Your academic qualifications should include a Bachelor's or Master's degree in Applied Mathematics, Management Science, Data Science, Statistics, Econometrics, or Engineering. Extensive experience in Data Lake architecture, building data pipelines using AWS services, proficiency in Python and SQL, and experience in the banking domain will be advantageous. Overall, you should demonstrate high motivation, a good work ethic, maturity, personal initiative, and strong oral and written communication skills to succeed in this role.
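Given the Spark-centred ETL stack named above, here is a small, purely illustrative PySpark aggregation; the S3 path and column names are assumptions, not from the posting:

```python
# Hypothetical transaction summary: total and count of amounts per customer.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gap-analysis").getOrCreate()

txns = spark.read.parquet("s3://example-bucket/transactions/")  # placeholder
summary = (
    txns.groupBy("customer_id")
        .agg(
            F.sum("amount").alias("total_amount"),
            F.count("*").alias("txn_count"),
        )
        .orderBy(F.desc("total_amount"))
)
summary.show(10)
```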

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

As a Consultant, you will work closely with internal and external stakeholders and deliver high-quality analytics solutions to the real-world business problems of a Pharma commercial organization. You will bring deep Pharma / Healthcare domain expertise and use cloud data tools to help solve complex problems.

Key Responsibilities
- Collaborate with internal teams and client stakeholders to deliver Business Intelligence solutions that support key decision-making for the Commercial function of Pharma organizations.
- Leverage deep domain knowledge of pharmaceutical sales, claims, and secondary data to structure and optimize BI reporting frameworks.
- Develop, maintain, and optimize interactive dashboards and visualizations using Tableau (primary), along with other BI tools like Power BI or Qlik, to enable data-driven insights.
- Translate business requirements into effective data visualizations and actionable reporting solutions tailored to end-user needs.
- Write complex SQL queries and work with large datasets housed in Data Lakes or Data Warehouses to extract, transform, and present data efficiently.
- Conduct data validation and QA checks, and troubleshoot stakeholder-reported issues by performing root cause analysis and implementing solutions.
- Collaborate with data engineering teams to define data models and KPIs and to automate the data pipelines feeding BI tools.
- Manage ad-hoc and recurring reporting needs, ensuring accuracy, timeliness, and consistency of data outputs.
- Drive process improvements in dashboard development, data governance, and reporting workflows.
- Document dashboard specifications and data definitions, and maintain data dictionaries.
- Stay up to date with industry trends in BI tools, visualization best practices, and emerging data sources in the healthcare and pharma space.
- Prioritize and manage multiple BI project requests in a fast-paced, dynamic environment.

Qualifications
- 2–4 years of experience in BI development, reporting, or data visualization, preferably in the pharmaceutical or life sciences domain.
- Strong hands-on experience building dashboards using Tableau (preferred), Power BI, or Qlik.
- Advanced SQL skills for querying and transforming data across complex data models.
- Familiarity with pharma data such as Sales, Claims, and secondary market data is a strong plus.
- Experience in data profiling, cleansing, and standardization techniques.
- Ability to translate business questions into effective visual analytics.
- Strong communication skills to interact with stakeholders and present data insights clearly.
- Self-driven, detail-oriented, and comfortable working with minimal supervision in a team-oriented environment.
- Exposure to data warehousing concepts and cloud data platforms (e.g., Snowflake, Redshift, or BigQuery) is an advantage.

Education
Bachelor's or Master's Degree (computer science, engineering, or other technical disciplines)
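One responsibility above is data validation and QA checks before reporting. A hedged sketch of what such checks could look like in Python; the column names and rules are invented for illustration:

```python
# Hypothetical QA gate for a sales extract; column names are assumptions.
import pandas as pd

def validate_sales_extract(df: pd.DataFrame) -> list[str]:
    """Return human-readable QA failures; an empty list means a clean extract."""
    issues = []
    if df["sale_date"].isna().any():
        issues.append("null sale_date values found")
    if (df["units"] < 0).any():
        issues.append("negative unit counts found")
    duplicate_rows = int(df.duplicated(subset=["claim_id"]).sum())
    if duplicate_rows:
        issues.append(f"{duplicate_rows} duplicate claim_id rows")
    return issues
```

Checks like these typically run before a dashboard refresh so that bad loads are caught upstream of stakeholders.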

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Chryselys Overview
Chryselys is a Pharma Analytics & Business consulting company that delivers data-driven insights leveraging AI-powered, cloud-native platforms to achieve high-impact transformations. Chryselys was founded in the heart of Silicon Valley in November 2019 with the vision of delivering high-value business consulting, solutions, and services to clients in the healthcare and life sciences space. We are trusted partners for organizations that seek to achieve high-impact transformations and reach their higher-purpose mission. Chryselys India supports our global clients in achieving high-impact transformations and reaching their higher-purpose mission. Our India team focuses on development of the Commercial Insights platform and supports client projects.

Role Summary
As a Consultant, you will work closely with internal and external stakeholders and deliver high-quality analytics solutions to the real-world business problems of a Pharma commercial organization. You will bring deep Pharma / Healthcare domain expertise and use cloud data tools to help solve complex problems.

Key Responsibilities
- Collaborate with internal teams and client stakeholders to deliver Business Intelligence solutions that support key decision-making for the Commercial function of Pharma organizations.
- Leverage deep domain knowledge of pharmaceutical sales, claims, and secondary data to structure and optimize BI reporting frameworks.
- Develop, maintain, and optimize interactive dashboards and visualizations using Tableau (primary), along with other BI tools like Power BI or Qlik, to enable data-driven insights.
- Translate business requirements into effective data visualizations and actionable reporting solutions tailored to end-user needs.
- Write complex SQL queries and work with large datasets housed in Data Lakes or Data Warehouses to extract, transform, and present data efficiently.
- Conduct data validation and QA checks, and troubleshoot stakeholder-reported issues by performing root cause analysis and implementing solutions.
- Collaborate with data engineering teams to define data models and KPIs and to automate the data pipelines feeding BI tools.
- Manage ad-hoc and recurring reporting needs, ensuring accuracy, timeliness, and consistency of data outputs.
- Drive process improvements in dashboard development, data governance, and reporting workflows.
- Document dashboard specifications and data definitions, and maintain data dictionaries.
- Stay up to date with industry trends in BI tools, visualization best practices, and emerging data sources in the healthcare and pharma space.
- Prioritize and manage multiple BI project requests in a fast-paced, dynamic environment.

Qualifications
- 2–4 years of experience in BI development, reporting, or data visualization, preferably in the pharmaceutical or life sciences domain.
- Strong hands-on experience building dashboards using Tableau (preferred), Power BI, and Qlik.
- Advanced SQL skills for querying and transforming data across complex data models.
- Familiarity with pharma data such as Sales, Claims, and secondary market data is a strong plus.
- Experience in data profiling, cleansing, and standardization techniques.
- Ability to translate business questions into effective visual analytics.
- Strong communication skills to interact with stakeholders and present data insights clearly.
- Self-driven, detail-oriented, and comfortable working with minimal supervision in a team-oriented environment.
- Exposure to data warehousing concepts and cloud data platforms (e.g., Snowflake, Redshift, or BigQuery) is an advantage.

Education
Bachelor's or Master's Degree (computer science, engineering, or other technical disciplines)

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

This is a full-time work-from-home opportunity for a star BI data engineer from India. IDT (www.idt.net) is a communications and financial services company founded in 1990 and headquartered in New Jersey, US. Today it is an industry leader in prepaid communication and payment services and one of the world's largest international voice carriers. We are listed on the NYSE, employ over 1500 people across 20+ countries, and have revenues in excess of $1.5 billion.

We are looking for a mid-level Business Intelligence Engineer to join our global team. If you are highly intelligent, motivated, ambitious, ready to learn and make a direct impact, this is your opportunity! The individual in this role will perform data analysis, ELT/ETL design, and support functions to deliver on strategic initiatives to meet organizational goals across many lines of business. The interview process will be conducted in English.

Responsibilities:
- Develop, document, and test ELT/ETL solutions using industry-standard tools (Snowflake, Denodo Data Virtualization, Looker)
- Recommend process improvements to increase efficiency and reliability in ELT/ETL development
- Extract data from multiple sources, integrate disparate data into a common data model, and integrate data into a target database, application, or file using efficient ELT/ETL processes
- Collaborate with Quality Assurance resources to debug ELT/ETL development and ensure the timely delivery of products
- Be willing to explore and learn new technologies and concepts to provide the right kind of solution
- Be target- and result-oriented with a strong end-user focus
- Communicate effectively, orally and in writing, with the BI team and user community

Requirements:
- 5+ years of experience in ETL/ELT design and development, integrating data from heterogeneous OLTP systems and API solutions, and building scalable data warehouse solutions to support business intelligence and analytics
- Demonstrated experience utilizing Python for data engineering tasks, including transformation, advanced data manipulation, and large-scale data processing
- Experience in data analysis and root cause analysis, with proven problem-solving and analytical thinking capabilities
- Experience designing complex data pipelines extracting data from RDBMS, JSON, API, and flat-file sources
- Demonstrated expertise in SQL and PL/SQL programming, with advanced mastery of Business Intelligence and data warehouse methodologies, along with hands-on experience in one or more relational database systems and cloud-based database services such as Oracle, MySQL, Amazon RDS, Snowflake, Amazon Redshift, etc.
- Proven ability to analyze and optimize poorly performing queries and ETL/ELT mappings, providing actionable recommendations for performance tuning
- Understanding of software engineering principles, skills working on Unix/Linux/Windows operating systems, and experience with Agile methodologies
- Proficiency in version control systems, with experience in managing code repositories, branching, merging, and collaborating within a distributed development environment
- Excellent English communication skills
- Interest in business operations and a comprehensive understanding of how robust BI systems drive corporate profitability by enabling data-driven decision-making and strategic insights

Pluses:
- Experience developing ETL/ELT processes within Snowflake and implementing complex data transformations using built-in functions and SQL capabilities
- Experience using Pentaho Data Integration (Kettle) / Ab Initio ETL tools for designing, developing, and optimizing data integration workflows
- Experience designing and implementing cloud-based ETL solutions using Azure Data Factory, DBT, AWS Glue, Lambda, and open-source tools
- Experience with reporting/visualization tools (e.g., Looker) and job scheduler software
- Experience in Telecom, eCommerce, or International Mobile Top-up

Education: BS/MS in Computer Science, Information Systems, or a related technical field, or equivalent industry expertise.
Preferred Certifications: AWS Solution Architect, AWS Cloud Data Engineer, Snowflake SnowPro Core.

Please attach your CV in English. The interview process will be conducted in English. Only accepting applicants from INDIA.
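The posting's core stack is Snowflake plus Python. As a hedged sketch (object names and credentials are placeholders, not IDT's), an ELT upsert step might run a Snowflake MERGE from a staging table through the official Python connector:

```python
# Illustrative Snowflake MERGE via the snowflake-connector-python package.
import snowflake.connector

MERGE_SQL = """
MERGE INTO analytics.fct_orders AS t
USING staging.orders_raw AS s
    ON t.order_id = s.order_id
WHEN MATCHED THEN UPDATE SET t.amount = s.amount
WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount);
"""

conn = snowflake.connector.connect(
    account="example_account",   # placeholder credentials
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="PROD",
)
try:
    conn.cursor().execute(MERGE_SQL)
finally:
    conn.close()
```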

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.

At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems—the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

What you'll do:
- Collaborate with ZS internal teams and client teams to shape and implement high-quality technology solutions that address critical business problems
- Develop a deep understanding of the business problems and effectively translate them into technical designs
- Lead modules and workstreams within projects while participating in hands-on implementation
- Work with technical architects to validate the technical design and implementation approach
- Apply appropriate development methodologies and best practices to ensure an exceptional client and project team experience
- Support the project lead in project delivery, including project planning, people management, staffing, and risk mitigation
- Manage a diverse team with various skill sets while providing mentorship and coaching to junior members
- Lead task planning and distribution among team members for timely completion of projects with high-quality results
- Guide project deliverables such as business case development, solution vision and design, user requirements, prototypes, technical architecture, test cases, deployment plans, and operations strategy

What you'll bring:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent work experience
- Experience working in the AWS cloud platform
- Data engineering expertise in developing big data and data warehouse platforms
- Experience working with structured and semi-structured data
- Expertise in developing big data solutions and ETL/ELT pipelines for data ingestion, data transformation, and optimization techniques
- Experience working directly with technical and business teams
- Ability to create technical documentation
- Excellent problem-solving and analytical skills
- Strong communication and collaboration abilities
- AWS (big data services): S3, Glue, Athena, EMR
- Programming: Python, Spark, SQL, MuleSoft, Talend, dbt
- Data warehouse: ETL, Redshift / Snowflake

Additional Skills:
- Strong communication skills, both verbal and written, with the ability to structure thoughts logically during discussions and presentations
- Capability to simplify complex concepts into easily understandable frameworks and presentations
- Proficiency in working within a virtual global team environment, contributing to the timely delivery of multiple projects
- Travel to other offices as required to collaborate with clients and internal project teams

Perks & Benefits:
ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel:
Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying?
At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application:
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE.

Find Out More At: www.zs.com
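To illustrate the AWS big data services listed above, here is a small boto3 sketch that submits an Athena query and polls for the result. The database, table, and output bucket are placeholders, not anything from the posting:

```python
# Submit an Athena query, wait for a terminal state, then read the rows.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

qid = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "example_db"},           # placeholder
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

while True:  # poll until the query finishes one way or another
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(rows[:3])
```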

Posted 1 week ago

Apply

10.0 years

0 Lacs

Delhi, India

Remote

JOB_POSTING-3-72796-3

Job Description

Role Title: VP, Data Engineering Tech Lead (L12)

Company Overview: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #5 among India's Best Companies to Work for 2023, #21 on LinkedIn's Top Companies in India list, and received Top 25 BFSI recognition from Great Place To Work India. We have been ranked Top 5 among India's Best Workplaces in Diversity, Equity, and Inclusion, and Top 10 among India's Best Workplaces for Women in 2022. We offer 100% Work from Home flexibility for all our Functional employees and provide some of the best-in-class Employee Benefits and Programs catering to work-life balance and overall well-being. In addition, we have Regional Engagement Hubs across India and a co-working space in Bangalore.

Organizational Overview: This role will be part of the Data Architecture & Analytics group within the CTO organization. The data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading (ETL); collaborating with cross-functional teams to integrate new data sources and ensure data quality and consistency; and building and maintaining data models that facilitate data access and analysis by Data Scientists and Analysts. The team is also responsible for the SYF public cloud platform and services: it governs the health, performance, capacity, and costs of resources, ensures adherence to service levels, and builds well-defined processes for cloud application development and service enablement.

Role Summary/Purpose: We are seeking a highly skilled Cloud Technical Lead with expertise in Data Engineering who will work in multi-disciplinary environments harnessing data to provide valuable impact for our clients. The Cloud Technical Lead will work closely with technology and functional teams to drive the migration of legacy on-premises data systems/platforms to cloud-based solutions. The successful candidate will need to develop intimate knowledge of SYF's key data domains (originations, loan activity, collections, etc.) and maintain a holistic view across SYF functions to minimize redundancies and optimize the analytics environment.

Key Responsibilities:
Manage the end-to-end project lifecycle, including planning, execution, and delivery of cloud-based data engineering projects.
Provide guidance on suitable options, and design and create data pipelines for analytical solutions across data lakes, data warehouses and cloud implementations.
Architect and design robust data pipelines and ETL processes leveraging Ab Initio and Amazon Redshift.
Ensure data integration, transformation, and storage processes are optimized for scalability and performance in cloud environments.
Ensure data security, governance, and compliance in the cloud infrastructure.
Provide leadership and guidance to data engineering teams, ensuring best practices are followed.
Ensure timely delivery of high-quality solutions in an Agile environment.
Required Skills/Knowledge:
Minimum 10+ years of experience with a Bachelor's degree in Computer Science or a similar technical field of study, or in lieu of a degree 12+ years of relevant experience.
Minimum 10+ years of experience managing large-scale data platform (data warehouse/data lake/cloud) environments.
Minimum 10+ years of financial services experience.
Minimum 6+ years of experience working with data warehouses/data lakes/cloud.
6+ years of hands-on programming experience in ETL tools; Ab Initio or Informatica highly preferred. Must be able to read and reverse-engineer the logic in Ab Initio graphs.
Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc.
Working knowledge of Hive, Spark, Kafka and other data lake technologies.
Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution.
Experience analyzing system requirements and implementing migration methods for existing data.
Ability to develop and maintain strong collaborative relationships at all levels across IT and the business.
Excellent written and oral communication skills, along with a strong ability to lead and influence others.
Experience working iteratively in a fast-paced agile environment.
Demonstrated ability to drive change and work effectively across business and geographical boundaries.
Expertise in evaluating technology and solution engineering, with a strong focus on architecture and deployment of new technology.
Superior decision-making, client relationship, and vendor management skills.

Desired Skills/Knowledge:
Prior work experience in a credit card/banking/fintech company.
Experience dealing with sensitive data in a highly regulated environment.
Demonstrated implementation of complex and innovative solutions.
Agile experience using JIRA or similar Agile tools.

Eligibility Criteria:
Bachelor's degree in Computer Science or a similar technical field of study (Master's degree preferred).
Minimum 12+ years of experience managing large-scale data platform (data warehouse/data lake/cloud) environments.
Minimum 12+ years of financial services experience.
Minimum 8+ years of experience working with Oracle data warehouses/data lakes/cloud.
8+ years of programming experience in ETL tools; Ab Initio or Informatica highly preferred. Must be able to read and reverse-engineer the logic in Ab Initio graphs.
Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc.
Rigorous data analysis through SQL in Oracle and various Hadoop technologies.
Involvement in large-scale data analytics migration from on-premises to a public cloud.
Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution.
Experience analyzing system requirements and implementing migration methods for existing data.
Excellent written and oral communication skills, along with a strong ability to lead and influence others.
Experience working iteratively in a fast-paced agile environment.

Work Timings: 3:00 PM IST to 12:00 AM IST. (This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM and 11:30 AM Eastern Time; timings are anchored to US Eastern hours and will adjust twice a year locally. This window is for meetings with India and US teams. The remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs.
Please discuss this with the hiring manager for more details.)

For Internal Applicants:
Understand the criteria and mandatory skills required for the role before applying.
Inform your manager and HRM before applying for any role on Workday.
Ensure that your professional profile is updated (fields such as education, prior experience, other skills); uploading your updated resume (Word or PDF format) is mandatory.
You must not be on any corrective action plan (First Formal/Final Formal, PIP).
Only L10+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply.

Level/Grade: 12
Job Family Group: Information Technology
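As an illustration of the Redshift loading work this role centers on, here is a minimal sketch of bulk-loading staged S3 files with Redshift's COPY command via the open-source redshift_connector driver. The cluster endpoint, credentials, table name, and IAM role ARN are all hypothetical placeholders.

```python
# Sketch: bulk-load staged Parquet from S3 into Redshift via COPY.
# Endpoint, credentials, table, and IAM role ARN are hypothetical.
import redshift_connector

conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="analytics",
    user="etl_user",
    password="...",  # in practice, fetch from a secrets manager
)
conn.autocommit = True

cur = conn.cursor()
# COPY parallelizes the load across cluster slices; it is far faster
# than row-by-row INSERTs for warehouse-scale ingestion.
cur.execute("""
    COPY staging.orders
    FROM 's3://example-curated-zone/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
    FORMAT AS PARQUET;
""")
cur.close()
conn.close()
```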

Posted 1 week ago

Apply

10.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

JOB_POSTING-3-72796-2

Job Description

Role Title: VP, Data Engineering Tech Lead (L12)

Company Overview: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #5 among India's Best Companies to Work for 2023, #21 on LinkedIn's Top Companies in India list, and received Top 25 BFSI recognition from Great Place To Work India. We have been ranked Top 5 among India's Best Workplaces in Diversity, Equity, and Inclusion, and Top 10 among India's Best Workplaces for Women in 2022. We offer 100% Work from Home flexibility for all our Functional employees and provide some of the best-in-class Employee Benefits and Programs catering to work-life balance and overall well-being. In addition, we have Regional Engagement Hubs across India and a co-working space in Bangalore.

Organizational Overview: This role will be part of the Data Architecture & Analytics group within the CTO organization. The data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading (ETL); collaborating with cross-functional teams to integrate new data sources and ensure data quality and consistency; and building and maintaining data models that facilitate data access and analysis by Data Scientists and Analysts. The team is also responsible for the SYF public cloud platform and services: it governs the health, performance, capacity, and costs of resources, ensures adherence to service levels, and builds well-defined processes for cloud application development and service enablement.

Role Summary/Purpose: We are seeking a highly skilled Cloud Technical Lead with expertise in Data Engineering who will work in multi-disciplinary environments harnessing data to provide valuable impact for our clients. The Cloud Technical Lead will work closely with technology and functional teams to drive the migration of legacy on-premises data systems/platforms to cloud-based solutions. The successful candidate will need to develop intimate knowledge of SYF's key data domains (originations, loan activity, collections, etc.) and maintain a holistic view across SYF functions to minimize redundancies and optimize the analytics environment.

Key Responsibilities:
Manage the end-to-end project lifecycle, including planning, execution, and delivery of cloud-based data engineering projects.
Provide guidance on suitable options, and design and create data pipelines for analytical solutions across data lakes, data warehouses and cloud implementations.
Architect and design robust data pipelines and ETL processes leveraging Ab Initio and Amazon Redshift.
Ensure data integration, transformation, and storage processes are optimized for scalability and performance in cloud environments.
Ensure data security, governance, and compliance in the cloud infrastructure.
Provide leadership and guidance to data engineering teams, ensuring best practices are followed.
Ensure timely delivery of high-quality solutions in an Agile environment.
Required Skills/Knowledge:
Minimum 10+ years of experience with a Bachelor's degree in Computer Science or a similar technical field of study, or in lieu of a degree 12+ years of relevant experience.
Minimum 10+ years of experience managing large-scale data platform (data warehouse/data lake/cloud) environments.
Minimum 10+ years of financial services experience.
Minimum 6+ years of experience working with data warehouses/data lakes/cloud.
6+ years of hands-on programming experience in ETL tools; Ab Initio or Informatica highly preferred. Must be able to read and reverse-engineer the logic in Ab Initio graphs.
Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc.
Working knowledge of Hive, Spark, Kafka and other data lake technologies.
Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution.
Experience analyzing system requirements and implementing migration methods for existing data.
Ability to develop and maintain strong collaborative relationships at all levels across IT and the business.
Excellent written and oral communication skills, along with a strong ability to lead and influence others.
Experience working iteratively in a fast-paced agile environment.
Demonstrated ability to drive change and work effectively across business and geographical boundaries.
Expertise in evaluating technology and solution engineering, with a strong focus on architecture and deployment of new technology.
Superior decision-making, client relationship, and vendor management skills.

Desired Skills/Knowledge:
Prior work experience in a credit card/banking/fintech company.
Experience dealing with sensitive data in a highly regulated environment.
Demonstrated implementation of complex and innovative solutions.
Agile experience using JIRA or similar Agile tools.

Eligibility Criteria:
Bachelor's degree in Computer Science or a similar technical field of study (Master's degree preferred).
Minimum 12+ years of experience managing large-scale data platform (data warehouse/data lake/cloud) environments.
Minimum 12+ years of financial services experience.
Minimum 8+ years of experience working with Oracle data warehouses/data lakes/cloud.
8+ years of programming experience in ETL tools; Ab Initio or Informatica highly preferred. Must be able to read and reverse-engineer the logic in Ab Initio graphs.
Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc.
Rigorous data analysis through SQL in Oracle and various Hadoop technologies.
Involvement in large-scale data analytics migration from on-premises to a public cloud.
Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution.
Experience analyzing system requirements and implementing migration methods for existing data.
Excellent written and oral communication skills, along with a strong ability to lead and influence others.
Experience working iteratively in a fast-paced agile environment.

Work Timings: 3:00 PM IST to 12:00 AM IST. (This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM and 11:30 AM Eastern Time; timings are anchored to US Eastern hours and will adjust twice a year locally. This window is for meetings with India and US teams. The remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs.
Please discuss this with the hiring manager for more details.)

For Internal Applicants:
Understand the criteria and mandatory skills required for the role before applying.
Inform your manager and HRM before applying for any role on Workday.
Ensure that your professional profile is updated (fields such as education, prior experience, other skills); uploading your updated resume (Word or PDF format) is mandatory.
You must not be on any corrective action plan (First Formal/Final Formal, PIP).
Only L10+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply.

Level/Grade: 12
Job Family Group: Information Technology

Posted 1 week ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

JOB_POSTING-3-72796-1

Job Description

Role Title: VP, Data Engineering Tech Lead (L12)

Company Overview: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #5 among India's Best Companies to Work for 2023, #21 on LinkedIn's Top Companies in India list, and received Top 25 BFSI recognition from Great Place To Work India. We have been ranked Top 5 among India's Best Workplaces in Diversity, Equity, and Inclusion, and Top 10 among India's Best Workplaces for Women in 2022. We offer 100% Work from Home flexibility for all our Functional employees and provide some of the best-in-class Employee Benefits and Programs catering to work-life balance and overall well-being. In addition, we have Regional Engagement Hubs across India and a co-working space in Bangalore.

Organizational Overview: This role will be part of the Data Architecture & Analytics group within the CTO organization. The data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading (ETL); collaborating with cross-functional teams to integrate new data sources and ensure data quality and consistency; and building and maintaining data models that facilitate data access and analysis by Data Scientists and Analysts. The team is also responsible for the SYF public cloud platform and services: it governs the health, performance, capacity, and costs of resources, ensures adherence to service levels, and builds well-defined processes for cloud application development and service enablement.

Role Summary/Purpose: We are seeking a highly skilled Cloud Technical Lead with expertise in Data Engineering who will work in multi-disciplinary environments harnessing data to provide valuable impact for our clients. The Cloud Technical Lead will work closely with technology and functional teams to drive the migration of legacy on-premises data systems/platforms to cloud-based solutions. The successful candidate will need to develop intimate knowledge of SYF's key data domains (originations, loan activity, collections, etc.) and maintain a holistic view across SYF functions to minimize redundancies and optimize the analytics environment.

Key Responsibilities:
Manage the end-to-end project lifecycle, including planning, execution, and delivery of cloud-based data engineering projects.
Provide guidance on suitable options, and design and create data pipelines for analytical solutions across data lakes, data warehouses and cloud implementations.
Architect and design robust data pipelines and ETL processes leveraging Ab Initio and Amazon Redshift.
Ensure data integration, transformation, and storage processes are optimized for scalability and performance in cloud environments.
Ensure data security, governance, and compliance in the cloud infrastructure.
Provide leadership and guidance to data engineering teams, ensuring best practices are followed.
Ensure timely delivery of high-quality solutions in an Agile environment.
Required Skills/Knowledge:
Minimum 10+ years of experience with a Bachelor's degree in Computer Science or a similar technical field of study, or in lieu of a degree 12+ years of relevant experience.
Minimum 10+ years of experience managing large-scale data platform (data warehouse/data lake/cloud) environments.
Minimum 10+ years of financial services experience.
Minimum 6+ years of experience working with data warehouses/data lakes/cloud.
6+ years of hands-on programming experience in ETL tools; Ab Initio or Informatica highly preferred. Must be able to read and reverse-engineer the logic in Ab Initio graphs.
Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc.
Working knowledge of Hive, Spark, Kafka and other data lake technologies.
Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution.
Experience analyzing system requirements and implementing migration methods for existing data.
Ability to develop and maintain strong collaborative relationships at all levels across IT and the business.
Excellent written and oral communication skills, along with a strong ability to lead and influence others.
Experience working iteratively in a fast-paced agile environment.
Demonstrated ability to drive change and work effectively across business and geographical boundaries.
Expertise in evaluating technology and solution engineering, with a strong focus on architecture and deployment of new technology.
Superior decision-making, client relationship, and vendor management skills.

Desired Skills/Knowledge:
Prior work experience in a credit card/banking/fintech company.
Experience dealing with sensitive data in a highly regulated environment.
Demonstrated implementation of complex and innovative solutions.
Agile experience using JIRA or similar Agile tools.

Eligibility Criteria:
Bachelor's degree in Computer Science or a similar technical field of study (Master's degree preferred).
Minimum 12+ years of experience managing large-scale data platform (data warehouse/data lake/cloud) environments.
Minimum 12+ years of financial services experience.
Minimum 8+ years of experience working with Oracle data warehouses/data lakes/cloud.
8+ years of programming experience in ETL tools; Ab Initio or Informatica highly preferred. Must be able to read and reverse-engineer the logic in Ab Initio graphs.
Hands-on experience with cloud platforms such as S3, Redshift, Snowflake, etc.
Rigorous data analysis through SQL in Oracle and various Hadoop technologies.
Involvement in large-scale data analytics migration from on-premises to a public cloud.
Strong familiarity with data governance, data lineage, data processes, DML, and data architecture control execution.
Experience analyzing system requirements and implementing migration methods for existing data.
Excellent written and oral communication skills, along with a strong ability to lead and influence others.
Experience working iteratively in a fast-paced agile environment.

Work Timings: 3:00 PM IST to 12:00 AM IST. (This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM and 11:30 AM Eastern Time; timings are anchored to US Eastern hours and will adjust twice a year locally. This window is for meetings with India and US teams. The remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs.
Please discuss this with the hiring manager for more details.)

For Internal Applicants:
Understand the criteria and mandatory skills required for the role before applying.
Inform your manager and HRM before applying for any role on Workday.
Ensure that your professional profile is updated (fields such as education, prior experience, other skills); uploading your updated resume (Word or PDF format) is mandatory.
You must not be on any corrective action plan (First Formal/Final Formal, PIP).
Only L10+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply.

Level/Grade: 12
Job Family Group: Information Technology

Posted 1 week ago

Apply

3.0 years

0 Lacs

Kochi, Kerala, India

On-site

Position: Data Engineer
Experience: 3+ years
Location: Trivandrum (Hybrid)
Salary: Up to 8 LPA

Job Summary: We are seeking a highly motivated and skilled Data Engineer with 3+ years of experience to join our growing data team. In this role, you will be instrumental in designing, building, and maintaining robust, scalable, and efficient data pipelines and infrastructure. You will work closely with data scientists, analysts, and other engineering teams to ensure data availability, quality, and accessibility for various analytical and machine learning initiatives.

Key Responsibilities:

Design and Development:
Design, develop, and optimize scalable ETL/ELT pipelines to ingest, transform, and load data from diverse sources into data warehouses/lakes.
Implement data models and schemas that support analytical and reporting requirements.
Build and maintain robust data APIs for data consumption by various applications and services.

Data Infrastructure:
Contribute to the architecture and evolution of our data platform, leveraging cloud services (AWS, Azure, GCP) or on-premise solutions.
Ensure data security, privacy, and compliance with relevant regulations.
Monitor data pipelines for performance, reliability, and data quality, implementing alerting and anomaly detection.

Collaboration & Optimization:
Collaborate with data scientists, business analysts, and product managers to understand data requirements and translate them into technical solutions.
Optimize existing data processes for efficiency, cost-effectiveness, and performance.
Participate in code reviews, contribute to documentation, and uphold best practices in data engineering.

Troubleshooting & Support:
Diagnose and resolve data-related issues, ensuring minimal disruption to data consumers.
Provide support and expertise to teams consuming data from the data platform.

Required Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related quantitative field.
3+ years of hands-on experience as a Data Engineer or in a similar role.
Strong proficiency in at least one programming language commonly used for data engineering (e.g., Python, Java, Scala).
Extensive experience with SQL and relational databases (e.g., PostgreSQL, MySQL, SQL Server).
Proven experience with ETL/ELT tools and concepts.
Experience with data warehousing concepts and technologies (e.g., Snowflake, Redshift, BigQuery, Azure Synapse, Databricks).
Familiarity with cloud platforms (AWS, Azure, or GCP) and their data services (e.g., S3, EC2, Lambda, Glue, Data Factory, Blob Storage, BigQuery, Dataflow).
Understanding of data modeling techniques (e.g., dimensional modeling, Kimball, Inmon).
Experience with version control systems (e.g., Git).
Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications:
Master's degree in a relevant field.
Experience with Apache Spark (PySpark, Scala Spark) or other big data processing frameworks.
Familiarity with NoSQL databases (e.g., MongoDB, Cassandra).
Experience with data streaming technologies (e.g., Kafka, Kinesis).
Knowledge of containerization technologies (e.g., Docker, Kubernetes).
Experience with workflow orchestration tools (e.g., Apache Airflow, Azure Data Factory, AWS Step Functions).
Understanding of DevOps principles as applied to data pipelines.
Prior experience in Telecom is a plus.
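For a flavor of the orchestration work this posting lists, here is a minimal sketch of a daily ELT DAG in Apache Airflow (2.x style; the `schedule` argument assumes Airflow 2.4+). Task bodies, IDs, and names are hypothetical placeholders.

```python
# Sketch of a daily ELT orchestration DAG (Airflow 2.x style).
# Task bodies and identifiers are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Pull yesterday's records from a source API into object storage."""
    ...

def load():
    """Bulk-load the staged files into the warehouse."""
    ...

def transform():
    """Run warehouse-side SQL to build reporting tables."""
    ...

with DAG(
    dag_id="daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)

    # A failure at any step halts downstream tasks and surfaces an alert.
    t_extract >> t_load >> t_transform
```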

Posted 1 week ago

Apply

7.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

7+ years of experience

Job Title: AWS Data Engineer
Location: Navi Mumbai
Job type: Full-time
Work mode: WFO (5 days)

Key Responsibilities:
Design, develop, and maintain scalable ETL/ELT data pipelines using AWS cloud technologies.
Work with PySpark and SQL to process large datasets efficiently.
Manage services such as Apache Airflow, Redshift, EMR, CloudWatch, and S3 for data processing and orchestration.
Implement CI/CD pipelines using Azure DevOps for seamless deployment and automation.
Monitor and optimize data workflows for performance, cost, and reliability.
Utilize Jupyter Notebooks for data exploration and analysis.
Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to ensure smooth data integration.
Use Unix commands and Git for version control and automation.
Ensure best practices for data governance, security, and compliance in cloud environments.
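To illustrate the monitoring responsibility above, here is a small hypothetical sketch that computes a data-quality metric in PySpark and publishes it to CloudWatch with boto3, so an alarm can flag regressions. The namespace, metric name, S3 path, and column are invented.

```python
# Sketch: compute a data-quality metric in PySpark, publish to CloudWatch.
# Namespace, metric name, S3 path, and column are hypothetical placeholders.
import boto3
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-monitor").getOrCreate()
df = spark.read.parquet("s3://example-curated-zone/orders/")

total = df.count()
null_ids = df.filter(F.col("customer_id").isNull()).count()
null_rate_pct = (100.0 * null_ids / total) if total else 0.0

# Emit the metric; a CloudWatch alarm on this metric can page the team
# whenever the null rate drifts past an agreed threshold.
cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")
cloudwatch.put_metric_data(
    Namespace="ExampleDataPipelines",
    MetricData=[{
        "MetricName": "OrdersCustomerIdNullRate",
        "Value": null_rate_pct,
        "Unit": "Percent",
    }],
)
```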

Posted 1 week ago

Apply

10.0 - 15.0 years

20 Lacs

Chennai

Work from Office

Candidate Specification: Any graduate, minimum 10+ years of relevant experience.

Job Description:
Strong hands-on experience with Snowflake, Redshift, and BigQuery.
Proficiency in Data Build Tool (dbt) and SQL-based data modeling and transformation.
Solid understanding of data warehousing concepts, star/snowflake schemas, and performance optimization.
Experience with modern ETL/ELT tools and cloud-based data pipeline frameworks.
Familiarity with version control systems (e.g., Git) and CI/CD practices for data workflows.
Strong problem-solving skills and attention to detail.
Excellent interpersonal skills.

Contact Person: Deepikad
Email ID: deepikad@gojobs.biz
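For context on the dbt-style modeling work described: dbt models are essentially templated SQL. Below is a hypothetical star-schema fact build of that kind, shown as SQL executed through a generic DB-API connection; all table and column names are invented, and CREATE OR REPLACE TABLE is Snowflake/BigQuery syntax.

```python
# Sketch: star-schema fact build of the kind dbt models encode.
# Table and column names are invented; `conn` is any DB-API connection
# to Snowflake or BigQuery (CREATE OR REPLACE TABLE is their syntax).
FACT_SALES_SQL = """
SELECT
    o.order_id,
    d.date_key,          -- surrogate key from the date dimension
    c.customer_key,      -- surrogate key from the customer dimension
    p.product_key,       -- surrogate key from the product dimension
    o.quantity,
    o.quantity * o.unit_price AS gross_revenue
FROM staging.orders o
JOIN dim_date d     ON d.calendar_date = CAST(o.order_ts AS DATE)
JOIN dim_customer c ON c.customer_id = o.customer_id
JOIN dim_product p  ON p.product_id = o.product_id
"""

def build_fact_sales(conn):
    """Materialize the fact table; dbt would manage this as a model."""
    cur = conn.cursor()
    cur.execute(f"CREATE OR REPLACE TABLE fact_sales AS {FACT_SALES_SQL}")
    cur.close()
```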

Posted 1 week ago

Apply

7.0 years

32 - 35 Lacs

Gurugram, Haryana, India

Remote

Experience: 7+ years
Salary: INR 3,200,000-3,500,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Socialtrait)
(Note: This is a requirement for one of Uplers' clients, a California-based AI-driven insights and audience analytics agency.)

What do you need for this opportunity?
Must-have skills: BI products, BigQuery, embedding AI into SaaS products, predictive analytics, Power BI, Snowflake, Google Cloud Platform, Python, SQL, M code

The client is looking for: Senior Power BI & Consumer Insights Specialist (Remote, Full-time, Data & Insights)

Why this role matters: Socialtrait's AI platform captures millions of real-time consumer signals through virtual AI communities. Socialtrait AI is a fast-growing analytics and intelligence platform helping brands understand their audience, performance, and competitors across digital and social channels. We're driven by data and obsessed with delivering actionable insights that make an impact. We need a builder who can transform those streams into razor-sharp dashboards that brand, product, and marketing teams act on daily. You'll be the go-to Power BI expert, owning the full build-run-optimise cycle of dashboards that guide C-level decisions for global consumer brands. No line management, pure impact.

What You'll Do:
Design & ship dashboards end-to-end – wireframe, model, develop, and deploy Power BI workspaces that surface campaign performance, competitive moves, social buzz, and conversion KPIs in minutes, not weeks.
Tell insight-rich stories – turn data into narratives that brand managers, CMOs, and product teams can take to the board.
Engineer robust data models – build scalable semantic layers across SQL warehouses (BigQuery, Snowflake, Redshift) and behavioural APIs.
Push Power BI to its limits – advanced DAX, M code, incremental refresh, and performance tuning so reports load in under three seconds.
Embed with clients & stakeholders – join working sessions with Fortune 500 insights teams; translate hypotheses into metrics and experiments.
Prototype the future – pilot AI-assisted insight generation, embedded analytics, and real-time sentiment widgets.

The calibre we're after:
7+ years crafting enterprise BI products, 4+ years deep in Power BI.
Proven success delivering dashboards for consumer-facing organisations (CPG, retail, media, fintech, or D2C) where insights directly shaped product or campaign strategy.
Master-level DAX, Power Query, and SQL; comfortable scripting in Python or R for heavier modelling.
Fluency with cloud data platforms.
Demonstrated ability to influence executives through data: your dashboards have redirected budgets or product roadmaps.
Bonus: predictive analytics, time-series forecasting, or embedding BI into SaaS products.

How We'll Support You:
Competitive salary plus meaningful equity upside.
A culture that values truthful insights over buzzwords; your work becomes the daily heartbeat of decision-making.

Our hiring process:
Intro chat (30 min) – mutual fit and mission alignment.
Technical deep-dive – walk us through a dashboard you're proud of (screenshare).
Case challenge – redesign a key view from an anonymised consumer dataset in Power BI and discuss your choices.
Exec panel – strategy discussion with CEO, COO, and Head of Product.
Offer & roadmap session – align on your first-90-day impact plan.
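As a small illustration of the Python-for-heavier-modelling skill the role lists, here is a hypothetical pandas sketch that pre-aggregates campaign KPIs before they feed a BI semantic layer. The file name and columns are invented; impression, click, and conversion are assumed to be 0/1 event flags.

```python
# Sketch: pre-aggregating campaign KPIs in pandas before they land in a
# BI semantic layer. File name and columns are hypothetical; impression,
# click, and conversion are assumed to be 0/1 event flags.
import pandas as pd

events = pd.read_parquet("campaign_events.parquet")

kpis = (events
        .assign(date=pd.to_datetime(events["event_ts"]).dt.date)
        .groupby(["campaign_id", "date"])
        .agg(impressions=("impression", "sum"),
             clicks=("click", "sum"),
             conversions=("conversion", "sum")))

# Derived rates the dashboard surfaces directly.
kpis["ctr"] = kpis["clicks"] / kpis["impressions"]
kpis["cvr"] = kpis["conversions"] / kpis["clicks"]
print(kpis.head())
```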
Ready to build the dashboards that power the next wave of consumer-insight AI? Let's talk.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

3.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Data Engineer
Location: Chennai
Experience Level: 3-6 years
Employment Type: Full-time

About Us: SuperOps is a SaaS start-up empowering IT service providers and IT teams around the world with technology that is cutting-edge, future-ready, and powered by AI. We are backed by marquee investors like Addition, March Capital, Matrix Partners India, Elevation Capital, and Tanglin Venture Partners. Founded by Arvind Parthiban, a serial entrepreneur, and Jayakumar Karumbasalam, a veteran in the IT space, SuperOps is built on the back of a team of engineers, product architects, designers, and AI experts who want to reshape the world of IT. Now we have taken on a market that is plagued by legacy solutions and subpar experiences. The potential to do something great is immense. So if you love to grow, be part of a kickass team that inspires you to do more, and make an everlasting mark in the world of IT, SuperOps is the place to be. We also believe that the journey is as important as the destination: we want to build the best products out there and have fun while doing so. So come, and be part of our A-star team of superheroes.

Role Summary: We are seeking a skilled and motivated Data Engineer to join our growing team. In this role, you will be instrumental in designing, building, and maintaining our data infrastructure, ensuring that reliable and timely data is available for analysis across the organization. You will work closely with various teams to integrate data from diverse sources and transform it into actionable insights that drive our business forward.

Key Responsibilities:
Design, develop, and maintain scalable and robust data pipelines to ingest data from various sources, including CRM systems (e.g., Salesforce), billing platforms, product analytics tools (e.g., Mixpanel, Amplitude), and marketing platforms (e.g., Google Ads, HubSpot).
Build, manage, and optimize our data warehouse to serve as the central repository for all business-critical data.
Implement and manage efficient data synchronization processes between source systems and the data warehouse.
Oversee the storage and management of raw data, ensuring data integrity and accessibility.
Develop and maintain data transformation pipelines (ETL/ELT) to process raw data into clean, structured formats suitable for analytics, reporting, and dashboarding.
Ensure seamless synchronization and consistency between raw and processed data layers.
Collaborate with data analysts, product managers, and other stakeholders to understand data needs and deliver appropriate data solutions.
Monitor data pipeline performance, troubleshoot issues, and implement improvements for efficiency and reliability.
Document data processes, architectures, and definitions.

Qualifications:
5 to 8 years of proven experience as a Data Engineer.
Strong experience in designing, building, and maintaining data pipelines and ETL/ELT processes.
Proficiency with data warehousing concepts and technologies (e.g., BigQuery, Redshift, Snowflake, Databricks).
Experience integrating data from various APIs and databases (SQL, NoSQL).
Solid understanding of data modeling principles.
Proficiency in programming languages commonly used in data engineering (e.g., Python, SQL).
Experience with workflow orchestration tools (e.g., Airflow, Prefect, Dagster).
Familiarity with cloud platforms (e.g., AWS, GCP, Azure).
Excellent problem-solving and analytical skills.
Strong communication and collaboration abilities.

Bonus Points:
Experience working in a SaaS company.
Understanding of key SaaS business metrics (e.g., MRR, ARR, Churn, LTV, CAC).
Experience with data visualization tools (e.g., Tableau, Looker, Power BI).
Familiarity with containerization technologies (e.g., Docker, Kubernetes).

Why Join Us?
Impact: You'll work on a product that is revolutionising IT service management for MSPs and IT teams worldwide.
Growth: SuperOps is growing rapidly, and there are ample opportunities for career progression and leadership roles.
Collaboration: Work with talented engineers, designers, and product managers in a supportive and innovative environment.
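To ground the API-ingestion work described in this posting, here is a minimal hypothetical sketch of paginated extraction from a SaaS REST API into newline-delimited JSON, ready for warehouse loading. The URL, auth header, pagination parameters, and response shape are all invented.

```python
# Sketch: paginated extraction from a SaaS REST API into newline-delimited
# JSON for warehouse loading. URL, auth, and response shape are hypothetical.
import json
import requests

BASE_URL = "https://api.example-crm.com/v1/accounts"
HEADERS = {"Authorization": "Bearer <token>"}  # fetch from a secret store

def extract_accounts(out_path="accounts.jsonl"):
    page, written = 1, 0
    with open(out_path, "w") as fh:
        while True:
            resp = requests.get(
                BASE_URL, headers=HEADERS,
                params={"page": page, "per_page": 200}, timeout=30,
            )
            resp.raise_for_status()
            rows = resp.json().get("data", [])
            if not rows:
                break  # no more pages
            for row in rows:
                fh.write(json.dumps(row) + "\n")
            written += len(rows)
            page += 1
    return written
```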

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About McDonald's: One of the world's largest employers with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary: Data Accessibility Lead (Sr Manager, Data Operations & Management). As the Data Accessibility Lead, you will drive the enterprise-wide strategy for enabling secure, governed, and scalable access to data for AI/ML, analytics, and business operations. You will lead cross-functional teams responsible for managing the data lifecycle, enforcing data quality standards, and implementing modern governance tooling such as Collibra. This role is pivotal to operationalizing data accessibility across cloud platforms like GCP and AWS, including BigQuery, Redshift, and other core data infrastructure.

Who we are looking for:

Primary Responsibilities:

Strategic Data Accessibility Leadership:
Set the strategic direction for enterprise data accessibility, ensuring consistent and secure access across teams and platforms.
Lead the implementation and adoption of data governance tools (e.g., Collibra) to manage metadata, lineage, and data policies.
Champion enterprise adoption of semantic and technical metadata practices for improved discoverability and data use.

AI/ML Enablement:
Oversee the availability, quality, and governance of data used for AI/ML model development and lifecycle management.
Ensure that model training, validation, and deployment pipelines have reliable and timely access to governed datasets.
Partner with MLOps, engineering, and product teams to embed data accessibility standards in model workflows.

Cloud Platform Integration:
Oversee data accessibility initiatives in GCP and AWS, including integration with BigQuery, Redshift, and cloud-native storage.
Develop strategies for managing access controls, encryption, and auditability of data assets across cloud environments.

Data Governance & Quality Oversight:
Define and enforce enterprise data quality standards, including data profiling, validation, and exception handling workflows.
Ensure compliance with internal data policies and external regulations (e.g., GDPR, HIPAA, CCPA).
Lead enterprise initiatives around data lifecycle management, from ingestion and processing to archival and retention.

Cross-Functional Collaboration & Leadership:
Lead and mentor a team of data operations professionals and collaborate with data engineering, governance, AI/ML, and compliance teams.
Provide executive-level insights and recommendations for improving enterprise data accessibility, quality, and governance practices.
Drive alignment between business units, technical teams, and compliance functions through effective data stewardship.

Skills:
8+ years of experience in data operations, data governance, or data quality management, with at least 3 years in a strategic leadership capacity.
Strong hands-on and strategic experience with:
Collibra or similar data governance platforms
Cloud platforms: Google Cloud Platform (GCP), Amazon Web Services (AWS)
Enterprise data warehouses such as BigQuery, Redshift, or Snowflake
AI/ML model lifecycle support and MLOps integration
Data quality frameworks, metadata management, and data access policy enforcement
SQL

Strong analytical and problem-solving skills; ability to work across highly matrixed, global organizations.
Exceptional communication, leadership, and stakeholder management skills.
Bachelor's or Master's degree in Data Science, Information Systems, or a related field.

Preferred Experience:
Experience in Retail or Quick Service Restaurant (QSR) environments with operational and real-time analytics needs.
Familiarity with data mesh concepts, data product ownership, and domain-based accessibility strategies.
Experience navigating privacy, residency, or regulatory compliance in global data environments.
Current GCP Associate (or Professional) certification.

Work location: Hyderabad, India
Work pattern: Full-time role.
Work mode: Hybrid.

Additional Information: McDonald's is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald's provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. McDonald's Capability Center India Private Limited ("McDonald's in India") is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture. At McDonald's in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald's in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.
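As an illustration of the data-profiling, validation, and exception-handling workflows this posting describes, here is a small hypothetical pandas sketch; the input file, columns, and policy threshold are all invented.

```python
# Sketch: lightweight data-quality profiling of the kind governance
# workflows enforce. Input file, columns, and threshold are hypothetical.
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Per-column null rate and distinct count, for quality review."""
    return pd.DataFrame({
        "null_rate": df.isna().mean(),
        "distinct": df.nunique(),
    })

def validate(df: pd.DataFrame, max_null_rate: float = 0.02) -> list[str]:
    """Return the columns breaching the null-rate policy threshold."""
    report = profile(df)
    return report.index[report["null_rate"] > max_null_rate].tolist()

customers = pd.read_parquet("customers.parquet")  # hypothetical extract
breaches = validate(customers)
if breaches:
    # In a real setup this would open an exception-handling workflow in
    # the governance tool (e.g., Collibra) rather than just printing.
    print("Policy breaches:", breaches)
```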

Posted 1 week ago

Apply

4.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About McDonald's: One of the world's largest employers with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary: Data Accessibility - Supervisor, Data Operations & Management. In this role, you will drive the enterprise-wide strategy for enabling secure, governed, and scalable access to data for AI/ML, analytics, and business operations. You will lead cross-functional teams responsible for managing the data lifecycle, enforcing data quality standards, and implementing modern governance tooling such as Collibra. This role is pivotal to operationalizing data accessibility across cloud platforms like GCP and AWS, including BigQuery, Redshift, and other core data infrastructure.

Who we are looking for:

Primary Responsibilities:

Strategic Data Accessibility Leadership:
Set the strategic direction for enterprise data accessibility, ensuring consistent and secure access across teams and platforms.
Lead the implementation and adoption of data governance tools (e.g., Collibra) to manage metadata, lineage, and data policies.
Champion enterprise adoption of semantic and technical metadata practices for improved discoverability and data use.

AI/ML Enablement:
Oversee the availability, quality, and governance of data used for AI/ML model development and lifecycle management.
Ensure that model training, validation, and deployment pipelines have reliable and timely access to governed datasets.
Partner with MLOps, engineering, and product teams to embed data accessibility standards in model workflows.

Cloud Platform Integration:
Oversee data accessibility initiatives in GCP and AWS, including integration with BigQuery, Redshift, and cloud-native storage.
Develop strategies for managing access controls, encryption, and auditability of data assets across cloud environments.

Data Governance & Quality Oversight:
Define and enforce enterprise data quality standards, including data profiling, validation, and exception handling workflows.
Ensure compliance with internal data policies and external regulations (e.g., GDPR, HIPAA, CCPA).
Lead enterprise initiatives around data lifecycle management, from ingestion and processing to archival and retention.

Cross-Functional Collaboration & Leadership:
Lead and mentor a team of data operations professionals and collaborate with data engineering, governance, AI/ML, and compliance teams.
Provide executive-level insights and recommendations for improving enterprise data accessibility, quality, and governance practices.
Drive alignment between business units, technical teams, and compliance functions through effective data stewardship.

Skills:
4 to 7 years of experience in data operations, data governance, or data quality management, with at least 3 years in a strategic leadership capacity.
Strong hands-on and strategic experience with:
Collibra or similar data governance platforms
Cloud platforms: Google Cloud Platform (GCP), Amazon Web Services (AWS)
Enterprise data warehouses such as BigQuery, Redshift, or Snowflake
AI/ML model lifecycle support and MLOps integration
Data quality frameworks, metadata management, and data access policy enforcement
SQL and enterprise-scale ETL/ELT pipelines

Strong analytical and problem-solving skills; ability to work across highly matrixed, global organizations.
Exceptional communication, leadership, and stakeholder management skills.
Bachelor's or Master's degree in Data Science, Information Systems, or a related field.

Preferred Experience:
Experience in Retail or Quick Service Restaurant (QSR) environments with operational and real-time analytics needs.
Familiarity with data mesh concepts, data product ownership, and domain-based accessibility strategies.
Experience navigating privacy, residency, or regulatory compliance in global data environments.
Current GCP Associate (or Professional) certification.

Work location: Hyderabad, India
Work pattern: Full-time role.
Work mode: Hybrid.

Additional Information: McDonald's is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald's provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. McDonald's Capability Center India Private Limited ("McDonald's in India") is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture. At McDonald's in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald's in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Hevo: Postman, Zepto, ThoughtSpot, Whatfix, Shopify, DoorDash, and thousands of other data-driven companies share one thing: they all use Hevo Data's fully managed automated pipelines to consolidate their data from multiple sources like databases, marketing applications, cloud storage, SDKs, streaming services, etc. We are a San Francisco/Bangalore-based company with 2000+ customers spread across 40+ countries in domains such as e-commerce, financial technology, and healthcare. Strongly backed by marquee investors like Sequoia Capital, Chiratae, and Qualgro, we have raised $43 million to date and are looking forward to our next phase of hyper-growth!

Why do we exist? Every company today wants to leverage cutting-edge technology - Artificial Intelligence, Machine Learning, and Predictive Analytics - to make smarter business decisions. Data is the foundational block on which these advanced techniques can be applied. However, every company's business users struggle to access accurate and reliable data. Data resides fragmented across the hundreds of business software tools that businesses use. Most data operators spend their time manually consolidating it and too little time deriving insights. If this 'collection' can be automated, it can lead to faster business decisions, unlock exponential growth, and deliver a superior experience to customers.

At Hevo, our mission is to enable every company to be data-driven. We started on this journey 4 years back, and as the first step in this direction, we built our first product: "Data Pipeline", or simply "Pipeline". Hevo Pipeline is a no-code platform that helps companies connect all their data sources within the company to get a unified view of their business. The platform offers integrations with 150+ data sources, such as databases, SaaS applications, advertising, and channels. Today, we enable nearly 2000 companies across more than 40 countries to be more data-driven. We aim to make the technology so simple that anyone can solve their data problems without being limited by a lack of technical skills. Our customers love us: Hevo is rated at the top of the G2 crowd in the Data Pipeline space (Hevo Data Reviews 2025: Details, Pricing, & Features | G2).

About the Role: Platform Product Owner – Data Pipelines. We're looking for a product-driven, data-savvy Platform Product Owner to lead the evolution of Hevo's Data Pipelines Platform. This role blends strategic product thinking with operational excellence and offers full ownership, from defining product outcomes to driving delivery health and platform reliability. You'll work closely with Engineering, Architecture, and cross-functional teams to shape the platform roadmap, define user value, and ensure successful outcomes through measurable impact. If you're passionate about building scalable, high-impact data products and excel at balancing strategy with execution, this role is for you.
Key Responsibilities:

Product Ownership & Strategy:
Define and evolve the product vision and roadmap in collaboration with Product Leadership.
Translate vision into a value-driven, structured product backlog focused on scalability, reliability, and user outcomes.
Craft clear user stories with well-defined acceptance criteria and success metrics.
Partner with Engineering and Architecture to design and iterate on platform capabilities aligned with long-term strategy.
Analyze competitive products to identify experience gaps, technical differentiators, and new opportunities.
Ensure platform capabilities deliver consistent value to internal teams and end users.

Product Operations & Delivery Insights:
Define and track key product health metrics (e.g., uptime, throughput, SLA adherence, adoption); see the sketch after this posting.
Foster a metrics-first culture in product delivery, ensuring every backlog item ties to measurable outcomes.
Triage bugs and feature requests, assess impact, and feed insights into prioritization and planning.
Define post-release success metrics and establish feedback loops to evaluate feature adoption and performance.
Build dashboards and reporting frameworks to increase visibility into product readiness, velocity, and operations.
Improve practices around backlog hygiene, estimation accuracy, and story lifecycle management.
Ensure excellence in release planning and launch execution to meet quality and scalability benchmarks.

Collaboration & Communication:
Champion the product vision and user needs across all stages of development.
Collaborate with Support, Customer Success, and Product Marketing to ensure customer insights inform product direction.
Develop enablement materials (e.g., internal walkthroughs, release notes) to support go-to-market and support teams.
Drive alignment and accountability throughout the product lifecycle, from planning to post-release evaluation.

Qualifications:

Required:
Bachelor's degree in Computer Science or a related engineering field.
5+ years of experience as a Product Manager/Product Owner, with time spent on platform/infrastructure products at B2B startups.
Hands-on experience with ETL tools or modern data platforms (e.g., Talend, Informatica, AWS Glue, Snowflake, BigQuery, Redshift, Databricks).
Strong understanding of the product lifecycle with an operations-focused mindset.
Proven ability to collaborate with engineering teams to build scalable, reliable features.
Familiarity with data integration, APIs, connectors, and streaming/real-time data pipelines.
Analytical mindset with experience tracking KPIs and making data-informed decisions.
Excellent communication and cross-functional collaboration skills.
Proficiency with agile product development tools (e.g., Jira, Aha!, Linear).

Preferred:
Experience in a data-intensive environment.
Engineer-turned-Product-Manager with a hands-on technical background.
MBA from a Tier-1 institute.
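As referenced in the responsibilities above, here is a hypothetical sketch of computing pipeline-health metrics (throughput, SLA adherence, failure rate) from run logs with pandas. The log schema, column names, and SLA threshold are all invented for illustration.

```python
# Sketch: pipeline-health metrics (throughput, SLA adherence, failures)
# from run logs. Log schema and SLA threshold are hypothetical.
import pandas as pd

SLA_MINUTES = 15  # assume each sync should land within 15 minutes

runs = pd.read_csv("pipeline_runs.csv",
                   parse_dates=["started_at", "finished_at"])
runs["duration_min"] = (
    (runs["finished_at"] - runs["started_at"]).dt.total_seconds() / 60
)

daily = runs.groupby(runs["started_at"].dt.date).agg(
    runs=("run_id", "count"),
    rows_synced=("rows_synced", "sum"),
    sla_adherence=("duration_min", lambda d: (d <= SLA_MINUTES).mean()),
    failure_rate=("status", lambda s: (s != "success").mean()),
)
print(daily.tail())  # feed for a product-health dashboard
```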

Posted 1 week ago

Apply