Home
Jobs

1768 Redshift Jobs - Page 34

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

40.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote


Who We Are
Escalent is an award-winning data analytics and advisory firm that helps clients understand human and market behaviors to navigate disruption. As catalysts of progress for more than 40 years, our strategies guide the world’s leading brands. We accelerate growth by creating a seamless flow between primary, secondary, syndicated, and internal business data, providing consulting and advisory services from insights through implementation. Based on a profound understanding of what drives human beings and markets, we identify actions that build brands, enhance customer experiences, inspire product innovation and boost business productivity. We listen, learn, question, discover, innovate, and deliver—for each other and our clients—to make the world work better for people.

Why Escalent?
Once you join our team you will have the opportunity to:
- Access experts across industries for maximum learning opportunities, including Weekly Knowledge Sharing Sessions, LinkedIn Learning, and more
- Gain exposure to a rich variety of research techniques from knowledgeable professionals
- Enjoy a remote-first/hybrid work environment with a flexible schedule
- Obtain insights into the needs and challenges of your clients, to learn how the world’s leading brands use research
- Experience peace of mind working for a company with a commitment to conducting research ethically
- Build lasting relationships with fun colleagues in a culture that values each person

Role Overview
We are looking for a Data Engineer to design, build, and optimize scalable data pipelines and infrastructure that power analytics, machine learning, and business intelligence. You will work closely with data scientists, analysts, and software engineers to ensure efficient data ingestion, transformation, and management.
Roles & Responsibilities
- Design, develop, and maintain scalable ETL/ELT pipelines to extract, transform, and load data from diverse sources
- Build and optimize data storage solutions using SQL and NoSQL databases, data lakes, and cloud warehouses (Snowflake, BigQuery, Redshift)
- Ensure data quality, integrity, and security through automated validation, governance, and monitoring frameworks
- Collaborate with data scientists and analysts to provide clean, structured, and accessible data for reporting and AI/ML models
- Implement best practices for performance tuning, indexing, and query optimization to handle large-scale datasets
- Write clean, structured code as defined in the team’s coding standards and create documentation for best practices
- Stay updated with emerging data engineering technologies, frameworks, architectures, and industry best practices

Required Skills
- Minimum 6 years of experience in Python, SQL, and data processing frameworks (Pandas, Spark, Hadoop)
- Experience with cloud-based data platforms (AWS, Azure, GCP) and services like S3, Glue, Athena, Data Factory, or BigQuery
- Solid understanding of database design, data modeling, and warehouse architectures
- Hands-on experience with ETL/ELT pipelines and workflow orchestration tools (Apache Airflow, Prefect, Luigi)
- Knowledge of APIs, RESTful services, and integrating multiple data sources
- Strong problem-solving and debugging skills for large-scale data processing challenges
- Experience with version control systems (Git, GitHub, GitLab)
- Ability to work in a team setting
- Organizational and time management skills

Desirable Skills
- Experience working with Agile development methodologies
- Experience building self-service data platforms for business users and analysts
- Effective written and verbal communication skills
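The role above centers on ETL/ELT pipelines. As a purely illustrative sketch of the extract-transform-load pattern the posting describes (not this employer's actual stack), here is a minimal pipeline using only the Python standard library; the `orders` table, field names, and sample data are invented for the example:

```python
import csv
import io
import sqlite3

# Extract: parse a raw CSV export (stands in for a source-system feed).
raw = "order_id,amount\n1,120.50\n2,80.00\n3,99.99\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast types and drop invalid records.
clean = [(int(r["order_id"]), float(r["amount"]))
         for r in rows if float(r["amount"]) > 0]

# Load: write into a warehouse stand-in (SQLite in place of Redshift/Snowflake).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)", clean)
total = round(con.execute("SELECT SUM(amount) FROM orders").fetchone()[0], 2)
print(total)  # 300.49
```

In a production pipeline each step would be a separate task in an orchestrator such as Airflow or Prefect (both named in the posting), so failures can be retried per step.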

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Description
AWS Infrastructure Services owns the design, planning, delivery, and operation of all AWS global infrastructure. In other words, we’re the people who keep the cloud running. We support all AWS data centers and all of the servers, storage, networking, power, and cooling equipment that ensure our customers have continual access to the innovation they rely on. We work on the most challenging problems, with thousands of variables impacting the supply chain — and we’re looking for talented people who want to help. You’ll join a diverse team of software, hardware, and network engineers, supply chain specialists, security experts, operations managers, and other vital roles. You’ll collaborate with people across AWS to help us deliver the highest standards for safety and security while providing seemingly infinite capacity at the lowest possible cost for our customers. And you’ll experience an inclusive culture that welcomes bold ideas and empowers you to own them to completion.

Amazon Web Services (AWS) provides a highly reliable, scalable, and low-cost cloud platform that powers thousands of businesses in over 190 countries. The AWS Infrastructure Supply Chain (AIS-SC) organization works to deliver innovative solutions to source, build, and maintain our socially responsible data center supply chains. We are a team of highly motivated, engaged, and responsive professionals who enable the core sustainable infrastructure of AWS. Come join our team and be a part of history as we deliver results for the largest cloud services company on Earth!

Do you love problem solving? Do you enjoy learning new ideas and applying them to problems? Are you looking for real-world engineering challenges? Do you dream about elegant, high-quality solutions? Want to be part of an amazing team that delivers first-class analytical solutions to our business worldwide?
AWS is seeking a highly motivated and passionate Data Engineer responsible for designing, developing, testing, and deploying Supply Chain Analytical Solutions. In this role you will collaborate with business leaders, work backwards from customers, identify problems, propose innovative solutions, relentlessly raise standards, and have a positive impact on the AWS Infrastructure Supply Chain organization. You will work closely with a team of Business Intelligence Engineers, Data Engineers, and Data Scientists to architect data ingestion from multiple internal processes, tools, and systems into a unified data architecture using AWS Cloud technologies. You will be using the best of the available tools, including Amazon Redshift, EC2, Lambda, DynamoDB, and Elasticsearch. You will be responsible for building augmented analytical solutions that drive business outcomes through seamless integration of analytics and business insight.

Key job responsibilities
In this job, you will:
- Build and improve Data Warehouse / Data Mart solutions by translating business requirements into robust, scalable, and supportable solutions that work well within the overall system architecture
- Design and develop dimensional data models to support the Data Warehouse architecture
- Develop functional databases, applications, and servers to support websites on the back end
- Write effective APIs
- Perform detailed source-system analysis, source-to-target data analysis, and transformation analysis
- Design and build data mappings from multiple source systems to the data warehouse, maintaining source data integrity
- Participate in the full development cycle for ETL: design, implementation, validation, documentation, and maintenance
- Evolve the Data Warehouse environment within the organization, including better information delivery mechanisms and methodologies
- Work to develop the best technical design and approach for new solution development

About The Team
Diverse Experiences: Amazon values diverse experiences.
Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve.

Inclusive Team Culture
AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do.

Mentorship and Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

Basic Qualifications
- Bachelor's degree in Computer Science or a related field, with expertise in SQL and NoSQL databases (e.g., MySQL, MongoDB, PostgreSQL, DynamoDB) and web servers (e.g., Apache, Apache Tomcat)
- Expertise in Python/R and experience with DevOps tools like Docker and Kubernetes, with application deployment using CI/CD
- Experience with distributed version control such as Git, and basic knowledge of Linux environments

Preferred Qualifications
- Knowledge of writing and optimizing SQL queries in a business environment with large-scale, complex datasets
- Knowledge of AWS infrastructure
- Experience communicating with users, other technical teams, and management to collect requirements and describe software product features and technical designs

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADSIPL - Karnataka
Job ID: A2960713

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About Us
Rexera is on a mission to transform the $36.2 trillion residential real estate industry. We're building innovative AI Agents that are streamlining real estate transactions, and we're proud to be working with some of the leading Title & Escrow companies, Lenders, and Investors in the field. Powered by recent advancements in AI, our proprietary data, and our team's extensive real estate experience, we help our customers increase operational efficiency, decrease risks and costs, close more files, spend more time with their customers, and increase revenue through referrals. We're passionate about redefining how real estate transactions are conducted, creating more efficient and effective processes for all stakeholders. If you're excited about the potential of AI in real estate and want to be part of this innovative journey, we invite you to join our team. Discover more about our mission and our impactful work at https://www.rexera.com/ and connect with us on LinkedIn at https://www.linkedin.com/company/rexera/mycompany/. Be part of the team that's building the future of real estate, one AI-powered transaction at a time!

Rexera was founded in 2020 and raised over $6 million in seed money from investors such as Inventus Capital, SVQuad, Dheeraj Pandey, and more. It is led by its co-founders Vishrut Malhotra (ex-BlackRock and AQR), Anton Tonev (ex-Morgan Stanley and AQR), and Atin Hindocha (ex-NetApp and BlackBerry). Rexera is headquartered in California and has offices in India and Bulgaria.

About The Role
We are looking to bring in a strong candidate for the Data & BI Analyst position. This person should have deep SQL skills, be comfortable working with BI platforms (like Looker or Superset), and have solid Excel abilities. They will be responsible for understanding, maintaining, and improving our existing dashboards, fixing any data issues, and implementing new reporting where required.
Key Responsibilities
- Understand, maintain, and improve existing dashboards
- Fix data issues and implement new reporting where required
- Support ongoing dashboard improvements and business reporting needs
- Troubleshoot issues independently
- Strengthen internal BI capabilities

Required Skills & Qualifications
- Minimum of 2 years’ experience in the field
- Strong SQL proficiency: ability to write complex queries and work with large datasets
- Experience with at least one BI platform, e.g., Looker, Superset, Tableau, Power BI
- Ability to understand and fix existing dashboards (e.g., broken metrics, visual issues, incorrect filters)
- Strong Excel skills, including formulas, pivot tables, and data analysis
- Analytical mindset with the ability to understand and interpret data structures
- Clear communication and the ability to document work for technical and non-technical stakeholders

Preferred Qualifications
- Experience working with Superset
- Familiarity with data modeling principles
- Exposure to Python or scripting for automation
- Familiarity with version control tools like Git
- Experience pulling/pushing data using APIs
- Understanding of cloud data platforms like BigQuery, Redshift, or Snowflake
- Experience building data warehouses

Education and Experience
- Bachelor’s degree in Computer Science, Information Technology, Statistics, Mathematics, Engineering, Economics, or a related field
- A Master’s degree or certification in Data Analytics, Business Intelligence, or related fields is a plus
- 2–4 years of relevant experience in data analysis, BI tools, or a reporting function

Location and Shift Timings
- Location: HSR Layout, Bangalore
- Shift timings: 11:00 AM – 8:00 PM IST

What We Offer
- Competitive compensation package with bonus opportunities
- A collaborative, intellectually stimulating environment working with teams in the US, Bulgaria, and India
- Clear pathways for career advancement and internal mobility
- Comprehensive paid time off including vacation, sick leave, and holidays

Why Join Rexera?
At Rexera, we celebrate individuality and encourage innovation. We believe in nurturing your strengths, recognizing your efforts, and ensuring a work environment free from any form of discrimination. Join us to be part of a culture that values diversity, growth, and collaboration.

Skills: Python, version control, APIs, Excel, Superset, Snowflake, data modeling, Python automation, cloud data platforms, BigQuery, data modeling principles, SQL, BI platforms, Looker, Redshift, data analysis

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description
Amazon Transportation is seeking a highly skilled and motivated team player to be part of our dynamic team, which builds software to provide visibility into package movement across the network and helps improve the end-user experience by taking appropriate course corrections based on the recommendations made by our systems. The BIE plays a crucial role: they build and analyze data systems, spot important business patterns, create metrics that measure success, and develop ways to validate our findings. The BIE will work closely with the Transportation teams, including Shiptech, and its internal clients to provide analytical support for the business.

Key job responsibilities
- Manage and execute entire projects or components of large projects from start to finish, including project management, data gathering and manipulation, synthesis and modeling, problem solving, and communication of insights and recommendations
- Contribute to the design, implementation, and delivery of complex BI solutions
- Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions
- Ensure data accuracy by validating data for new and existing tools
- Learn and understand a broad range of Amazon’s data resources and know how, when, and which to use, and which not to use
- Use data mining and model building to create solutions for growing analytical needs
- Drive projects on machine learning, data automation, governance, and standardization
- Develop innovative BI and data analytics solutions leveraging generative AI (AWS Bedrock) to build intelligent knowledge bases and conversational interfaces for enhanced business insights and recommendations
- Understand trends related to Amazon's business and recommend strategies to stakeholders to help drive business growth
- Report key insight trends, using statistical rigor to simplify and inform the larger team of noteworthy story lines
- Respond with urgency to high-priority requests from senior business leaders

Basic Qualifications
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g., t-test, Chi-squared)
- Experience with a scripting language (e.g., Python, Java, or R)

Preferred Qualifications
- Master's degree or advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis and correlation analysis

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2957898
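The qualifications above name the t-test among expected statistical methods. In practice one would reach for `scipy.stats.ttest_ind`, but as a self-contained sketch of what the statistic measures, here is Welch's two-sample t statistic computed from its definition with only the standard library; the control/treated samples are invented for illustration:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    va, vb = variance(a), variance(b)            # sample variances
    se = (va / len(a) + vb / len(b)) ** 0.5      # std. error of the mean difference
    return (mean(a) - mean(b)) / se

# Hypothetical A/B measurement: did the treated group shift upward?
control = [10.1, 9.8, 10.3, 10.0, 9.9]
treated = [11.0, 10.7, 11.2, 10.9, 11.1]
t = welch_t(treated, control)
print(round(t, 2))  # large |t| => means differ by many standard errors
```

Turning the statistic into a p-value additionally requires the t distribution's degrees of freedom (the Welch–Satterthwaite formula), which a stats library would handle.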

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description
Amazon strives to be Earth's most customer-centric company, where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon continues to grow and evolve as a world-class e-commerce platform. The AOP team is an integral part of this and strives to provide analytical capabilities to fulfil all customer processes in the IN-ECCF regions.

The Business Intelligence Engineer will support the analytical requirements of the IN-ECCF Operations Analytics team and will be responsible for conducting deep-dive analyses to solve complex business problems. They will also be responsible for creating robust, automated reporting frameworks to increase visibility into data and enable data-driven decision making. Another key aspect of the job is to unearth insights from data to help the operations team drive process excellence. This position requires excellent statistical knowledge, superior analytical abilities, good knowledge of business intelligence solutions, and exposure to efficient data engineering practices. The BIE will also be a good stakeholder manager, as they will work closely with Ops stakeholders. The candidate should be comfortable with ambiguity, capable of working in a fast-paced environment, continuously improving technical skills to meet business needs, possess strong attention to detail, and be able to collaborate with customers to understand and transform business problems into requirements and deliverables.

Basic Qualifications
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g., t-test, Chi-squared)
- Experience with a scripting language (e.g., Python, Java, or R)

Preferred Qualifications
- Master's degree or advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis and correlation analysis

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ASSPL - Telangana - D82
Job ID: A2877925

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description
Come help Amazon create cutting-edge data and science-driven technologies for delivering packages to the doorstep of our customers! The Last Mile Routing & Planning organization builds the software, algorithms, and tools that make the “magic” of home delivery happen: our flow, sort, dispatch, and routing intelligence systems are responsible for the billions of daily decisions needed to plan and execute safe, efficient, and frustration-free routes for drivers around the world. Our team supports deliveries (and pickups!) for Amazon Logistics, Prime Now, Amazon Flex, Amazon Fresh, Lockers, and other new initiatives.

As part of the Last Mile Science & Technology organization, you’ll partner closely with Product Managers, Data Scientists, and Software Engineers to drive improvements in Amazon's Last Mile delivery network. You will leverage data and analytics to generate insights that accelerate the scale, efficiency, and quality of the routes we build for our drivers through our end-to-end last mile planning systems. You will present your analyses, plans, and recommendations to senior leadership and connect new ideas to drive change. Analytical ingenuity and leadership, business acumen, effective communication capabilities, and the ability to work effectively with cross-functional teams in a fast-paced environment are critical skills for this role.
Responsibilities
- Create actionable business insights through analytical and statistical rigor to answer business questions, drive business decisions, and develop recommendations to improve operations
- Collaborate with Product Managers, software engineering, data science, and data engineering partners to design and develop analytic capabilities
- Define and govern key business metrics, build automated dashboards and analytic self-service capabilities, and engineer data-driven processes that drive business value
- Navigate ambiguity to develop analytic solutions and shape work for junior team members

Basic Qualifications
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g., t-test, Chi-squared)
- Experience with a scripting language (e.g., Python, Java, or R)

Preferred Qualifications
- Master's degree or advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis and correlation analysis

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2903244

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description
The AOP team within Amazon Transportation is looking for an innovative, hands-on, and customer-obsessed Business Intelligence Engineer for its Analytics team. The candidate must be detail oriented, have superior verbal and written communication skills, strong organizational skills, and excellent technical skills, and should be able to juggle multiple tasks at once. The ideal candidate must be able to identify problems before they happen and implement solutions that detect and prevent outages. The candidate must be able to accurately prioritize projects, make sound judgments, work to improve the customer experience, and get the right things done. This job requires you to constantly hit the ground running and have the ability to learn quickly. Primary responsibilities include defining the problem and building analytical frameworks to help operations streamline the process; identifying gaps in the existing process by analyzing data and liaising with the relevant team(s) to plug them; and analyzing data and metrics and sharing updates with internal teams.

Key job responsibilities
- Apply multi-domain/process expertise in day-to-day activities and own the end-to-end roadmap
- Translate complex or ambiguous business problem statements into analysis requirements and maintain a high bar throughout execution
- Define the analytical approach; review and vet it with stakeholders
- Proactively and independently work with stakeholders to construct use cases and associated standardized outputs
- Scale data processes and reports; write queries that clients can update themselves; lead work with data engineering for full-scale automation
- Have a working knowledge of the data available or needed by the wider business for more complex or comparative analysis
- Work with a variety of data sources and pull data using efficient query development that requires less post-processing (e.g., window functions, virt usage)
- When needed, pull data from multiple similar sources to triangulate on data fidelity
- Actively manage the timeline and deliverables of projects, focusing on interactions in the team
- Provide program communications to stakeholders
- Communicate roadblocks to stakeholders and propose solutions
- Represent the team on medium-size analytical projects in your own organization and communicate effectively across teams

A day in the life
- Solve ambiguous analyses with less well-defined inputs and outputs; drive to the heart of the problem and identify root causes
- Handle large data sets in analysis through the use of additional tools
- Derive recommendations from analysis that significantly impact a department, create new processes, or change existing processes
- Understand the basics of test and control comparison; may provide insights through basic statistical measures such as hypothesis testing
- Identify and implement optimal communication mechanisms based on the data set and the stakeholders involved
- Communicate complex analytical insights and business implications effectively

About The Team
The AOP (Analytics Operations and Programs) team's mission is to standardize BI and analytics capabilities and reduce repeat analytics/reporting/BI workload for operations across the IN, AU, BR, MX, SG, AE, EG, and SA marketplaces.
AOP is responsible for providing visibility into operations performance and implementing programs to improve network efficiency and reduce defects. The team has a diverse mix of strong engineers, analysts, and scientists who champion customer obsession. We enable operations to make data-driven decisions by developing near real-time dashboards, self-serve dive-deep capabilities, and advanced analytics capabilities. We identify and implement data-driven metric improvement programs in collaboration (co-owning) with Operations teams.

Basic Qualifications
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with statistical analysis packages such as R, SAS, and Matlab
- Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling

Preferred Qualifications
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ASSPL - Karnataka - B56
Job ID: A2877941
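The responsibilities above call for "efficient query development that requires less post-processing (e.g., window functions)". As an illustration of that idea, here is a window-function query that computes a per-region running total inside the database rather than in client code; SQLite stands in for Redshift, and the `shipments` table and its data are invented for the example (requires a SQLite build with window-function support, 3.25+, which ships with modern Python):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE shipments (region TEXT, day INTEGER, packages INTEGER)")
con.executemany(
    "INSERT INTO shipments VALUES (?, ?, ?)",
    [("IN", 1, 100), ("IN", 2, 120), ("IN", 3, 90),
     ("AU", 1, 40), ("AU", 2, 60), ("AU", 3, 50)],
)

# Running total per region, computed by the query itself -- no post-processing
# loop needed on the client side.
rows = con.execute("""
    SELECT region, day,
           SUM(packages) OVER (PARTITION BY region ORDER BY day) AS running_total
    FROM shipments
    ORDER BY region, day
""").fetchall()
for r in rows:
    print(r)
```

The same `PARTITION BY ... ORDER BY` pattern works unchanged in Redshift, which is one reason window functions travel well between warehouses.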

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

India

Remote


🚀 We’re Hiring: Data Engineer | Join Our Team!
Location: Remote

We’re looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

🧠 What You'll Do
🔹 Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, and Azure Data Factory
🔹 Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable
🔹 Work on modern data lakehouse architectures and contribute to data governance and quality frameworks

🎯 Tech Stack
☁️ Azure | 🧱 Databricks | 🐍 PySpark | 📊 SQL

👤 What We’re Looking For
✅ 3+ years of experience in data engineering or analytics engineering
✅ Hands-on with cloud data platforms and large-scale data processing
✅ Strong problem-solving mindset and a passion for clean, efficient data design

Job Description:
- Minimum 3 years of experience in modern data engineering / data warehousing / data lake technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms.
- 5 years of proven experience with SQL, schema design, and dimensional data modelling
- Solid knowledge of data warehouse best practices, development standards, and methodologies
- Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc.
- Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL
- An independent self-learner with a “let’s get this done” approach and the ability to work in a fast-paced, dynamic environment
- Excellent communication and teamwork abilities

Nice-to-Have Skills:
- Knowledge of Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB
- SAP ECC / S/4 and HANA knowledge
- Intermediate knowledge of Power BI
- Azure DevOps and CI/CD deployments, cloud migration methodologies and processes

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site


At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. In infrastructure engineering at PwC, you will focus on designing and implementing robust and scalable technology infrastructure solutions for clients. Your work will involve network architecture, server management, and cloud computing experience.

Data Modeler Job Description
We are looking for candidates with a strong background in data modeling, metadata management, and data system optimization. You will be responsible for analyzing business needs, developing long-term data models, and ensuring the efficiency and consistency of our data systems.

Key areas of expertise include:
- Analyze and translate business needs into long-term solution data models
- Evaluate existing data systems and recommend improvements
- Define rules to translate and transform data across data models
- Work with the development team to create conceptual data models and data flows
- Develop best practices for data coding to ensure consistency within the system
- Review modifications of existing systems for cross-compatibility
- Implement data strategies and develop physical data models
- Update and optimize local and metadata models
- Utilize canonical data modeling techniques to enhance data system efficiency
- Evaluate implemented data systems for variances, discrepancies, and efficiency
- Troubleshoot and optimize data systems to ensure optimal performance
- Strong expertise in relational and dimensional modeling (OLTP, OLAP)
- Experience with data modeling tools (Erwin, ER/Studio, Visio, PowerDesigner)
- Proficiency in SQL and database management systems (Oracle, SQL Server, MySQL, PostgreSQL)
- Knowledge of NoSQL databases (MongoDB, Cassandra) and their data structures
Experience working with data warehouses and BI tools (Snowflake, Redshift, BigQuery, Tableau, Power BI). Familiarity with ETL processes, data integration, and data governance frameworks. Strong analytical, problem-solving, and communication skills. Qualifications Bachelor's degree in Engineering or a related field. 5 to 9 years of experience in data modeling or a related field. 4+ years of hands-on experience with dimensional and relational data modeling. Expert knowledge of metadata management and related tools. Proficiency with data modeling tools such as Erwin, Power Designer, or Lucid. Knowledge of transactional databases and data warehouses. Preferred Skills Experience in cloud-based data solutions (AWS, Azure, GCP). Knowledge of big data technologies (Hadoop, Spark, Kafka). Understanding of graph databases and real-time data processing. Certifications in data management, modeling, or cloud data engineering. Excellent communication and presentation skills. Strong interpersonal skills to collaborate effectively with various teams. Show more Show less
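The relational and dimensional modeling skills listed above revolve around the star schema: one fact table of measures surrounded by dimension tables of descriptive attributes. A minimal sketch using Python's built-in sqlite3 module (the table and column names are illustrative, not taken from the posting; a production model would live in a warehouse such as Redshift or Snowflake):

```python
import sqlite3

# In-memory database standing in for a real warehouse.
con = sqlite3.connect(":memory:")
cur = con.cursor()

# Dimension tables hold descriptive attributes.
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT)")
cur.execute("CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, day TEXT, month TEXT)")

# The fact table holds measures plus foreign keys to each dimension.
cur.execute("""
    CREATE TABLE fact_sales (
        product_id INTEGER REFERENCES dim_product(product_id),
        date_id    INTEGER REFERENCES dim_date(date_id),
        units      INTEGER,
        revenue    REAL
    )""")

cur.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
cur.execute("INSERT INTO dim_date VALUES (10, '2024-01-01', '2024-01')")
cur.execute("INSERT INTO fact_sales VALUES (1, 10, 5, 49.95)")

# A typical OLAP query: aggregate measures, slice by dimension attributes.
row = cur.execute("""
    SELECT p.category, d.month, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_date d ON d.date_id = f.date_id
    GROUP BY p.category, d.month
""").fetchone()
print(row)  # ('Hardware', '2024-01', 49.95)
```

Queries join the fact table to whichever dimensions the analysis needs, which is what makes dimensional models convenient for OLAP-style slicing and aggregation.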

Posted 2 weeks ago


0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description
At Amazon, we strive to be the most innovative and customer-centric company on the planet. Come work with us to develop innovative products, tools and research-driven solutions in a fast-paced environment by collaborating with smart and passionate leaders, program managers and software developers. This role is based out of our Hyderabad corporate office and is for a passionate, dynamic, analytical, innovative, hands-on, and customer-obsessed Business Analyst. Your team will comprise Business Analysts, Data Engineers, and Business Intelligence Engineers based in Hyderabad, Europe and the US.

Key job responsibilities
The ideal candidate will have experience working with large datasets and distributed computing technologies. The candidate relishes working with large volumes of data, enjoys the challenge of highly complex technical contexts, and is passionate about data and analytics. He/she should be an expert in data modeling, ETL design and business intelligence tools, with hands-on knowledge of columnar databases such as Redshift and other related AWS technologies. He/she passionately partners with customers to identify strategic opportunities in the field of data analysis and engineering. He/she should be a self-starter, comfortable with ambiguity, able to think big (while paying careful attention to detail), and should enjoy working in a fast-paced team that continuously learns and evolves.

A day in the life
This role primarily focuses on deep dives, creating dashboards for the business, and working with different teams to develop and track metrics and bridges. Design, develop and maintain scalable, automated, user-friendly systems, reports, dashboards, etc. that will support our analytical and business needs. In-depth research of drivers of the Localization business. Analyze key metrics to uncover trends and root causes of issues. Suggest and build new metrics and analyses that enable better perspective on the business. Capture the right metrics to influence stakeholders and measure success. Develop domain expertise and apply it to operational problems to find solutions. Work across teams with different stakeholders to prioritize and deliver data and reporting. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Maintain BI architecture including our AWS account, database and various analytics tools.

Basic Qualifications
SQL mastery is a must. Some scripting knowledge (Python, R, Scala). Stakeholder management. Dashboarding (Excel, QuickSight, Power BI). Data analysis and statistics. KPI design.

Preferred Qualifications
Power BI and Power Pivot in Excel. AWS fundamentals (IAM, S3, ...). Python. Apache Spark / Scala.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2942382
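Several listings on this page ask for KPI design and metric tracking. As a small illustration of the idea (the event records are invented for the example; real inputs would come from a warehouse query), here is a daily roll-up using only the Python standard library:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical order events; in practice these would come from a
# Redshift query rather than an in-memory list.
events = [
    {"day": "2024-06-01", "orders": 120},
    {"day": "2024-06-01", "orders": 80},
    {"day": "2024-06-02", "orders": 150},
]

# Roll events up into a daily KPI: total orders per day.
# groupby requires the input to be sorted by the grouping key.
events.sort(key=itemgetter("day"))
daily_orders = {
    day: sum(e["orders"] for e in group)
    for day, group in groupby(events, key=itemgetter("day"))
}
print(daily_orders)  # {'2024-06-01': 200, '2024-06-02': 150}
```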

Posted 2 weeks ago


5.0 - 7.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Job Title: Assistant Manager - Data Engineer Location: Andheri (Mumbai) Job Type: Full-Time Department: IT Position Overview: The Assistant Manager - Data Engineer will play a pivotal role in the design, development, and maintenance of data pipelines that ensure the efficiency, scalability, and reliability of our data infrastructure. This role will involve optimizing and automating ETL/ELT processes, as well as developing and refining databases, data warehouses, and data lakes. As an Assistant Manager, you will also mentor junior engineers and collaborate closely with cross-functional teams to support business goals and drive data excellence. Key Responsibilities: Data Pipeline Development: Design, build, and maintain efficient, scalable, and reliable data pipelines to support data analytics, reporting, and business intelligence initiatives. Database and Data Warehouse Management: Develop, optimize, and manage databases, data warehouses, and data lakes to enhance data accessibility and business decision-making. ETL/ELT Optimization: Automate and optimize data extraction, transformation, and loading (ETL/ELT) processes, ensuring efficient data flow and improved system performance. Data Modeling & Architecture: Develop and maintain data models to enable structured data storage, analysis, and reporting in alignment with business needs. Workflow Management Systems: Implement, optimize, and maintain workflow management tools (e.g., Apache Airflow, Talend) to streamline data engineering tasks and improve operational efficiency. Team Leadership & Mentorship: Guide, mentor, and support junior data engineers to enhance their skills and contribute effectively to projects. Collaboration with Cross-Functional Teams: Work closely with data scientists, analysts, business stakeholders, and IT teams to understand requirements and deliver solutions that align with business objectives. 
Performance Optimization: Continuously monitor and optimize data pipelines and storage solutions to ensure maximum performance and cost efficiency. Documentation & Process Improvement: Create and maintain documentation for data models, workflows, and systems. Contribute to the continuous improvement of data engineering practices. Qualifications: Educational Background: B.E., B.Tech., MCA. Professional Experience: At least 5 to 7 years of experience in a data engineering or similar role, with hands-on experience in building and optimizing data pipelines, ETL processes, and database management. Technical Skills: Proficiency in Python and SQL for data processing, transformation, and querying. Experience with modern data warehousing solutions (e.g., Amazon Redshift, Snowflake, Google BigQuery, Azure Data Lake). Strong background in data modeling (dimensional, relational, star/snowflake schema). Hands-on experience with ETL tools (e.g., Apache Airflow, Talend, Informatica) and workflow management systems. Familiarity with cloud platforms (AWS, Azure, Google Cloud) and distributed data processing frameworks (e.g., Apache Spark). Data Visualization & Exploration: Familiarity with data visualization tools (e.g., Tableau, Power BI) for analysis and reporting. Leadership Skills: Demonstrated ability to manage and mentor a team of junior data engineers while fostering a collaborative and innovative work environment. Problem-Solving & Analytical Skills: Strong analytical and troubleshooting skills with the ability to optimize complex data systems for performance and scalability. Experience in Pharma/Healthcare (preferred but not required): Knowledge of the pharmaceutical industry and experience with data in regulated environments. Desired Skills: Familiarity with industry-specific data standards and regulations. Experience working with machine learning models or data science pipelines is a plus.
Strong communication skills with the ability to present technical data to non-technical stakeholders. Why Join Us: Impactful Work: Contribute to the pharmaceutical industry by improving data-driven decisions that impact public health. Career Growth: Opportunities to develop professionally in a fast-growing industry and company. Collaborative Environment: Work with a dynamic and talented team of engineers, data scientists, and business stakeholders. Competitive Benefits: Competitive salary, health benefits and more.
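The ETL/ELT work described above follows the extract-transform-load pattern. A minimal, dependency-free sketch of that pattern (the function names, CSV layout, and dict-as-warehouse are illustrative; in practice each stage would run as a task in an orchestrator such as Apache Airflow or Talend):

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    # Extract: parse raw source data into records.
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    # Transform: normalize values and drop malformed rows.
    out = []
    for r in rows:
        try:
            out.append({"sku": r["sku"].strip().upper(), "qty": int(r["qty"])})
        except (KeyError, ValueError):
            continue  # skip rows that fail validation
    return out

def load(rows: list[dict], target: dict) -> None:
    # Load: accumulate into the target store (a dict standing in
    # for a warehouse table).
    for r in rows:
        target[r["sku"]] = target.get(r["sku"], 0) + r["qty"]

raw = "sku,qty\n a12 ,3\nb7,oops\nA12,2\n"
warehouse: dict[str, int] = {}
load(transform(extract(raw)), warehouse)
print(warehouse)  # {'A12': 5}  (the 'b7,oops' row is dropped as invalid)
```

Keeping the three stages as separate functions mirrors how orchestrators model pipelines: each stage can be retried, tested, and monitored independently.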

Posted 2 weeks ago


4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description Amazon’s eCommerce Foundation (eCF) organization is responsible for the core components that drive the Amazon website and customer experience. Serving millions of customer page views and orders per day, eCF builds for scale. As an organization within eCF, the Business Data Technologies (BDT) group is no exception. We collect petabytes of data from thousands of data sources inside and outside Amazon, including the Amazon catalog system, inventory system, customer order system, and page views on the website. We provide interfaces for our internal customers to access and query the data hundreds of thousands of times per day. We build scalable solutions that grow with the Amazon business. The BDT team is building an enterprise-wide Big Data Marketplace leveraging AWS technologies. We work closely with AWS teams like EMR/Spark, Redshift, Athena, S3 and others. We are developing innovative products including the next generation of data catalog, data discovery engine, data transformation platform, and more, with state-of-the-art user experience. We’re looking for top front-end engineers to build them from the ground up. This is a hands-on position where you will do everything from designing & building UI components that are used by teams across SDO. You will also mentor engineers and work with the most sophisticated customers in the business to help them get the best results. You need to not only be a top front-end developer with excellent programming skills, an understanding of scaling in design and parallelization, and a stellar record of delivery, but also excel at leadership and customer obsession, and have a real passion for massive-scale computing. Come help us build for the future of Data!
Key job responsibilities
Your responsibilities will include: Driving strategic decision-making on critical UI feature development, from design through implementation to testing and deployment. Coaching and mentoring peers who share a common passion for front-end work but may have little to no experience in its development. Maintaining and raising the technical bar across all software development, in particular around front-end work, as you help foster a front-end engineering community. Working closely with software and science managers, product owners, and business stakeholders across different teams on a daily basis to define system architecture, elaborate user stories, and discuss business and technical trade-offs. Collaborating with experienced cross-disciplinary Amazonians to conceive, design, and bring innovative products and services to market. Driving big ideas to improve the customer experience we deliver across multiple touch points. Developing high-quality, testable, and maintainable user interfaces. Working in an agile environment to deliver high-quality software.

A day in the life
This front-end engineer (FEE) in the Data Comprehension team would lead product and front-end initiatives within the team and beyond by partnering with internal and external stakeholders and teams. This FEE would need to come up with technical strategies and designs for complex customer problems by leveraging out-of-the-box solutions to enable faster rollouts. They will deliver working front-end systems consisting of multiple features spanning the full software lifecycle, including design, implementation, testing, deployment, and maintenance strategy. The problems they need to solve do not start with a defined technology strategy, and may have conflicting constraints. As the FE technology lead in the team, they will review other SDEs’ work to ensure it fits into the bigger picture and is well designed, extensible, performant, and secure.
The FEE will solve challenging problems at Amazon and drive the delivery of large frontend features from planning through implementation while managing ambiguity and the pace of a company where development cycles are measured in weeks, not years.

Basic Qualifications
4+ years of non-internship professional front-end, web or mobile software development experience using JavaScript, HTML and CSS. Experience using JavaScript frameworks such as Angular and React. Bachelor's degree in computer science or equivalent.

Preferred Qualifications
Experience with common front-end technologies such as HTML, CSS, JS, TypeScript, and Node. Ability to decompose a problem into clear system, API, and UX design actions. Passion for operational excellence.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2943508

Posted 2 weeks ago


4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description Amazon’s eCommerce Foundation (eCF) organization is responsible for the core components that drive the Amazon website and customer experience. Serving millions of customer page views and orders per day, eCF builds for scale. As an organization within eCF, the Business Data Technologies (BDT) group is no exception. We collect petabytes of data from thousands of data sources inside and outside Amazon, including the Amazon catalog system, inventory system, customer order system, and page views on the website. We provide interfaces for our internal customers to access and query the data hundreds of thousands of times per day. We build scalable solutions that grow with the Amazon business. The BDT team is building an enterprise-wide Big Data Marketplace leveraging AWS technologies. We work closely with AWS teams like EMR/Spark, Redshift, Athena, S3 and others. We are developing innovative products including the next generation of data catalog, data discovery engine, data transformation platform, and more, with state-of-the-art user experience. We’re looking for top front-end engineers to build them from the ground up. This is a hands-on position where you will do everything from designing & building UI components that are used by teams across SDO. You will also mentor engineers and work with the most sophisticated customers in the business to help them get the best results. You need to not only be a top front-end developer with excellent programming skills, an understanding of scaling in design and parallelization, and a stellar record of delivery, but also excel at leadership and customer obsession, and have a real passion for massive-scale computing. Come help us build for the future of Data!
Key job responsibilities
Your responsibilities will include: Driving strategic decision-making on critical UI feature development, from design through implementation to testing and deployment. Coaching and mentoring peers who share a common passion for front-end work but may have little to no experience in its development. Maintaining and raising the technical bar across all software development, in particular around front-end work, as you help foster a front-end engineering community. Working closely with software and science managers, product owners, and business stakeholders across different teams on a daily basis to define system architecture, elaborate user stories, and discuss business and technical trade-offs. Collaborating with experienced cross-disciplinary Amazonians to conceive, design, and bring innovative products and services to market. Driving big ideas to improve the customer experience we deliver across multiple touch points. Developing high-quality, testable, and maintainable user interfaces. Working in an agile environment to deliver high-quality software.

A day in the life
This front-end engineer (FEE) in the Data Comprehension team would lead product and front-end initiatives within the team and beyond by partnering with internal and external stakeholders and teams. This FEE would need to come up with technical strategies and designs for complex customer problems by leveraging out-of-the-box solutions to enable faster rollouts. They will deliver working front-end systems consisting of multiple features spanning the full software lifecycle, including design, implementation, testing, deployment, and maintenance strategy. The problems they need to solve do not start with a defined technology strategy, and may have conflicting constraints. As the FE technology lead in the team, they will review other SDEs’ work to ensure it fits into the bigger picture and is well designed, extensible, performant, and secure.
The FEE will solve challenging problems at Amazon and drive the delivery of large frontend features from planning through implementation while managing ambiguity and the pace of a company where development cycles are measured in weeks, not years.

Basic Qualifications
4+ years of non-internship professional front-end, web or mobile software development experience using JavaScript, HTML and CSS. Experience using JavaScript frameworks such as Angular and React. Bachelor's degree in computer science or equivalent.

Preferred Qualifications
Experience with common front-end technologies such as HTML, CSS, JS, TypeScript, and Node. Ability to decompose a problem into clear system, API, and UX design actions. Passion for operational excellence.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2943511

Posted 2 weeks ago


3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description Does the prospect of dealing with massive volumes of data excite you? Do you want to lead scalable data engineering solutions using AWS technologies? Do you want to create the next-generation tools for intuitive data access? Amazon's Finance Tech team needs a Data Engineer to shape the future of the Amazon finance data platform by working with stakeholders in North America, Asia and Europe. The team is committed to building the next generation big data platform that will be one of the world's largest finance data warehouses by volume to support Amazon's rapidly growing and dynamic businesses, and use it to deliver the BI applications which will have an immediate influence on day-to-day decision making. Members of the team will be challenged to innovate using the latest big data techniques. We are looking for a passionate data engineer to develop a robust, scalable data model and optimize the consumption of data sources required to ensure accurate and timely reporting for the Amazon businesses. You will share in the ownership of the technical vision and direction for advanced reporting and insight products. You will work with top-notch technical professionals developing complex systems at scale and with a focus on sustained operational excellence. We are looking for people who are motivated by thinking big, moving fast, and exploring business insights. If you love to implement solutions to hard problems while working hard, having fun, and making history, this may be the opportunity for you. Key job responsibilities Design, implement, and support a platform providing secured access to large datasets. Interface with tax, finance and accounting customers, gathering requirements and delivering complete BI solutions. Collaborate with Finance Analysts to recognize and help adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Model data and metadata to support ad-hoc and pre-built reporting. 
Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions. Tune application and query performance using profiling tools and SQL. Analyze and solve problems at their root, stepping back to understand the broader context. Learn and understand a broad range of Amazon’s data resources and know when, how, and which to use and which not to use. Keep up to date with advances in big data technologies and run pilots to design the data architecture to scale with the increased data volume using AWS. Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for datasets. Triage many possible courses of action in a high-ambiguity environment, making use of both quantitative analysis and business judgment.

Basic Qualifications
3+ years of data engineering experience. Experience with data modeling, warehousing and building ETL pipelines. Experience with SQL.

Preferred Qualifications
Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions. Experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases).

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2957000
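One of the responsibilities above is tuning query performance using profiling tools and SQL. The general workflow (inspect the query plan, add an index on the filtered column, confirm the plan changed) can be sketched with SQLite's EXPLAIN QUERY PLAN; the orders table here is hypothetical, and on Redshift you would use EXPLAIN and the system tables instead:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(i, i % 100, float(i)) for i in range(1000)])

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"

# Before indexing: the planner falls back to a full table scan.
# (Each plan row is (id, parent, notused, detail); detail is the text.)
plan_before = cur.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_before[0][3])  # e.g. 'SCAN orders'

# Add an index on the filtered column, then re-check the plan.
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = cur.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_after[0][3])  # e.g. 'SEARCH orders USING INDEX idx_orders_customer (customer_id=?)'
```

The exact plan wording varies by SQLite version, but the shift from a scan to an index search is the signal a query tuner looks for.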

Posted 2 weeks ago


5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


DAZN is a tech-first sport streaming platform that reaches millions of users every week. We are challenging a traditional industry and giving power back to the fans. Our new Hyderabad tech hub will be the engine that drives us forward to the future. We’re pushing boundaries and doing things no one has done before. Here, you have the opportunity to make your mark and the power to make change happen - to make a difference for our customers. When you join DAZN you will work on projects that impact millions of lives thanks to your critical contributions to our global products.

This is the perfect place to work if you are passionate about technology and want an opportunity to use your creativity to help grow and scale a global range of IT systems, infrastructure, and IT services. Our cutting-edge technology allows us to stream sports content to millions of concurrent viewers globally across multiple platforms and devices. DAZN’s cloud-based architecture unifies a range of technologies to deliver a seamless user experience and support a global user base and company infrastructure. This role will be based in our brand-new Hyderabad office. Join us in India’s beautiful “City of Pearls” and bring your ambition to life.

We are seeking a skilled and experienced Data Engineer to join our fast-paced and innovative Data Science team. This role involves building and maintaining data pipelines across multiple cloud-based data platforms.

Requirements: A minimum of 5 years of total experience, with at least 3–4 years specifically in Data Engineering on a cloud platform.

Key Skills & Experience:
Proficiency with AWS services such as Glue, Redshift, S3, Lambda, RDS, Amazon Aurora, DynamoDB, EMR, Athena, Data Pipeline, and Batch. Strong expertise in SQL and Python; dbt and Snowflake; OpenSearch, Apache NiFi, and Apache Kafka. In-depth knowledge of ETL data patterns and Spark-based ETL pipelines. Advanced skills in infrastructure provisioning using Terraform and other Infrastructure-as-Code (IaC) tools. Hands-on experience with cloud-native delivery models, including PaaS, IaaS, and SaaS. Proficiency in Kubernetes, container orchestration, and CI/CD pipelines. Familiarity with GitHub Actions, GitLab, and other leading DevOps and CI/CD solutions. Experience with orchestration tools such as Apache Airflow and serverless/FaaS services. Exposure to NoSQL databases is a plus.

Posted 2 weeks ago


1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Description Have you ever thought about what it takes to detect and prevent fraudulent activity among hundreds of millions of e-commerce transactions across the globe? What would you do to increase trust in an online marketplace where millions of buyers and sellers transact? How would you build systems that evolve over time to proactively identify and neutralize new and emerging fraud threats? Our mission in Buyer Risk Prevention is to make Amazon the safest place to transact online. Buyer Risk Prevention safeguards every financial transaction across all Amazon sites, while striving to ensure that these efforts are transparent to our legitimate customers. As such, Buyer Risk Prevention designs and builds the software systems, risk models and operational processes that minimize risk and maximize trust in Amazon.com. As a Business Analyst in Buyer Risk Prevention, you will be responsible for analyzing terabytes of data to identify specific instances of risk, broader risk trends and points of customer friction, and for developing scalable solutions for prevention. You will need to collaborate effectively with business and product leaders within BRP and cross-functional teams to solve problems, create operational efficiencies, and deliver successfully against high organizational standards. You should be able to apply a breadth of tools, data sources, and analytical techniques to answer a wide range of high-impact business questions and proactively present new insights in a concise and effective manner. In addition, you will be responsible for building a robust set of operational and business metrics and will utilize these metrics to determine improvement opportunities. You should be an effective communicator capable of independently driving issues to resolution and communicating insights to non-technical audiences. This is a high-impact role with goals that directly impact the bottom line of the business.
Key job responsibilities
Understand the various operations across Payment Risk. Design and develop highly available dashboards and metrics using SQL and Excel/Tableau/QuickSight. Perform business analysis and data queries using scripting languages like R, Python, etc. Understand the requirements of stakeholders and map them to the data sources/data warehouse. Own the delivery and backup of periodic metrics and dashboards to the leadership team. Draw inferences and conclusions, create dashboards and visualizations of processed data, and identify trends and anomalies. Execute high-priority (i.e. cross-functional, high-impact) projects to create robust, scalable analytics solutions and frameworks and to improve operations performance, with the help of Analytics/BIE managers. Work closely with internal stakeholders such as business teams, engineering teams, and partner teams and align them with respect to your focus area. Execute analytical projects with an understanding of analytical methods (ANOVA, distribution theory, regression, forecasting, machine learning techniques, etc.). Draw inferences and insights from the data using EDA and data manipulation with advanced SQL for business reviews.

Basic Qualifications
Bachelor's degree in business, engineering, statistics, computer science, mathematics or a related field. 1+ years of tax, finance or related analytical field experience. Experience creating complex SQL queries joining multiple datasets, and with ETL/DW concepts. Experience defining requirements and using data and metrics to draw business insights. Experience demonstrating problem solving and root cause analysis. Experience with reporting and data visualization tools such as QuickSight, Tableau, Power BI or other BI packages. Knowledge of Microsoft Excel at an advanced level, including pivot tables, macros, index/match, vlookup, VBA, data links, etc. Experience using databases with large-scale data sets. 1+ years of experience working in an Analytics/Business Intelligence environment with prior experience of design and execution of analytical projects.

Preferred Qualifications
Experience in Amazon Redshift and other AWS technologies. Experience scripting for automation (e.g., Python, Perl, Ruby). Experience using Python or R for data analysis, or statistical tools such as SAS. Experience in e-commerce/online companies in fraud/risk control functions. Analytical mindset and ability to see the big picture and influence others. Detail-oriented with an aptitude for solving unstructured problems.
The role will require the ability to extract data from various sources and to design, construct, and execute complex analyses that ultimately produce data and reports that help solve the business problem. Good oral, written and presentation skills, combined with the ability to take part in group discussions and explain complex solutions. Ability to apply analytical, computer, statistical and quantitative problem-solving skills. Ability to work effectively in a multi-task, high-volume environment. Ability to be adaptable and flexible in responding to deadlines and workflow fluctuations.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - BLR 14 SEZ
Job ID: A2980661
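The anomaly-identification responsibility described above can be illustrated with a simple z-score check using only the Python standard library (the transaction counts are invented for the example; a real analysis would pull them from Redshift):

```python
import statistics

# Hypothetical daily transaction counts; day six spikes well above trend.
daily_txns = [100, 98, 103, 101, 97, 250, 99, 102]

mean = statistics.fmean(daily_txns)
stdev = statistics.stdev(daily_txns)

# Flag values whose z-score exceeds a simple threshold of 2.
anomalies = [x for x in daily_txns if abs(x - mean) / stdev > 2]
print(anomalies)  # [250]
```

In practice an analyst would layer this kind of check with seasonality adjustments and business context, but the core idea of flagging deviations from a baseline is the same.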

Posted 2 weeks ago


4.0 years

4 - 8 Lacs

Hyderābād

On-site


Location: Hyderabad, Telangana, India | Category: Accounting / Finance Careers | Job Id: JREQ188357 | Job Type: Full time, Hybrid

We are seeking a highly experienced Senior Analyst to guide our global, regional, and functional commercial policy implementation, reporting, and governance projects. The successful candidate will contribute by building metrics and analyzing processes, workflows, and systems with the objective of identifying opportunities for either improvement or automation. Our ideal candidate is comfortable working with all levels of management to gain an in-depth understanding of our strategy and to improve customer experience. This role requires close collaboration with product, segment partners, product marketing, customer-to-cash, sales, marketing, technology, and finance areas. This position resides in the Commercial Excellence organization and reports to the Manager of Commercial Policy Reporting & Governance.

About the Role

In this role as a Senior Analyst, Commercial Policy Reporting & Governance, you will:
- Improve, execute, and effectively communicate significant analyses that identify meaningful trends and opportunities across the business.
- Participate in regular meetings with stakeholders and management, assessing and addressing issues to identify and implement improvements toward efficient operations.
- Provide strong and timely business analytic support to business partners and various organizational stakeholders.
- Develop actionable road maps for improving workflows and processes.
- Work effectively with partners across the business to develop processes for capturing project activity, creating metrics-driven dashboards for specific use cases and behaviors, and evaluating the data for process improvement recommendations.
- Collaborate with Project Leads, Managers, and Business partners to determine schedules and project timelines, ensuring alignment across all areas of the business.
- Drive commercial strategy and policy alignment with fast-changing attributes, while managing reporting, tracking, and governance best practices.
- Identify, assess, manage, and communicate risks, laying out mitigation plans and course corrections where appropriate.
- Provide insightful diagnostics and actionable insights to the leadership team proactively, by spotting trends, questioning data, and asking questions to understand underlying drivers.
- Proactively identify trends for future governance and reporting needs, presenting ideas to CE Leadership for new areas of opportunity to drive value.
- Prepare, analyze, and summarize weekly, monthly, and periodic operational results for use by key stakeholders, creating reports, specifications, instructions, and flowcharts.
- Conduct the full lifecycle of analytics projects, from project requirements documentation to design and execution, including pulling, manipulating, and exporting data.

About You

You’re a fit for the role of Senior Analyst, Commercial Policy Reporting & Governance, if your background includes:
- Bachelor’s degree required, preferably in Computer Science, Mathematics, Business Management, or Economics.
- 4 to 6+ years of professional experience in a similar role.
- Willingness to work from 2 pm to 11 pm IST, in hybrid mode (work from office twice a week).
- Proven project management skills related to planning and overseeing projects from initial ideation through completion.
- Proven ability to take complex and disparate data sets and create streamlined, efficient data lakes with a connected, routinized cadence.
- Advanced-level skills in the following systems: Power BI, Snowflake, Redshift, Salesforce.com, EDW, Excel, MS PowerPoint, and Alteryx or similar data transformation middleware.
- Familiarity with contract lifecycle management tools like Conga CLM, HighQ CLM, etc.
- Ability to quickly draw insights into trends in data and make recommendations to drive productivity and efficiency.
- Exceptional verbal, written, and visual communication skills.
- Experience managing multiple projects simultaneously within a matrix organization, adhering to deadlines in a fast-paced environment.
- Ability to deploy influencing techniques to drive cross-functional alignment and change across a broad audience.
- Flexibility with working hours to support the ever-changing demands of the business.

#LI-GS2

What’s in it For You?
- Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles, while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensure you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Industry-Competitive Benefits: We offer comprehensive benefit plans including flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more.
We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
- Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
- Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us

Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals.
To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law.

Posted 2 weeks ago


4.0 - 6.0 years

0 Lacs

Hyderābād

On-site


India - Hyderabad | JOB ID: R-216605 | ADDITIONAL LOCATIONS: India - Hyderabad | WORK LOCATION TYPE: On Site | DATE POSTED: Jun. 03, 2025 | CATEGORY: Information Systems

Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Specialist IS Business Analyst, Conversational AI

What you will do

Let’s do this. Let’s change the world. In this vital role you will lead the engagement model between Amgen's Technology organization and Global Commercial Operations.
- Collaborate with Business SMEs, Data Engineers, and Product Managers to lead business analysis activities, ensuring alignment with engineering and product goals on the Conversational AI product team
- Become a domain authority in Conversational AI technology capabilities by researching, deploying, and sustaining features built according to Amgen’s Quality System
- Lead the voice-of-the-customer assessment to define business processes and product needs
- Work with Product Managers and customers to define scope and value for new developments
- Collaborate with Engineering and Product Management to prioritize release scopes and refine the Product backlog
- Ensure non-functional requirements are included and prioritized in the Product and Release Backlogs
- Facilitate the breakdown of Epics into Features and sprint-sized User Stories, and participate in backlog reviews with the development team
- Clearly express features in User Stories/requirements so all team members and partners understand how they fit into the product backlog
- Ensure Acceptance Criteria and Definition of Done are well-defined
- Work closely with UX to align technical requirements, scenarios, and business process maps with User Experience designs
- Develop and deliver effective product demonstrations for internal and external partners
- Maintain accurate documentation of configurations, processes, and changes
- Understand end-to-end data pipeline design and dataflow
- Apply knowledge of data structures to diagnose data issues for resolution by the data engineering team
- Implement and supervise performance of Extract, Transform, and Load (ETL) jobs

What we expect of you

We are all different, yet we all use our unique contributions to serve patients. We are seeking a highly skilled and experienced Specialist IS Business Analyst with a passion for innovation and a collaborative working style that partners effectively with business and technology leaders, with the following qualifications.
Basic Qualifications:
- Master’s degree with 4-6 years of experience in Information Systems, OR Bachelor’s degree with 6-8 years of experience in Information Systems, OR Diploma with 10-12 years of experience in Information Systems
- Experience writing user requirements and acceptance criteria
- Affinity for working in a DevOps environment and an Agile mindset
- Ability to work in a team environment, effectively interacting with others
- Ability to meet deadlines and schedules and be accountable

Preferred Qualifications:

Must-Have Skills:
- Excellent problem-solving skills and a passion for solving complex challenges in AI-driven technologies
- Experience with Agile software development methodologies (Scrum)
- Superb communication skills and the ability to work with senior leadership with confidence and clarity
- Experience writing user requirements and acceptance criteria in agile project management systems such as JIRA
- Experience managing product features for PI planning and developing product roadmaps and user journeys

Good-to-Have Skills:
- Demonstrated expertise in data and analytics and related technology concepts
- Understanding of data and analytics software systems strategy, governance, and infrastructure
- Familiarity with low-code/no-code test automation software
- Technical thought leadership; able to communicate technical or complex subject matter in business terms
- Jira Align experience
- Experience with DevOps, Continuous Integration, and Continuous Delivery methodology

Soft Skills:
- Able to work under minimal supervision
- Excellent analytical and gap/fit assessment skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

Technical Skills:
- Experience with cloud-based data technologies (e.g., Databricks, Redshift, S3 buckets)
- AWS or similar cloud-based platforms
- Experience with design patterns, data structures, and test-driven development
- Knowledge of NLP techniques for text analysis and sentiment analysis

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team: careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 2 weeks ago


5.0 years

0 Lacs

Hyderābād

Remote


Overview: As an Analyst, Data Modeler, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analysing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse that satisfy project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities:
- Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies.
- Govern data design/modeling: documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
- Provides and/or supports data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
- Supports assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools.
- Contributes to project cost estimates, working with senior members of the team to evaluate the size and complexity of the changes or new development.
- Ensures physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
- Develops a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
- Partners with IT, data engineering, and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy by design principles (PII management), all linked across fundamental identity foundations.
- Drives collaborative reviews of design, code, data, and security feature implementations performed by data engineers to drive data product development.
- Assists with data planning, sourcing, collection, profiling, and transformation.
- Creates source-to-target mappings for ETL and BI developers.
- Shows expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data streaming (consumption/production); data in transit.
- Develops reusable data models based on cloud-centric, code-first approaches to data management and cleansing.
- Partners with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
- Supports data lineage and the mapping of source system data to canonical data stores for research, analysis, and productization.
Qualifications:
- Bachelor’s degree required in Computer Science, Data Management/Analytics/Science, Information Systems, Software Engineering, or a related technology discipline.
- 5+ years of overall technology experience, including at least 2+ years of data modeling and systems architecture.
- Around 2+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
- 2+ years of experience developing enterprise data models.
- Experience building solutions in the retail or supply chain space.
- Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models).
- Experience with integration of multi-cloud services (Azure) with on-premises technologies.
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake.
- Experience with version control systems like GitHub and deployment & CI tools.
- Experience with Azure Data Factory, Databricks, and Azure Machine Learning is a plus.
- Experience with metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts.
- Familiarity with business intelligence tools (such as Power BI).
- Excellent verbal and written communication and collaboration skills.
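The conceptual-to-physical modeling work this listing describes can be sketched as a minimal star schema. The DDL below uses hypothetical entities and runs in SQLite purely for portability; an MPP warehouse such as Redshift, Synapse, or Snowflake would add distribution or clustering keys on top of the same shape:

```python
import sqlite3

# Minimal star-schema sketch: one fact table with foreign keys to two
# dimension tables, the basic shape of the models this role would produce.
# All table and column names are hypothetical.
ddl = """
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, sku TEXT);
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, iso_date TEXT);
CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    date_key    INTEGER REFERENCES dim_date(date_key),
    units       INTEGER,
    revenue     REAL
);
"""
conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['dim_date', 'dim_product', 'fact_sales']
```

Keeping slowly-changing descriptive attributes in the dimensions and only keys and measures in the fact table keeps the fact table narrow, which is the property columnar MPP warehouses exploit for scan performance.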

Posted 2 weeks ago


2.0 years

5 - 8 Lacs

Hyderābād

On-site


- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with a scripting language (e.g., Python, Java, or R)
- Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
- Experience applying basic statistical methods (e.g., regression) to difficult business problems
- Experience gathering business requirements, using industry-standard business intelligence tool(s) to extract data, formulate metrics, and build reports
- Track record of generating key business insights and collaborating with stakeholders

“When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that’s what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product” – Jeff Bezos

Amazon.com’s success is built on a foundation of customer obsession. Have you ever thought about what it takes to deliver millions of packages to Amazon customers seamlessly every day, like clockwork? To make that happen, billions of decisions get made by machines and humans behind those millions of packages. What is the accuracy of the customer-provided address? Do we know the exact location of the address on the map? Is there a safe place? Can we make an unattended delivery? Would a signature be required? Is the address a commercial property? Do we know the open business hours of the address? What if the customer is not home? Is there an alternate delivery address? Does the customer have any special preference? What other addresses also have packages to be delivered on the same day? Are we optimizing the delivery associate’s route? Does the delivery associate know the locality well enough? Is there an access code to get inside the building? And the list goes on.
At the core of all of it lies the quality of the underlying data that can help make those decisions in time. The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams, and will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both the stamina and mental quickness to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and the executive presence to get in front of VPs and SVPs across Amazon will be imperative.

Key Strategic Objectives: Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer on Amazon's last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute against a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work effectively in a matrix organization, and able to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units to deliver large-scale, high-visibility, high-impact projects. You will lead by example, being just as passionate about operational performance and predictability as about all other aspects of the customer experience.
The successful candidate will be able to:
- Effectively manage customer expectations and resolve conflicts that balance client and company needs.
- Develop processes to effectively maintain and disseminate project information to stakeholders.
- Succeed in a delivery-focused environment, determining the right processes to make the team successful. This opportunity requires excellent technical, problem-solving, and communication skills; the candidate is not just a policy maker/spokesperson but drives to get things done.
- Possess superior analytical abilities and judgment. Use quantitative and qualitative data to prioritize and influence; show creativity, experimentation, and innovation; and drive projects with urgency in this fast-paced environment.
- Partner with key stakeholders to develop the vision and strategy for customer experience on our platforms. Influence product roadmaps based on this strategy along with your teams.
- Support the scalable growth of the company by developing and enabling the success of the Operations leadership team.
- Serve as a role model for Amazon Leadership Principles inside and outside the organization.
- Actively seek to implement and distribute best practices across the operation.

Knowledge of how to improve code quality and optimize BI processes (e.g., speed, cost, reliability). Knowledge of data modeling and data pipeline design. Experience designing and implementing custom reporting systems using automation tools.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 2 weeks ago


1.0 years

4 - 10 Lacs

Hyderābād

On-site


- Bachelor's degree or equivalent
- Experience defining requirements and using data and metrics to draw business insights
- Experience with SQL or ETL
- 1+ years of Excel or Tableau (data manipulation, macros, charts, and pivot tables) experience

Amazon's Last Mile Analytics & Quality (LMAQ) Maps team is building data-driven solutions to power the Last Mile delivery network that will serve hundreds of millions of customers worldwide. The Analytics team develops systems that model and optimize delivery operations using complex navigation and mapping datasets. The team specializes in processing and analyzing large-scale map and routing data across global markets. We work cross-functionally to analyze and enhance last mile delivery network efficiency and service quality through sophisticated data processing pipelines.

Our team is seeking a passionate and data-driven Business Analyst with experience handling large-scale datasets to lead our efforts in enhancing driver experience and operational efficiency through advanced business analytics. This role is inherently cross-functional: you will work closely with engineering, operations, product teams, and other stakeholders on last mile delivery challenges. Through close collaboration, and by conducting analysis using statistical techniques and data visualizations, you will drive these challenges to resolution. The ideal candidate has a background in business analytics, experience with large-scale data processing, an understanding of logistics, project management skills, and a strong customer-centric approach to drive improvements in last-mile delivery. This job requires strong communication skills and the ability to work independently in an evolving environment. Passion and drive for customer service is a must.
Key job responsibilities
• Analyze complex business problems and develop data-driven solutions using SQL, Python, or R
• Handle and analyze large-scale navigation datasets, map datasets, and map attributes
• Run and automate ETL jobs for processing and integrating large-scale datasets
• Implement quality control measures for navigation and mapping data
• Develop dashboards and reports using tools like Tableau/Power BI to track key performance metrics
• Perform statistical analysis and create predictive models
• Design and implement data quality checks and validation processes
• Collaborate with stakeholders to identify business needs and opportunities
• Lead process improvement initiatives
• Translate business requirements into technical specifications
• Present findings and recommendations to leadership

Preferred Qualifications
- Experience with Amazon Redshift and other AWS technologies
- Experience using databases with large-scale data sets
- Experience with reporting and data visualization tools such as QuickSight, Tableau, Power BI, or other BI packages
- Experience writing business requirements documents, functional specifications, and use cases

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
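As a rough illustration of the "data quality checks and validation processes" responsibility this listing describes, here is a minimal sketch in plain Python. The record fields (address_id, lat, lon) are hypothetical stand-ins for the navigation attributes the role works with:

```python
# Minimal data-quality gate for delivery address records; field names
# are hypothetical, not taken from any real Amazon schema.
def validate_addresses(records):
    """Split records into rows passing and rows failing basic checks."""
    valid, rejected = [], []
    for rec in records:
        errors = []
        if not rec.get("address_id"):
            errors.append("missing address_id")
        lat, lon = rec.get("lat"), rec.get("lon")
        if lat is None or not -90 <= lat <= 90:
            errors.append("latitude out of range")
        if lon is None or not -180 <= lon <= 180:
            errors.append("longitude out of range")
        (rejected if errors else valid).append((rec, errors))
    return valid, rejected

records = [
    {"address_id": "A1", "lat": 17.38, "lon": 78.48},   # passes
    {"address_id": "",   "lat": 95.0,  "lon": 78.48},   # two violations
]
valid, rejected = validate_addresses(records)
print(len(valid), len(rejected))  # 1 1
```

In a production pipeline the rejected rows would typically be routed to a quarantine table with their error list, so quality metrics can be tracked over time rather than rows being silently dropped.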

Posted 2 weeks ago


0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


We are looking for a highly skilled Senior Data Engineer to join our team. The ideal candidate will have hands-on experience working with large-scale data platforms and a strong background in Python, PySpark, AWS, and modern data warehousing tools such as Snowflake and DBT. Familiarity with NoSQL databases like MongoDB and real-time streaming platforms like Kafka is essential.

Key Responsibilities
- Design, build, and maintain scalable data pipelines using PySpark and Python.
- Work with AWS cloud services (S3, Lambda, Glue, EMR, Redshift) for data ingestion, processing, and storage.
- Implement and maintain ETL workflows using DBT and orchestration tools (e.g., Airflow).
- Design and manage data models in Snowflake, ensuring performance and reliability.
- Work with SQL for querying and optimizing datasets across different databases.
- Integrate and manage data from MongoDB, Kafka, and other streaming or NoSQL sources.
- Collaborate with data scientists, analysts, and other engineers to support advanced analytics and ML initiatives.
- Ensure data quality, lineage, and governance through best practices and tools.

Required Skills & Qualifications
- Strong programming skills in Python and PySpark.
- Hands-on experience with AWS data services.
- Proficiency in SQL and experience with DBT for data transformation.
- Experience with Snowflake for data warehousing.
- Knowledge of MongoDB, Kafka, and data streaming concepts.
- Good understanding of data architecture, modeling, and data governance.
- Experience with CI/CD and DevOps practices in a data engineering environment is a plus.
- Excellent problem-solving skills and the ability to work independently or as part of a team.
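The pipeline responsibilities this listing describes follow the classic extract-transform-load shape. A minimal sketch in plain Python (all data and names here are hypothetical; in the stack the posting names, the extract and load stages would typically be PySpark or Glue jobs reading from S3 and loading into Snowflake or Redshift):

```python
import csv
import io
import json

# Hypothetical raw input; in practice this would be files read from S3.
RAW_CSV = "event_id,amount\n1,10.5\n2,\n3,7.25\n"

def extract(raw):
    """Parse raw CSV text into a list of string-valued dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Drop rows with missing amounts and cast types (a simple quality gate)."""
    return [
        {"event_id": int(r["event_id"]), "amount": float(r["amount"])}
        for r in rows if r["amount"]
    ]

def load(rows):
    """Stand-in for a warehouse write (e.g. a COPY into Snowflake/Redshift)."""
    return json.dumps(rows)

result = load(transform(extract(RAW_CSV)))
print(result)
# [{"event_id": 1, "amount": 10.5}, {"event_id": 3, "amount": 7.25}]
```

Keeping each stage a pure function of its input, as above, is what makes a pipeline easy to test, rerun idempotently, and hand off to an orchestrator such as Airflow.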

Posted 2 weeks ago

Apply

0 years

15 - 25 Lacs

Gurgaon

On-site

Overview: C5i

C5i is a pure-play AI & Analytics provider that combines the power of human perspective with AI technology to deliver trustworthy intelligence. The company drives value through a comprehensive solution set, integrating multifunctional teams that have technical and business domain expertise with a robust suite of products, solutions, and accelerators tailored for various horizontal and industry-specific use cases. At its core, C5i’s focus is to deliver business impact at speed and scale by driving adoption of AI-assisted decision-making. C5i caters to some of the world’s largest enterprises, including many Fortune 500 companies. The company’s clients span Technology, Media, and Telecom (TMT), Pharma & Life Sciences, CPG, Retail, Banking, and other sectors. C5i has been recognized by leading industry analysts like Gartner and Forrester for its Analytics and AI capabilities and proprietary AI-based platforms.

Global offices: United States | Canada | United Kingdom | United Arab Emirates | India

Job Summary

We are looking for experienced Data Modelers to support large-scale data engineering and analytics initiatives. The role involves developing logical and physical data models, working closely with business and engineering teams to define data requirements, and ensuring alignment with enterprise standards.

Responsibilities

  • Independently complete conceptual, logical, and physical data models for any supported platform, including SQL data warehouses, Spark, Databricks Delta Lakehouse, or other cloud data warehousing technologies.
  • Govern data design/modelling: document metadata (business definitions of entities and attributes) and construct database objects for baseline and investment-funded projects, as assigned.
  • Develop a deep understanding of business domains like Customer, Sales, Finance, and Supplier, and of the enterprise technology inventory, to craft a solution roadmap that achieves business objectives and maximizes reuse.
  • Drive collaborative reviews of data model design, code, data, and security features to drive data product development.
  • Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical stores and data lakes; the SAP data model.
  • Develop reusable data models based on cloud-centric, code-first approaches to data management and data mapping.
  • Partner with the data stewards team to enable data discovery and action by business customers and stakeholders.
  • Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
  • Assist with data planning, sourcing, collection, profiling, and transformation.
  • Support data lineage and mapping of source system data to canonical data stores.
  • Create Source-to-Target Mappings (STTM) for ETL and BI developers.

Skills needed

  • Expertise in data modelling tools (ER/Studio, Erwin), IDM/ARDM models, and CPG/Manufacturing/Sales/Finance/Supplier/Customer domains.
  • Experience with at least one MPP database technology such as Databricks Lakehouse, Redshift, Synapse, Teradata, or Snowflake.
  • Experience with version control systems like GitHub and deployment & CI tools.
  • Experience with metadata management, data lineage, and data glossaries is a plus.
  • Working knowledge of agile development, including DevOps and DataOps concepts.
  • Working knowledge of SAP data models, particularly in the context of HANA and S/4HANA, and of retail data like IRI and Nielsen Retail.

C5i is proud to be an equal opportunity employer. We are committed to equal employment opportunity regardless of race, color, religion, sex, sexual orientation, age, marital status, disability, gender identity, etc. If you have a disability or special need that requires accommodation, please let us know during the hiring stages so that we can make the necessary accommodations.

Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,500,000.00 per year
Work Location: In person
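The Source-to-Target Mapping (STTM) deliverable this listing mentions is essentially structured data that ETL and BI developers consume. A minimal sketch of one in Python; every source, target, and rule below is a hypothetical example, not a real mapping:

```python
# Illustrative Source-to-Target Mapping (STTM) captured as data.
# All source/target column names and transformation rules are invented.
sttm = [
    {"source": "crm.customers.cust_nm", "target": "dim_customer.customer_name",
     "rule": "TRIM + UPPER"},
    {"source": "crm.customers.brth_dt", "target": "dim_customer.birth_date",
     "rule": "CAST AS DATE (YYYY-MM-DD)"},
    {"source": "sales.orders.ord_amt",  "target": "fact_sales.order_amount",
     "rule": "cents -> dollars (divide by 100)"},
]

def targets_for(table):
    """List the target columns this STTM populates in a given table."""
    return [m["target"] for m in sttm if m["target"].startswith(table + ".")]

print(targets_for("dim_customer"))
# ['dim_customer.customer_name', 'dim_customer.birth_date']
```

Keeping the mapping as data (rather than prose) lets lineage tooling and code generators read the same artifact the modeler maintains.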

Posted 2 weeks ago

Apply

0 years

7 - 9 Lacs

Gurgaon

On-site

Ready to build the future with AI? At Genpact, we don’t just keep up with technology; we set the pace. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what’s possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Assistant Manager – WD Adaptive

In this role you will be responsible for Workday Adaptive development, interfacing with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big-data technologies.

Key Responsibilities

  • Workday Adaptive development.
  • Prepare high-level designs and ETL designs.
  • Create and support batch and real-time data pipelines built on AWS technologies, including Glue, Redshift/Spectrum, Kinesis, EMR, and Athena.
  • Design AWS ETL workflows and ETL mappings.
  • Maintain large ETL workflows; review and test ETL programs.
  • Experience in AWS Athena, Glue PySpark, EMR, DynamoDB, Redshift, Kinesis, Lambda, and Snowflake.

Qualifications we seek in you!

Minimum Qualifications

  • Education: Bachelor’s degree in computer science, engineering, or a related field (or equivalent experience).
  • Relevant years of experience in Workday Adaptive development.
  • Experience creating and supporting batch and real-time data pipelines built on AWS technologies, including Glue, Redshift/Spectrum, Kinesis, EMR, and Athena.
  • Experience preparing high-level designs and ETL designs; maintaining large ETL workflows; reviewing and testing ETL programs.

Preferred Qualifications / Skills

  • Proficient in AWS Redshift, S3, Glue, Athena, and DynamoDB.
  • Experience in Python and Java.
  • Should have performed the ETL developer role in at least 3 large end-to-end projects.
  • Good experience in performance tuning and debugging of ETL programs.
  • Good experience in database and data warehouse concepts (SCD1, SCD2) and SQL.

Why join Genpact?

  • Lead AI-first transformation: build and scale AI solutions that redefine industries.
  • Make an impact: drive change for global enterprises and solve business challenges that matter.
  • Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
  • Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
  • Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build.
  • Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Assistant Manager
Primary Location: India-Gurugram
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jun 4, 2025, 1:28:36 AM
Unposting Date: Ongoing
Master Skills List: Operations
Job Category: Full Time
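The SCD1/SCD2 concepts called out in the qualifications above are easy to illustrate. With SCD Type 1 a changed attribute is simply overwritten; with Type 2 the current row is closed out and a new versioned row is appended. A minimal Type 2 sketch in plain Python; the row layout and customer data are invented for the example:

```python
from datetime import date

# Minimal SCD Type 2 sketch: instead of overwriting a changed attribute
# (SCD1), close the current row and append a new versioned row (SCD2).
# Row layout ({"key", "city", "valid_from", "valid_to", "current"}) is
# an illustrative dimension-table shape, not from any real schema.

def scd2_upsert(dim_rows, key, new_city, as_of):
    for row in dim_rows:
        if row["key"] == key and row["current"] and row["city"] != new_city:
            row["valid_to"] = as_of      # close out the old version
            row["current"] = False
            dim_rows.append({"key": key, "city": new_city,
                             "valid_from": as_of, "valid_to": None,
                             "current": True})
            return
    # No-op if the attribute is unchanged or the key is absent.

dim = [{"key": "C1", "city": "Pune", "valid_from": date(2024, 1, 1),
        "valid_to": None, "current": True}]
scd2_upsert(dim, "C1", "Gurgaon", date(2025, 6, 1))
print(len(dim), dim[-1]["city"])  # 2 Gurgaon
```

In a warehouse this is usually expressed as a MERGE or a staged update-plus-insert rather than row-by-row Python, but the versioning logic is the same.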

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai

On-site

  • Strong working knowledge of modern programming languages, ETL/data integration tools (preferably SnapLogic), and understanding of cloud concepts: SSL/TLS, SQL, REST, JDBC, JavaScript, JSON.
  • Strong hands-on experience in SnapLogic design/development.
  • Good working experience using various Snaps for JDBC, SAP, files, REST, SOAP, etc.
  • Good to have the ability to build complex mappings with JSON path expressions, flat files, and Python scripting.
  • Good to have experience in Groundplex and Cloudplex integrations.
  • Should be able to deliver the project by leading a team of 6-8 members.
  • Should have experience in integration projects with heterogeneous landscapes.
  • Experience in one or more RDBMS (Oracle, DB2, SQL Server, PostgreSQL, and Redshift).
  • Hands-on experience working with OLAP and OLTP database models (dimensional models).

About Virtusa

Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.
Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
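The "mappings with JSON path expressions" requirement in this listing refers to navigating nested payloads by path. A toy stand-in in plain Python that evaluates a simplified dotted path (dot fields and numeric indexes only; this is not SnapLogic's actual JSONPath engine, and the payload is invented):

```python
import re

# Evaluate a simplified JSONPath-like dotted path such as
# "$.order.items[0].sku" against nested dicts/lists. Supports only dot
# fields and [n] indexes -- a toy sketch, not a full JSONPath parser.

def get_path(doc, path):
    node = doc
    for token in re.findall(r"[A-Za-z_]\w*|\[\d+\]", path.lstrip("$.")):
        if token.startswith("["):
            node = node[int(token[1:-1])]   # list index
        else:
            node = node[token]              # dict field
    return node

payload = {"order": {"id": 42, "items": [{"sku": "A-1"}, {"sku": "B-2"}]}}
print(get_path(payload, "$.order.items[1].sku"))  # B-2
```

Real integration tools add wildcards, filters, and recursive descent on top of this basic traversal.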

Posted 2 weeks ago

Apply

Exploring Redshift Jobs in India

The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, Amazon Web Services' managed data warehouse service, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with Redshift expertise can find opportunities across a wide range of industries in the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Redshift professionals in India varies by experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

In the field of Redshift, a typical career path may include roles such as:

  1. Junior Developer
  2. Data Engineer
  3. Senior Data Engineer
  4. Tech Lead
  5. Data Architect

Related Skills

Apart from expertise in Redshift, proficiency in the following skills can be beneficial:

  • SQL
  • ETL tools
  • Data modeling
  • Cloud computing (AWS)
  • Python/R programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift. (medium)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
  • What are the different types of Redshift nodes and their use cases? (basic)
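For the DISTKEY/SORTKEY and COPY questions above, it helps to have concrete statements in mind. A sketch of Redshift-style DDL and a COPY load, assembled as strings in Python; the table, S3 bucket, and IAM role names are placeholders, not real resources:

```python
# Redshift-style DDL and COPY examples for interview preparation.
# Table, bucket, and IAM role names below are placeholders.

ddl = """
CREATE TABLE fact_sales (
    sale_id    BIGINT,
    store_id   INTEGER,
    sale_date  DATE,
    amount     DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (store_id)    -- co-locate rows on the same slice for joins on store_id
SORTKEY (sale_date);  -- lets Redshift skip blocks for date-range filters
"""

copy_cmd = """
COPY fact_sales
FROM 's3://example-bucket/sales/2025/06/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-load'
FORMAT AS PARQUET;
"""

print("DISTKEY" in ddl and "COPY fact_sales" in copy_cmd)  # True
```

The DISTKEY choice governs how rows are distributed across slices (minimizing data shuffling on joins), while the SORTKEY governs the on-disk sort order that zone maps use to prune blocks; COPY is the parallel bulk-load path from S3.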

Conclusion

As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!
