3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
As part of the Last Mile Science & Technology organization, you’ll partner closely with Product Managers, Data Scientists, and Software Engineers to drive improvements in Amazon's Last Mile delivery network. You will leverage data and analytics to generate insights that accelerate the scale, efficiency, and quality of the routes we build for our drivers through our end-to-end last mile planning systems. You will develop complex data engineering solutions using the AWS technology stack (S3, Glue, IAM, Redshift, Athena). You should have deep expertise in, and passion for, working with large data sets, building complex data processes, performance tuning, bringing together data from disparate data stores, and programmatically identifying patterns. You will work with business owners to develop and define key business questions and requirements, and you will provide guidance and support to other engineers on industry best practices and direction. Analytical ingenuity and leadership, business acumen, effective communication, and the ability to work effectively with cross-functional teams in a fast-paced environment are critical skills for this role.

Key job responsibilities
- Design, implement, and support data warehouse / data lake infrastructure using the AWS big data stack: Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark/Scala, Athena, etc.
- Extract huge volumes of structured and unstructured data from various sources (relational, non-relational/NoSQL databases) and message streams, and construct complex analyses.
- Develop and manage ETLs to source data from various systems and create a unified data model for analytics and reporting (a brief sketch follows this listing).
- Perform detailed source-system analysis, source-to-target data analysis, and transformation analysis.
- Participate in the full development cycle for ETL: design, implementation, validation, documentation, and maintenance.

Basic Qualifications
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- 4+ years of SQL experience
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience as a data engineer or in a related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets

Preferred Qualifications
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2967540
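For a concrete flavor of the ETL work described above, here is a minimal PySpark sketch of the kind of batch pipeline the role involves. It is illustrative only: the bucket names, columns, and success-rate metric are invented, not taken from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal batch ETL sketch: read raw delivery events from S3,
# derive a per-route quality metric, and write a partitioned table.
# Bucket names and columns are hypothetical.
spark = SparkSession.builder.appName("last-mile-etl-sketch").getOrCreate()

raw = spark.read.parquet("s3://example-raw-bucket/delivery_events/")

route_quality = (
    raw.filter(F.col("event_type") == "DELIVERY_ATTEMPT")
       .groupBy("route_id", "delivery_date")
       .agg(
           F.count("*").alias("attempts"),
           F.sum(F.when(F.col("status") == "DELIVERED", 1).otherwise(0)).alias("delivered"),
       )
       .withColumn("success_rate", F.col("delivered") / F.col("attempts"))
)

# Partitioned parquet output that Athena or Redshift Spectrum could query.
(route_quality.write
    .mode("overwrite")
    .partitionBy("delivery_date")
    .parquet("s3://example-curated-bucket/route_quality/"))
```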
Posted 1 week ago
8.0 - 10.0 years
2 - 8 Lacs
Hyderābād
On-site
India - Hyderabad
JOB ID: R-216601
ADDITIONAL LOCATIONS: India - Hyderabad
WORK LOCATION TYPE: On Site
DATE POSTED: Jun. 09, 2025
CATEGORY: Information Systems

Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Senior Manager Technology – US Commercial Data & Analytics

What you will do
Let’s do this. Let’s change the world. In this vital role you will lead the engagement model between Amgen's Technology organization and our global business partners in Commercial Data & Analytics. We seek a technology leader with a passion for innovation and a collaborative working style that partners effectively with business and technology leaders. Are you interested in building a team that consistently delivers business value in an agile model using technologies such as AWS, Databricks, Airflow, and Tableau? Come join our team!

Roles & Responsibilities:
- Establish an effective engagement model to collaborate with senior leaders on the Sales Insights product team within the Commercial Data & Analytics organization, focused on operations within the United States
- Serve as the technology product owner for an agile product team committed to delivering business value to Commercial stakeholders via data pipeline buildout for sales data
- Lead and mentor junior team members to deliver on the needs of the business
- Interact with business clients and technology management to create technology roadmaps, build business cases, and drive DevOps to achieve the roadmaps
- Help mature Agile operating principles through deployment of creative and consistent practices for user story development, robust testing and quality oversight, and focus on user experience
- Connect and understand our vast array of Commercial and other functional data sources (including Sales, Activity, and Digital data) and turn them into consumable, user-friendly modes (e.g., dashboards, reports, mobile) for key decision makers such as executives, brand leads, account managers, and field representatives
- Become the lead subject matter expert in reporting technology capabilities by researching and implementing new tools and features, and internal and external methodologies

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
Master’s degree with 8-10 years of Information Systems experience OR Bachelor’s degree with 10-14 years of Information Systems experience OR Diploma with 14-18 years of Information Systems experience

Must-Have Skills:
- Excellent problem-solving skills and a passion for tackling complex challenges in data and analytics with technology
- Experience leading data and analytics teams in a Scaled Agile Framework (SAFe)
- Excellent interpersonal skills, strong attention to detail, and ability to influence based on data and business value
- Ability to build compelling business cases with accurate cost and effort estimations
- Experience writing user requirements and acceptance criteria in agile project management systems such as Jira
- Ability to explain sophisticated technical concepts to non-technical clients
- Strong understanding of sales and incentive compensation value streams

Preferred Qualifications:
- Jira Align and Confluence experience
- Experience with DevOps, Continuous Integration, and Continuous Delivery methodology
- Understanding of software systems strategy, governance, and infrastructure
- Experience managing product features for PI planning and developing product roadmaps and user journeys
- Familiarity with low-code/no-code test automation software
- Technical thought leadership

Soft Skills:
- Able to work effectively across multiple geographies (primarily India, Portugal, and the United States) under minimal supervision
- Demonstrated proficiency in written and verbal communication in English
- Skilled in providing oversight and mentoring team members; demonstrated ability to delegate work effectively
- Intellectual curiosity and the ability to question partners across functions
- Ability to prioritize successfully based on business value
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully across virtual teams
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

Technical Skills:
- ETL tools: experience with tools such as Databricks or Redshift, or an equivalent cloud-based database
- Big Data, analytics, reporting, data lake, and data integration technologies
- S3 or an equivalent storage system
- AWS or similar cloud-based platforms
- BI tools (Tableau and Power BI preferred)

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team: careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
8.0 years
0 - 0 Lacs
Hyderābād
On-site
Job Title: Senior Data Engineer
Location: Hyderabad, India (Hybrid Model)
Experience Required: 8+ Years
Contract Duration: 1 Year
Work Mode: Onsite (Hybrid Model)

Job Description:
We are seeking a highly skilled and experienced Senior Data Engineer for a contractual (C2C) opportunity with a duration of 1 year. The ideal candidate will have a strong background in Airflow, Python, AWS, and Big Data technologies (especially Spark) and be capable of building scalable and efficient data engineering solutions in a hybrid onsite role based in Hyderabad.

Mandatory Skills:
- Apache Airflow: expertise in workflow orchestration, DAG creation, and managing complex data pipelines
- Python: proficiency in writing clean, scalable, and efficient code for ETL and data transformation
- AWS: hands-on experience with core AWS services such as S3, Lambda, Glue, Redshift, EMR, and CloudWatch
- Big Data (Spark): strong experience with Spark (PySpark or Scala), handling large datasets and distributed computing

Key Responsibilities:
- Design, implement, and maintain scalable data pipelines and ETL workflows using Airflow and Python (a brief sketch follows this listing)
- Develop and manage robust cloud-based solutions using AWS
- Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders
- Handle large-scale structured and unstructured data processing using Spark and related technologies
- Ensure high standards of data quality, security, and governance
- Apply CI/CD practices for deployment and maintain monitoring/logging mechanisms

Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- Minimum of 8 years of relevant experience in Data Engineering
- Strong problem-solving skills and the ability to work in a fast-paced, hybrid environment
- Excellent verbal and written communication skills

Preferred Skills:
- Experience with data cataloging, governance, and lineage tools
- Exposure to Docker/Kubernetes for containerized workloads
- Knowledge of alternative orchestration tools such as Prefect or Luigi

Contact Us:
Email: career@munificentresource.in
Call/WhatsApp: +91 90643 63461
Subject Line: Senior Data Engineer (Hyd)

Job Types: Full-time, Contractual / Temporary
Contract length: 12 months
Pay: ₹75,000.00 - ₹85,000.00 per month
Schedule: Day shift / Night shift
Work Location: In person
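To illustrate the Airflow skill set this listing asks for, here is a minimal DAG sketch (it assumes Airflow 2.4+ for the `schedule` argument). The task bodies, names, and schedule are placeholders, not taken from the posting.

```python
import pendulum
from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical task bodies; a real pipeline would call Glue/EMR/Spark here.
def extract(**context):
    print("pull raw files from S3")

def transform(**context):
    print("run Spark job: clean and join datasets")

def load(**context):
    print("copy curated data into Redshift")

with DAG(
    dag_id="daily_sales_etl_sketch",
    schedule="0 2 * * *",              # run at 02:00 every day
    start_date=pendulum.datetime(2025, 1, 1, tz="UTC"),
    catchup=False,
    tags=["example"],
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # linear dependency chain
```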
Posted 1 week ago
5.0 years
4 - 7 Lacs
Hyderābād
Remote
Job Description

Manager, Data Visualization

Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Our Technology centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of the company IT operating model, Tech centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each tech center helps ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we leverage the strength of our team to collaborate globally, optimizing connections and sharing best practices across the Tech Centers.

Role Overview:
A unique opportunity to be part of an Insight & Analytics Data hub for a leading biopharmaceutical company, helping define a culture that creates a compelling customer experience. Bring your entrepreneurial curiosity and learning spirit into a career of purpose, personal growth, and leadership. We are seeking those who have a passion for using data, analytics, and insights to drive decision-making that will allow us to tackle some of the world's greatest health threats.

As a Manager in Data Visualization, you will focus on designing and developing compelling data visualization solutions that enable actionable insights and facilitate intuitive information consumption for internal business stakeholders. The ideal candidate will demonstrate competency in building user-centric visuals and dashboards that empower stakeholders with data-driven insights and decision-making capability.

Our Quantitative Sciences team uses big data to analyze the safety and efficacy claims of our potential medical breakthroughs. We review the quality and reliability of clinical studies using deep scientific knowledge, statistical analysis, and high-quality data to support decision-making in clinical trials.

What will you do in this role:
- Design and develop user-centric data visualization solutions utilizing complex data sources
- Identify and define key business metrics and KPIs in partnership with business stakeholders
- Define and develop scalable data models in alignment with, and with support from, data engineering and IT teams
- Lead UI/UX workshops to develop user stories, wireframes, and intuitive visualizations
- Collaborate with data engineering, data science, and IT teams to deliver business-friendly dashboard and reporting solutions
- Apply best practices in data visualization design and continuously improve the user experience for business stakeholders
- Provide thought leadership and data visualization best practices to the broader Data & Analytics organization
- Identify opportunities to apply data visualization technologies to streamline and enhance manual/legacy reporting deliveries
- Provide training and coaching to internal stakeholders to enable a self-service operating model
- Co-create information governance and apply data privacy best practices to solutions
- Continuously innovate on visualization best practices and technologies by reviewing external resources and the marketplace
What should you have:
- 5 years’ relevant experience in data visualization, infographics, and interactive visual storytelling
- Working experience and knowledge of Power BI, Qlik, Spotfire, Tableau, and other data visualization technologies
- Working experience and knowledge of ETL processes and data modeling techniques and platforms (Alteryx, Informatica, Dataiku, etc.)
- Experience working with database technologies (Redshift, Oracle, Snowflake, etc.) and data processing languages (SQL, Python, R, etc.)
- Experience leveraging and managing third-party vendors and contractors
- Self-motivation, proactivity, and ability to work independently with minimum direction
- Excellent interpersonal and communication skills
- Excellent organizational skills, with the ability to navigate a complex matrix environment and organize/prioritize work efficiently and effectively
- Demonstrated ability to collaborate and lead with diverse groups of work colleagues and positively manage ambiguity
- Experience in the Pharma and/or Biotech industry is a plus

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who we are:
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What we look for:
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today.

#HYDIT2025

Search Firm Representatives, Please Read Carefully:
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status: Regular
Flexible Work Arrangements: Remote
Required Skills: Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Software Development, Software Development Life Cycle (SDLC), System Designs
Job Posting End Date: 07/09/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R329043
Posted 1 week ago
2.0 years
6 - 8 Lacs
Hyderābād
On-site
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with one or more industry analytics/visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g., t-test, Chi-squared)
- Experience with a scripting language (e.g., Python, Java, or R)

Transportation Financial Systems (TFS) owns the technology components that perform the financial activities for the transportation business. These systems are used across all transportation programs and retail expansion to new geographies. TFS systems provide financial document creation & management, expense auditing, accounting, payments, and cost allocation functions. Our new generation products are highly scalable and operate at finer-level granularity to reconcile every dollar in transportation financial accounts with zero manual entries or corrections. The goal is to develop a global product suite for all freight modes, touching every single package movement across Amazon. Our mission is to abstract logistics complexities from the financial world and financial complexities from the logistics world.

We are looking for an innovative, hands-on, and customer-obsessed candidate for this role. The candidate must be detail oriented, have superior verbal and written communication skills, and be able to juggle multiple tasks at once. The candidate must be able to make sound judgments and get the right things done.

We seek a Business Intelligence (BI) Engineer to strengthen our data-driven decision-making processes. This role requires an individual with excellent statistical and analytical abilities, deep knowledge of business intelligence solutions, the ability to strongly utilize GenAI technologies to analyze and solve problems, and the ability to collaborate with product, business, and tech teams. The successful candidate will demonstrate the ability to work independently and learn quickly, comprehend Transportation Finance system functions rapidly, and have a passion for data and analytics. They should be a self-starter comfortable with ambiguity, able to work in a fast-paced and entrepreneurial environment, and driven by a desire to innovate Amazon’s approach to this space.

Key job responsibilities
1) Translate business problems into analytical requirements and define expected outputs
2) Develop and implement key performance indicators (KPIs) to measure business performance and product impact; responsible for deep-dive analysis on key metrics
3) Create and execute an analytical approach to solve problems in line with stakeholder expectations
4) Strongly leverage GenAI technologies to solve problems and build solutions
5) Be the domain expert and know what data is available from various sources
6) Execute solutions with scalable development practices in scripting, writing and optimizing SQL queries, reporting, data extraction, and data visualization
7) Proactively and independently work with stakeholders to construct use cases and associated standardized outputs for your work
8) Actively manage the timeline and deliverables of projects, focusing on interactions within the team
- Master's degree or an advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis and correlation analysis (a brief sketch follows this listing)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
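As a hedged illustration of the statistical-analysis qualifications above, the following sketch runs a two-sample t-test and a correlation check on synthetic data with SciPy. The business framing and all numbers are invented, purely for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical question: did a process change reduce per-package audit cost?
# All data below is synthetic.
rng = np.random.default_rng(seed=42)
cost_before = rng.normal(loc=1.20, scale=0.15, size=500)   # USD per package
cost_after = rng.normal(loc=1.12, scale=0.15, size=500)

# Two-sample t-test: is the difference in means statistically significant?
t_stat, p_value = stats.ttest_ind(cost_before, cost_after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Correlation analysis: does shipment volume track audit cost?
volume = rng.normal(loc=1000, scale=100, size=500)
r, p = stats.pearsonr(volume, cost_after)
print(f"Pearson r = {r:.3f} (p = {p:.4f})")
```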
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Delhi
On-site
Bangalore/Delhi | Data | Full Time | Hybrid

What is Findem:
Findem is the only talent data platform that combines 3D data with AI. It automates and consolidates top-of-funnel activities across your entire talent ecosystem, bringing together sourcing, CRM, and analytics into one place. Only 3D data connects people and company data over time, making an individual’s entire career instantly accessible in a single click, removing the guesswork, and unlocking insights about the market and your competition no one else can. Powered by 3D data, Findem’s automated workflows across the talent lifecycle are the ultimate competitive advantage. Enabling talent teams to deliver continuous pipelines of top, diverse candidates while creating better talent experiences, Findem transforms the way companies plan, hire, and manage talent. Learn more at www.findem.ai

Experience: 5-9 years
Location: Delhi (hybrid, 3 days onsite)

We are looking for an experienced Big Data Engineer who will be responsible for building, deploying, and managing various data pipelines, data lakes, and Big Data processing solutions using Big Data and ETL technologies.

RESPONSIBILITIES
- Build data pipelines, Big Data processing solutions, and data lake infrastructure using various Big Data and ETL technologies
- Assemble and process large, complex data sets that meet functional and non-functional business requirements
- ETL from a wide variety of sources such as MongoDB, S3, server-to-server, Kafka, etc., and process using SQL and big data technologies
- Build analytical tools to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Build interactive and ad-hoc query self-serve tools for analytics use cases
- Build data models and data schemas for performance, scalability, and functional requirements
- Build processes supporting data transformation, metadata, dependency, and workflow management
- Research, experiment with, and prototype new tools/technologies and make them successful

SKILL REQUIREMENTS
Must have:
- Strong in Python/Scala
- Experience in Big Data technologies like Spark, Hadoop, Athena/Presto, Redshift, Kafka, etc.
- Experience with various file formats like Parquet, JSON, Avro, ORC, etc.
- Experience in workflow management tools like Airflow
- Experience with batch processing, streaming, and message queues
- Any visualization tools like Redash, Tableau, Kibana, etc.
- Experience working with structured and unstructured data sets
- Strong problem-solving skills

Good to have:
- Exposure to NoSQL like MongoDB
- Exposure to cloud platforms like AWS, GCP, etc.
- Exposure to microservices architecture
- Exposure to machine learning techniques

The role is full-time and comes with full benefits. We are globally headquartered in the San Francisco Bay Area with our India headquarters in Bengaluru.

Equal Opportunity: As an equal opportunity employer, we do not discriminate on the basis of race, color, religion, national origin, age, sex (including pregnancy), physical or mental disability, medical condition, genetic information, gender identity or expression, sexual orientation, marital status, protected veteran status or any other legally-protected characteristic.
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
“When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that’s what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product” – Jeff Bezos

Amazon.com’s success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day, like clockwork? To make that happen, behind those millions of packages, billions of decisions get made by machines and humans. What is the accuracy of the customer-provided address? Do we know the exact location of the address on the map? Is there a safe place? Can we make an unattended delivery? Would a signature be required? Is the address a commercial property? Do we know the open business hours of the address? What if the customer is not home? Is there an alternate delivery address? Does the customer have any special preference? What other addresses also have packages to be delivered on the same day? Are we optimizing the delivery associate’s route? Does the delivery associate know the locality well enough? Is there an access code to get inside the building? The list simply goes on. At the core of all of it lies the quality of the underlying data that can help make those decisions in time.

The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both the stamina and mental quickness to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and the executive presence to get in front of VPs and SVPs across Amazon will be imperative.

Key Strategic Objectives:
Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer on Amazon's last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work extremely effectively in a matrix organization, and able to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large-scale, high-visibility, high-impact projects. You will lead by example, being just as passionate about operational performance and predictability as about all other aspects of the customer experience.

The successful candidate will be able to:
- Effectively manage customer expectations and resolve conflicts that balance client and company needs
- Develop processes to effectively maintain and disseminate project information to stakeholders
- Be successful in a delivery-focused environment and determine the right processes to make the team successful (this opportunity requires excellent technical, problem-solving, and communication skills; the candidate is not just a policy maker/spokesperson but drives to get things done)
- Possess superior analytical abilities and judgment; use quantitative and qualitative data to prioritize and influence, show creativity, experimentation, and innovation, and drive projects with urgency in this fast-paced environment
- Partner with key stakeholders to develop the vision and strategy for the customer experience on our platforms, and influence product roadmaps based on this strategy along with your teams
- Support the scalable growth of the company by developing and enabling the success of the Operations leadership team
- Serve as a role model for Amazon Leadership Principles inside and outside the organization
- Actively seek to implement and distribute best practices across the operation

Basic Qualifications
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with a scripting language (e.g., Python, Java, or R)
- Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
- Experience applying basic statistical methods (e.g., regression) to difficult business problems (a brief sketch follows this listing)
- Experience gathering business requirements and using industry-standard business intelligence tools to extract data, formulate metrics, and build reports
- Track record of generating key business insights and collaborating with stakeholders

Preferred Qualifications
- Knowledge of how to improve code quality and optimize BI processes (e.g., speed, cost, reliability)
- Knowledge of data modeling and data pipeline design
- Experience designing and implementing custom reporting systems using automation tools

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2985438
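For the regression qualification above, here is a minimal sketch on synthetic data using scikit-learn. The feature names and the delivery-quality framing are invented for illustration, not taken from the posting.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic example: model delivery success rate as a function of
# address-quality score and driver familiarity. All data is fabricated.
rng = np.random.default_rng(0)
n = 1000
address_quality = rng.uniform(0, 1, n)
driver_familiarity = rng.uniform(0, 1, n)
noise = rng.normal(0, 0.05, n)
success_rate = 0.6 + 0.25 * address_quality + 0.10 * driver_familiarity + noise

X = np.column_stack([address_quality, driver_familiarity])
model = LinearRegression().fit(X, success_rate)

print("coefficients:", model.coef_)    # should recover roughly [0.25, 0.10]
print("intercept:", model.intercept_)  # roughly 0.6
print("R^2:", model.score(X, success_rate))
```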
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
At Amazon, our goal is to be earth’s most customer-centric company and to create a safe environment for both our customers and our associates. To achieve that, we need exceptionally talented, bright, dynamic, and driven people. If you'd like to help us build the place to find and buy anything online, this is your chance to make history.

We are looking for a talented business analyst to join the Regulatory Intelligence, Safety & Compliance Global Data & Analytics (RISC GDA) team. This role will be a key member of the Science and Analytics team, responsible for driving analysis and insights to help make meaningful business decisions. As a Business Analyst, you will focus on improving success within business functions by analyzing data, discovering and solving real-world problems, building metrics and business cases to improve customer experience, and providing timely data support for escalations to mitigate risk. We are focused on your success and want to build future leaders within Amazon.

A key component of the role is to identify process and system improvement opportunities by developing the right metrics, analyzing data, and partnering with internal teams. In addition, you will design and develop automated reporting solutions to enable stakeholders to manage the business and make effective decisions. Lastly, you will enable effective decision-making by retrieving and aggregating data from multiple sources and compiling it into a digestible and actionable format.

This role requires an individual with excellent statistical and analytical abilities, deep knowledge of business intelligence solutions, and good business acumen. The successful candidate will be a self-starter comfortable with ambiguity, with strong attention to detail, an ability to work in a fast-paced and ever-changing environment, and driven by a desire to innovate in this space. They will have experience working directly with large data sets and will be required to make important decisions on defining, building, and scaling data processes and reports, working directly with stakeholders and other data and tech professionals.

Key job responsibilities
- Creating automated reports and dashboards with a combination of BI tools such as QuickSight
- Partnering with program team, Legal, and Tech stakeholders to understand challenges and provide analysis to help drive program success
- Providing data support for business and cross-functional partners to address escalations and answer questions using PostgreSQL and AWS solutions
- Uncovering trends and correlations through mining and analysis to develop insights that can help stakeholders make effective decisions
- Designing and executing analytical projects using statistical analysis
- Creating mechanisms for non-technical stakeholders to self-serve data, including during urgent issue management
- Looking for opportunities to simplify and automate redundant processes

Basic Qualifications
- Bachelor's degree or equivalent
- Experience defining requirements and using data and metrics to draw business insights
- Experience making business recommendations and influencing stakeholders
- Experience with SQL
- Experience with data visualization using Tableau, QuickSight, or similar tools

Preferred Qualifications
- 3+ years of business analyst, data analyst, or similar role experience
- Experience in Excel (including VBA, pivot tables, array functions, power pivots, etc.) and data visualization tools such as Tableau
- Familiarity with AWS solutions such as EC2, DynamoDB, S3, Redshift, and RDS
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A2958506
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
Remote
Senior ETL Developer (Talend + PostgreSQL) – Immediate Joiner Preferred
Experience: 5-8 years
Project: US client-based project
Work Mode: Remote or hybrid

We are looking for an experienced and proactive ETL Developer with 5-8 years of hands-on experience in Talend and PostgreSQL, who can contribute individually and guide a team in managing and optimizing data workflows. The ideal candidate will also support and collaborate with peers using a tool referred to as Quilt (or Quilt Talend).

Key Responsibilities:
- ETL Development: design, build, and maintain scalable ETL pipelines using Talend
- Data Integration: seamlessly integrate structured and unstructured data from diverse sources
- PostgreSQL Expertise: strong experience in PostgreSQL for data warehousing, performance tuning, indexing, and large dataset operations (a brief sketch follows this listing)
- Team Guidance: act as a technical lead to guide junior developers and ensure best practices in ETL processes
- Tool Expertise (Quilt): support team members in using Quilt, a platform used to manage and version data for ML and analytics pipelines
- Linux & Scripting: write automation scripts in Linux for batch processing and monitoring
- AWS Cloud Integration: integrate Talend with AWS services such as S3, RDS (PostgreSQL), Glue, or Redshift
- Troubleshooting: proactively identify bottlenecks or issues in ETL jobs and ensure data accuracy and uptime
- Collaboration: work closely with data analysts, scientists, and stakeholders to deliver end-to-end solutions

Must-Have Skills:
- Strong knowledge of Talend (Open Studio / Data Integration / Big Data Edition)
- 3+ years of hands-on experience with PostgreSQL
- Familiarity with the Quilt data tool (https://quiltdata.com/) or similar data versioning tools
- Solid understanding of cloud ETL environments, especially AWS
- Strong communication and leadership skills

Nice-to-Have:
- Familiarity with Oracle for legacy systems
- Knowledge of data governance and security best practices
- Experience integrating Talend with APIs or external services

Additional Info:
Location: Chennai, Madurai / Tamil Nadu / Remote / Hybrid
Joining: immediate joiners preferred
Job Type: Full-time / Contract

Job Types: Full-time, Contractual / Temporary
Pay: ₹700,000.00 - ₹1,200,000.00 per year
Benefits: Work from home
Schedule: Evening shift, Monday to Friday, rotational shift, US shift, weekend availability
Application Question(s): Are you willing to work hybrid from Chennai or Madurai?
Experience: ETL: 4 years (Required)
Location: Chennai, Tamil Nadu (Preferred)
Shift availability: Night Shift (Required), Overnight Shift (Required)
Work Location: In person
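One routine task implied by the PostgreSQL bullet above is validating and tuning a load after an ETL run. A minimal sketch, assuming psycopg2 and with connection settings, schemas, and table names as placeholders:

```python
import psycopg2

# Hypothetical connection settings and table names, for illustration only.
conn = psycopg2.connect(
    host="localhost", dbname="warehouse", user="etl_user", password="secret"
)

with conn, conn.cursor() as cur:
    # Routine ETL QA check: does the loaded target match the staging source?
    cur.execute("SELECT count(*) FROM staging.orders;")
    source_rows = cur.fetchone()[0]
    cur.execute("SELECT count(*) FROM mart.orders;")
    target_rows = cur.fetchone()[0]
    if source_rows != target_rows:
        raise RuntimeError(f"row count mismatch: {source_rows} vs {target_rows}")

    # Performance tuning: index the column most queries filter on.
    cur.execute(
        "CREATE INDEX IF NOT EXISTS idx_orders_order_date "
        "ON mart.orders (order_date);"
    )
conn.close()
```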
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role
Grade Level (for internal use): 10

The Team
As a member of the Data Transformation team you will work on building ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged to take thoughtful risks and show self-initiative.

The Impact
The Data Transformation team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will be developing our next generation of new products while enhancing existing ones, aiming at solving high-impact business problems.

What’s In It For You
- Be a part of a global company and build solutions at enterprise scale
- Collaborate with a highly skilled and technically strong team
- Contribute to solving high-complexity, high-impact problems

Key Responsibilities
- Build production-ready data acquisition and transformation pipelines from ideation to deployment
- Be a hands-on problem solver and developer helping to extend and manage the data platforms
- Apply best practices in data modeling and building ETL pipelines (streaming and batch) using cloud-native solutions

What We’re Looking For
- 3-5 years of professional software work experience
- Expertise in Python and Apache Spark
- OOP design patterns, test-driven development, and enterprise system design
- Experience building data processing workflows and APIs using frameworks such as FastAPI, Flask, etc. (a brief sketch follows this posting)
- Proficiency in API integration; experience working with REST APIs and integrating external and internal data sources
- SQL (any variant; bonus if this is a big data variant)
- Linux OS (e.g., bash toolset and other utilities)
- Version control experience with Git, GitHub, or Azure DevOps
- Problem-solving and debugging skills
- Software craftsmanship, adherence to Agile principles, and taking pride in writing good code
- Techniques to communicate change to non-technical people

Nice to have
- Core Java 17+, preferably Java 21+, and associated toolchain
- DevOps with a keen interest in automation
- Apache Avro
- Apache Kafka
- Kubernetes
- Cloud expertise (AWS and GCP preferably)
- Other JVM-based languages, e.g., Kotlin, Scala
- C#, in particular .NET Core
- Data warehouses (e.g., Redshift, Snowflake, BigQuery)

What’s In It For You?

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it.
We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values
Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
- Health & Wellness: health care coverage designed for the mind and body
- Flexible Downtime: generous time off helps keep you energized for your time on
- Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills
- Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs
- Family Friendly Perks: it’s not just about you; S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families
- Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)

Job ID: 315685
Posted On: 2025-05-20
Location: Gurgaon, Haryana, India
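To give a concrete flavor of the "APIs using frameworks such as FastAPI" requirement in this posting, here is a minimal sketch. The endpoint names, model fields, and in-memory store are invented; a real service would sit in front of an actual processing pipeline.

```python
from fastapi import FastAPI
from pydantic import BaseModel

# Minimal sketch of a data-serving REST API of the kind the role describes.
app = FastAPI(title="document-extraction-sketch")

class Document(BaseModel):
    doc_id: str
    text: str

# Stand-in for a real datastore, purely for illustration.
FAKE_STORE: dict[str, Document] = {}

@app.post("/documents")
def ingest(doc: Document) -> dict:
    # In a real pipeline this would enqueue the doc for Spark/NLP processing.
    FAKE_STORE[doc.doc_id] = doc
    return {"status": "accepted", "doc_id": doc.doc_id}

@app.get("/documents/{doc_id}")
def fetch(doc_id: str) -> Document:
    return FAKE_STORE[doc_id]

# Run with: uvicorn this_module:app --reload
```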
Posted 1 week ago
9.0 years
6 - 7 Lacs
Chennai
On-site
Total 9 years of experience, with a minimum of 5 years working as a DBT administrator.

- DBT Core & Cloud: manage DBT projects, models, tests, snapshots, and deployments in both DBT Core and DBT Cloud; administer and manage DBT Cloud environments including users, permissions, job scheduling, and Git integration; onboard and enable DBT users on the DBT Cloud platform; work closely with users to support DBT adoption and usage
- SQL & Warehousing: write optimized SQL and work with data warehouses like Snowflake, BigQuery, Redshift, or Databricks
- Cloud Platforms: use AWS, GCP, or Azure for data storage (e.g., S3, GCS), compute, and resource management
- Orchestration Tools: automate DBT runs using Airflow, Prefect, or DBT Cloud job scheduling (a brief sketch follows this listing)
- Version Control & CI/CD: integrate DBT with Git and manage CI/CD pipelines for model promotion and testing
- Monitoring & Logging: track job performance and errors using tools like dbt-artifacts, Datadog, or cloud-native logging
- Access & Security: configure IAM roles, secrets, and permissions for secure DBT and data warehouse access
- Documentation & Collaboration: maintain model documentation using dbt docs and collaborate with data teams

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
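For the orchestration bullet above, dbt Cloud jobs can be triggered over its v2 REST API. A minimal sketch; the account and job IDs are placeholders, and the exact endpoint shape should be checked against current dbt Cloud documentation:

```python
import os
import requests

# Trigger a dbt Cloud job run via the v2 REST API.
# ACCOUNT_ID and JOB_ID are placeholders; the token comes from the environment.
ACCOUNT_ID = 12345
JOB_ID = 67890
TOKEN = os.environ["DBT_CLOUD_API_TOKEN"]

resp = requests.post(
    f"https://cloud.getdbt.com/api/v2/accounts/{ACCOUNT_ID}/jobs/{JOB_ID}/run/",
    headers={"Authorization": f"Token {TOKEN}"},
    json={"cause": "Triggered from orchestration sketch"},
)
resp.raise_for_status()
run_id = resp.json()["data"]["id"]
print(f"started dbt Cloud run {run_id}")
```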
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Noida
On-site
Noida, Uttar Pradesh, India; Indore, Madhya Pradesh, India; Bangalore, Karnataka, India; Hyderabad, Telangana, India; Gurgaon, Haryana, India

Qualification:
Required
- Proven hands-on experience designing, developing, and supporting database projects for analysis in a demanding environment
- Proficient in database design techniques: relational and dimensional designs
- Experience with, and a strong understanding of, the business analysis techniques used
- High proficiency in the use of SQL or MDX queries
- Ability to manage multiple maintenance, enhancement, and project-related tasks
- Ability to work independently on multiple assignments and to work collaboratively within a team
- Strong communication skills with both internal team members and external business stakeholders

Added Advantage
- Hadoop ecosystem or AWS, Azure, or GCP cluster and processing
- Experience working on Hive, Spark SQL, Redshift, or Snowflake
- Experience working on Linux systems
- Experience with Tableau, MicroStrategy, Power BI, or any BI tools
- Expertise in programming in Python, Java, or shell script would be a plus

Role:
Roles & Responsibilities
- Be the frontend person of the world’s most scalable OLAP product company, Kyvos Insights
- Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and their problem statements in that area
- Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems
- Be the go-to person for customers regarding technical issues during the project
- Be instrumental in reading the pulse of the big data market and defining the roadmap of the product
- Lead a few small but highly efficient teams of Big Data engineers
- Report task status efficiently to stakeholders and customers
- Good verbal and written communication skills
- Be willing to work off hours to meet timelines
- Be willing to travel or relocate as per project requirements

Experience: 5 to 10 years
Job Reference Number: 11078
Posted 1 week ago
4.0 - 6.0 years
6 - 7 Lacs
Noida
On-site
Noida, Uttar Pradesh, India; Gurgaon, Haryana, India; Hyderabad, Telangana, India; Pune, Maharashtra, India; Indore, Madhya Pradesh, India; Bangalore, Karnataka, India

Qualification:
- Strong hands-on experience in Python
- Good experience with Spark / Spark Structured Streaming (a brief sketch follows this listing)
- Experience working with MSK (Kafka) and Kinesis
- Ability to design, build, and unit test applications on the Spark framework in Python
- Exposure to AWS cloud services such as Glue/EMR, RDS, SNS, SQS, Lambda, Redshift, etc.
- Good experience writing SQL queries
- Strong technical development experience: effectively writing code, code reviews, and best practices
- Ability to solve complex data-driven scenarios and triage defects and production issues
- Ability to learn-unlearn-relearn concepts with an open and analytical mindset

Skills Required: PySpark, SQL

Role:
- Work closely with business and product management teams to develop and implement analytics solutions
- Collaborate with engineers and architects to implement and deploy scalable solutions
- Actively drive a culture of knowledge-building and sharing within the team
- Quickly adapt and learn
- Jump into ambiguous situations and take the lead on resolution

Good to have:
- Experience working with MSK (Kafka), Amazon Elastic Kubernetes Service, and Docker
- Exposure to GitHub Actions, Argo CD, and Argo Workflows
- Experience working with Databricks

Experience: 4 to 6 years
Job Reference Number: 12555
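To illustrate the Spark Structured Streaming plus MSK (Kafka) combination this listing emphasizes, a minimal sketch follows. It assumes the spark-sql-kafka connector package is available on the classpath; the broker address, topic, and schema are placeholders.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Minimal Structured Streaming sketch: consume JSON events from a Kafka (MSK)
# topic and write running aggregates to the console.
spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
    .option("subscribe", "orders")                      # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.groupBy("order_id").agg(F.sum("amount").alias("total"))
    .writeStream.outputMode("complete").format("console").start()
)
query.awaitTermination()
```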
Posted 1 week ago
5.0 - 7.0 years
7 - 9 Lacs
Noida
On-site
Noida, Uttar Pradesh, India; Gurgaon, Haryana, India; Hyderabad, Telangana, India; Indore, Madhya Pradesh, India; Bangalore, Karnataka, India

Qualification:
- 5-7 years of good hands-on exposure to Big Data technologies: PySpark (DataFrame and Spark SQL), Hadoop, and Hive
- Good hands-on experience with Python and Bash scripts
- Good understanding of SQL and data warehouse concepts
- Strong analytical, problem-solving, data analysis, and research skills
- Demonstrable ability to think outside the box and not be dependent on readily available tools
- Excellent communication, presentation, and interpersonal skills are a must

Good to have:
- Hands-on experience with cloud-platform Big Data technologies (i.e., IAM, Glue, EMR, Redshift, S3, Kinesis)
- Orchestration with Airflow, and experience with any job scheduler
- Experience migrating workloads from on-premise to cloud, and cloud-to-cloud migrations

Skills Required: Python, PySpark, AWS

Role:
- Develop efficient ETL pipelines per business requirements, following development standards and best practices
- Perform integration testing of the different pipelines created in the AWS environment
- Provide estimates for development, testing, and deployments across different environments
- Participate in code peer reviews to ensure our applications comply with best practices
- Create cost-effective AWS pipelines with the required AWS services, i.e., S3, IAM, Glue, EMR, Redshift, etc.

Experience: 8 to 10 years
Job Reference Number: 13025
Posted 1 week ago
14.0 years
4 - 8 Lacs
Noida
On-site
Noida, Uttar Pradesh, India; Indore, Madhya Pradesh, India

Qualification:
We are seeking a highly experienced and dynamic Technical Project Manager to lead and manage our service engagements. The candidate will possess a strong technical background, exceptional project management skills, and a proven track record of successfully delivering large-scale IT projects. You will be responsible for leading cross-functional teams, managing client relationships, and ensuring projects are delivered on time, within budget, and to the highest quality standards.

- 14+ years of experience managing and implementing high-end software products, combined with technical knowledge in the Business Intelligence (BI) and Data Engineering domains
- 5+ years of experience in project management, with strong leadership and team management skills
- Hands-on with project management tools (e.g., Jira, Rally, MS Project) and strong expertise in Agile methodologies (certifications such as SAFe, CSM, PMP, or PMI-ACP are a plus)
- Well versed in tracking project performance using appropriate metrics, tools, and processes to successfully meet short- and long-term goals
- Rich experience interacting with clients, translating business needs into technical requirements, and delivering customer-focused solutions
- Exceptional verbal and written communication skills, with the ability to present complex concepts to technical and non-technical stakeholders alike
- Strong understanding of BI concepts (reporting, analytics, data warehousing, ETL), leveraging expertise in tools such as Tableau, Power BI, Looker, etc.
- Knowledge of data modeling, database design, and data governance principles
- Proficiency in Data Engineering technologies (e.g., SQL, Python, cloud-based data solutions/platforms like AWS Redshift, Google BigQuery, Azure Synapse, Snowflake, Databricks) is a plus

Skills Required: SAP BO, MicroStrategy, OBIEE, Tableau, Power BI

Role:
This is a multi-dimensional and multi-functional role. You will need to be comfortable reporting program status to executives, as well as diving deep into technical discussions with internal engineering teams and external partners.
- Act as the primary point of contact for stakeholders and customers, gathering requirements, managing expectations, and delivering regular updates on project progress
- Manage and mentor cross-functional teams, fostering collaboration and ensuring high performance while meeting project milestones
- Drive Agile practices (e.g., Scrum, Kanban) to ensure iterative delivery, adaptability, and continuous improvement throughout the project lifecycle
- Identify, assess, and mitigate project risks, ensuring timely resolution of issues and adherence to quality standards
- Maintain comprehensive project documentation, including status reports, roadmaps, and post-mortem analyses, to ensure transparency and accountability
- Define the project and delivery plan, including scope, timelines, budgets, and deliverables for each assignment
- Allocate resources as per the requirements of each assignment

Experience: 14 to 18 years
Job Reference Number: 12929
Posted 1 week ago
8.0 - 12.0 years
6 - 7 Lacs
Noida
On-site
Noida, Uttar Pradesh, India; Bangalore, Karnataka, India; Gurugram, Haryana, India; Hyderabad, Telangana, India; Indore, Madhya Pradesh, India; Pune, Maharashtra, India
Qualification:
Do you love to work on bleeding-edge Big Data technologies, do you want to work with the best minds in the industry, and create high-performance scalable solutions? Do you want to be part of the team that is solutioning next-gen data platforms? Then this is the place for you. You want to architect and deliver solutions involving data engineering on a petabyte scale of data that solve complex business problems. Impetus is looking for a Big Data Developer who loves solving complex problems and architecting and delivering scalable solutions across a full spectrum of technologies.
Experience in providing technical leadership in the Big Data space (Hadoop stack: Spark, M/R, HDFS, Hive, etc.)
Should be able to communicate with the customer on both functional and technical aspects
Expert-level proficiency in Python/PySpark
Hands-on experience with Shell/Bash scripting (creating and modifying script files)
Control-M, AutoSys, or any other job scheduler experience
Experience in visualizing and evangelizing next-generation infrastructure in the Big Data space (batch, near-real-time, and real-time technologies)
Should be able to guide the team on any functional and technical issues
Strong technical development experience: effectively writing code, code reviews, and best-practice code refactoring
Passion for continuous learning, experimenting, applying and contributing towards cutting-edge open-source technologies and software paradigms
Good communication, problem-solving and interpersonal skills; a self-starter and resourceful personality with the ability to manage pressure situations
Capable of providing the design and architecture for typical business problems
Exposure to and awareness of the complete PDLC/SDLC
Out-of-the-box thinker, not limited to the work done in past projects
Must have: Experience with AWS (EMR, Glue, S3, RDS, Redshift); cloud certification
Skills Required: AWS, Pyspark, Spark
Role:
Evaluate and recommend the Big Data technology stack best suited for customer needs
Design/architect/implement various solutions arising out of high-concurrency systems
Responsible for timely and quality deliveries
Anticipate technological evolutions
Ensure the technical direction and choices
Develop efficient ETL pipelines through Spark or Hive (see the sketch after this listing)
Drive significant technology initiatives end to end and across multiple layers of architecture
Provide strong technical leadership in adopting and contributing to open-source technologies related to Big Data across multiple engagements
Design/architect complex, highly available, distributed, failsafe compute systems dealing with a considerable amount (GB/TB) of data
Identify and work on incorporating non-functional requirements into the solution (performance, scalability, monitoring, etc.)
Experience: 8 to 12 years
Job Reference Number: 12400
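As a hedged illustration of the "ETL through Spark or Hive" responsibility above, a sketch that runs a Hive-backed aggregation via Spark SQL; the database, table, and partition names are hypothetical, and a configured Hive metastore is assumed:

```python
from pyspark.sql import SparkSession

# enableHiveSupport assumes the cluster has a Hive metastore configured.
spark = (SparkSession.builder
         .appName("hive_etl")
         .enableHiveSupport()
         .getOrCreate())

# Aggregate one day of raw events into a partition of a curated table
# (static partition insert; all names are hypothetical).
spark.sql("""
    INSERT OVERWRITE TABLE analytics.daily_clicks PARTITION (dt='2024-01-01')
    SELECT user_id, COUNT(*) AS clicks
    FROM raw.click_events
    WHERE dt = '2024-01-01'
    GROUP BY user_id
""")
```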
Posted 1 week ago
3.0 - 6.0 years
6 - 10 Lacs
Noida
On-site
Noida/Indore/Bangalore; Bangalore, Karnataka, India; Indore, Madhya Pradesh, India; Gurugram, Haryana, India
Qualification:
OLAP, data engineering, data warehousing, ETL
Hadoop ecosystem, or AWS, Azure or GCP cluster and processing
Experience working with Hive, Spark SQL, Redshift or Snowflake
Experience in writing and troubleshooting SQL or MDX queries
Experience working on Linux
Experience in Microsoft Analysis Services (SSAS) or other OLAP tools
Tableau, MicroStrategy or any other BI tool
Expertise in programming in Python, Java or shell script would be a plus
Skills Required: OLAP, MDX, SQL
Role:
Be the front-line person of the world's most scalable OLAP product company – Kyvos Insights
Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and their problem statements in that area
Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems (see the OLAP-style aggregation sketch after this listing)
Be the go-to person for prospects on technical issues during the POV stage
Be instrumental in reading the pulse of the big data market and defining the roadmap of the product
Lead a few small but highly efficient teams of big data engineers
Report task status efficiently to stakeholders and customers
Good verbal & written communication skills
Be willing to work off-hours to meet timelines
Be willing to travel or relocate as per project requirements
Experience: 3 to 6 years
Job Reference Number: 10350
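For a flavor of the OLAP-style aggregation this role revolves around, a small PySpark sketch: `cube()` computes measures across all combinations of the listed dimensions, much like an OLAP cube's pre-aggregations. The fact table and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("olap_cube").getOrCreate()

# Hypothetical sales fact table.
sales = spark.read.parquet("s3://example-bucket/sales/")

# cube() yields aggregates for (region, product), (region), (product),
# and the grand total - the grouping sets behind a simple OLAP cube.
cube = sales.cube("region", "product").agg(F.sum("amount").alias("total_amount"))
cube.show()
```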
Posted 1 week ago
6.0 - 8.0 years
6 - 7 Lacs
Noida
On-site
Noida, Uttar Pradesh, India; Gurgaon, Haryana, India; Hyderabad, Telangana, India; Bangalore, Karnataka, India; Indore, Madhya Pradesh, India
Qualification:
6-8 years of strong hands-on experience with Big Data technologies – pySpark (DataFrame and SparkSQL), Hadoop, and Hive
Good hands-on experience with Python and Bash scripting
Good understanding of SQL and data warehouse concepts
Strong analytical, problem-solving, data analysis and research skills
Demonstrable ability to think outside the box and not be dependent on readily available tools
Excellent communication, presentation and interpersonal skills are a must
Hands-on experience with cloud-platform Big Data technologies (e.g., IAM, Glue, EMR, Redshift, S3, Kinesis)
Orchestration with Airflow (see the DAG sketch after this listing); experience with any job scheduler
Experience migrating workloads from on-premise to cloud, and cloud-to-cloud migrations
Skills Required: Python, pyspark, SQL
Role:
Develop efficient ETL pipelines per business requirements, following development standards and best practices
Perform integration testing of the created pipelines in the AWS environment
Provide estimates for development, testing and deployment across environments
Participate in peer code reviews to ensure our applications comply with best practices
Create cost-effective AWS pipelines using the required AWS services, e.g., S3, IAM, Glue, EMR, Redshift
Experience: 6 to 8 years
Job Reference Number: 13024
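A minimal sketch of the Airflow orchestration mentioned in the qualifications, assuming Airflow 2.x; the DAG id, schedule, and spark-submit target are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# One daily DAG that submits a Spark ETL job (all names hypothetical).
with DAG(
    dag_id="orders_etl_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_etl = BashOperator(
        task_id="spark_submit_etl",
        bash_command=(
            "spark-submit --deploy-mode cluster "
            "s3://example-bucket/jobs/orders_etl.py"
        ),
    )
```

In practice the BashOperator might be replaced by an EMR or Glue operator from the AWS provider package, depending on where the job runs.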
Posted 1 week ago
12.0 years
5 - 6 Lacs
Indore
On-site
Indore, Madhya Pradesh, India
Qualification:
BTech degree in computer science, engineering or a related field of study, or 12+ years of related work experience
7+ years of design & implementation experience with large-scale data-centric distributed applications
Professional experience architecting and operating cloud-based solutions, with a good understanding of core disciplines like compute, networking, storage, security, databases, etc.
Good understanding of data engineering concepts like storage, governance, cataloging, data quality, data modeling, etc.
Good understanding of various architecture patterns like data lake, data lakehouse, data mesh, etc.
Good understanding of data warehousing concepts; hands-on experience working with tools like Hive, Redshift, Snowflake, Teradata, etc.
Experience migrating or transforming legacy customer solutions to the cloud
Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, DataZone, etc. (see the Glue sketch after this listing)
Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, HBase, etc., and other competent tools and technologies
Understanding of designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, Rekognition, etc., in combination with SageMaker is good to have
Experience working with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, infrastructure-as-code, and more
Experience with a programming or scripting language – Python/Java/Scala
AWS Professional/Specialty certification or relevant cloud expertise
Skills Required: AWS, Big Data, Spark, Technical Architecture
Role:
Drive innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries
Capable of leading a technology team, inculcating an innovative mindset, and enabling fast-paced delivery
Able to adapt to new technologies, learn quickly, and manage high ambiguity
Ability to work with business stakeholders and attend/drive various architectural, design and status calls with multiple stakeholders
Exhibit good presentation skills with a high degree of comfort speaking with executives, IT management, and developers
Drive technology/software sales or pre-sales consulting discussions
Ensure end-to-end ownership of all assigned tasks
Ensure high-quality software development with complete documentation and traceability
Fulfil organizational responsibilities (sharing knowledge & experience with other teams/groups)
Conduct technical trainings/sessions, write whitepapers/case studies/blogs, etc.
Experience: 10 to 18 years
Job Reference Number: 12895
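As a hedged sketch of the hands-on AWS Glue experience listed above, a boto3 snippet that starts a Glue job run and checks its state; the job name, region, and arguments are hypothetical:

```python
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

# Kick off a (hypothetical) Glue ETL job with a runtime argument.
run = glue.start_job_run(
    JobName="curate-orders",
    Arguments={"--input_path": "s3://example-raw-bucket/orders/"},
)

# Poll the run once; real code would loop with a backoff until a terminal state.
status = glue.get_job_run(JobName="curate-orders", RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])
```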
Posted 1 week ago
2.0 - 3.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Job Description – Digital Transformation and Automation Lead
About the Role
- Drive the digital backbone of a growing commercial real-estate group.
- You'll prototype, test and ship automations that save our teams > 10 hours/week in the first 90 days.
Total Experience: 2-3 years
Availability: ~40 hrs/week, 4 days on-site, 1 day remote
Core Responsibilities
1. Systems Audit & Consolidation – unify Google Workspace tenants, rationalise shared drives.
2. Database & CRM Build-out – design, deploy, and maintain an occupant tracker and a lightweight CRM; migrate legacy data.
3. Automation & Integration – link CRM, Google Sheets, and Tally using Apps Script/Zoho Flow/Zapier (see the webhook sketch after this listing).
4. Process Documentation – own the internal wiki; keep SOPs and RACI charts current.
5. Dashboards & Reporting – craft Looker Studio boards for collections, projects, and facility KPIs.
6. User Training & Support – deliver monthly clinics; teach teams how to use G Suite and ChatGPT to improve productivity.
7. Security & Compliance – enforce 2FA, backup policies, and basic network hygiene.
8. Vendor Co-ordination – liaise with Zoho, Tally consultants, ISP/MSP vendors; manage small capex items.
Required Skills & Experience (Core = must-have, Bonus = nice-to-have)
Workspace & Security: LAN/Wi-Fi basics & device hardening (Core)
Automation & Low-Code: Apps Script or Zoho Creator/Flow; REST APIs & webhooks (Core); workflow bridges (Zapier/Make/n8n) (Core); Cursor, Loveable, or similar AI-driven low-code tools (Bonus)
Data Extraction & Integrations: Document AI/OCR stack for PDF leases (Google DocAI, Textract, etc.) (Core); Tally Prime ODBC/API (Core)
CRM & Customer-360: end-to-end rollout of a CRM (Zoho/Freshsales), including migration and custom modules (Core); help-desk tooling (Zoho Desk, Freshdesk) (Bonus)
Analytics & Reporting: advanced Google Sheets (ARRAYFORMULA, QUERY, IMPORTRANGE) and Looker Studio dashboards (Core); data-warehouse concepts (BigQuery/Redshift) for a unified customer view (Bonus)
Programming & Scripting: Python or Node.js for lightweight cloud functions/ETL (Core); prompt engineering & Gen-AI APIs (OpenAI, Claude) for copilots (Core)
Project & Knowledge Management: Trello or an equivalent Kanban tool (Bonus); Notion/Google Sites for wiki & SOPs (Core)
Soft Skills: clear documentation & bilingual (English/Hindi) training; stakeholder comms (Core)
Compensation: 40-50k per month
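To illustrate the "workflow bridge" pattern in the skills list above, a minimal Python snippet that pushes a CRM-style event to a Zapier/Make-style catch webhook; the URL and payload fields are hypothetical:

```python
import requests

# Hypothetical catch-hook URL, e.g. one generated by Zapier or Make.
WEBHOOK_URL = "https://hooks.example.com/catch/12345/occupant-updated"

# Hypothetical occupant-tracker event.
payload = {
    "unit": "A-101",
    "occupant": "Acme Traders",
    "lease_end": "2026-03-31",
}

resp = requests.post(WEBHOOK_URL, json=payload, timeout=10)
resp.raise_for_status()  # fail loudly so the automation can alert on errors
```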
Posted 1 week ago
9.0 years
0 Lacs
Andhra Pradesh
On-site
Data Engineer
Must have 9+ years of experience in the skills mentioned below.
Must have: Big Data concepts, Python (core Python – able to write code), SQL, shell scripting, AWS S3
Good to have: event-driven/AWS SQS, microservices, API development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora
About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a global team of 27,000 people that cares about your growth – one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us.
Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.
Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
“When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that's what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product” – Jeff Bezos
Amazon.com's success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day, like clockwork? To make that happen, behind those millions of packages, billions of decisions get made by machines and humans. What is the accuracy of the customer-provided address? Do we know the exact location of the address on the map? Is there a safe place? Can we make an unattended delivery? Would a signature be required? Is the address a commercial property? Do we know the open business hours of the address? What if the customer is not home? Is there an alternate delivery address? Does the customer have any special preference? What other addresses also have packages to be delivered on the same day? Are we optimizing the delivery associate's route? Does the delivery associate know the locality well enough? Is there an access code to get inside the building? And the list simply goes on. At the core of all of it lies the quality of the underlying data that can help make those decisions in time.
The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both the stamina and mental quickness to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and the executive presence to get in front of VPs and SVPs across Amazon will be imperative.
Key Strategic Objectives: Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer on Amazon's last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work extremely effectively in a matrix organization, and have the ability to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large-scale, high-visibility, high-impact projects. You will lead by example and be just as passionate about operational performance and predictability as about all other aspects of the customer experience.
The Successful Candidate Will Be Able To
Effectively manage customer expectations and resolve conflicts that balance client and company needs.
Develop processes to effectively maintain and disseminate project information to stakeholders.
Be successful in a delivery-focused environment, determining the right processes to make the team successful. This opportunity requires excellent technical, problem-solving, and communication skills. The candidate is not just a policy maker/spokesperson but drives to get things done.
Possess superior analytical abilities and judgment. Use quantitative and qualitative data to prioritize and influence, show creativity, experimentation and innovation, and drive projects with urgency in this fast-paced environment.
Partner with key stakeholders to develop the vision and strategy for the customer experience on our platforms. Influence product roadmaps based on this strategy along with your teams.
Support the scalable growth of the company by developing and enabling the success of the Operations leadership team.
Serve as a role model for Amazon Leadership Principles inside and outside the organization.
Actively seek to implement and distribute best practices across the operation.
Key job responsibilities
Metric reporting, deep-dive analysis, insight generation, sizing and solving ambiguous problems, A/B testing and measurement (see the sketch after this listing), ETL, automation, stakeholder communication, etc.
A day in the life
Customer-address-related BI analytics, leveraging big data technologies to build impactful and scalable product features for Amazon's worldwide last mile delivery needs.
Basic Qualifications
Bachelor's degree in math/statistics/engineering or another equivalent quantitative discipline
2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, PowerBI, Quicksight, or similar tools
Experience performing A/B testing and applying basic statistical methods (e.g., regression) to difficult business problems
Experience with a scripting language (e.g., Python, Java, or R)
Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
Track record of generating key business insights based on deep dives and collaborating with stakeholders
Preferred Qualifications
Ready to join within 30 days is preferred
Experience in designing and implementing custom reporting systems using automation tools
Knowledge of data modeling and data pipeline design
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company - ADCI HYD 13 SEZ
Job ID: A2985571
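As a hedged illustration of the A/B testing and basic statistics this role calls for, a two-proportion z-test sketch; the conversion counts are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical experiment results: conversions / samples per variant.
conv_a, n_a = 412, 10_000   # control
conv_b, n_b = 468, 10_000   # treatment

# Pooled two-proportion z-test.
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (conv_b / n_b - conv_a / n_a) / se
p_value = 2 * (1 - stats.norm.cdf(abs(z)))

print(f"z = {z:.2f}, p = {p_value:.4f}")
```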
Posted 1 week ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
The AOP FC Analytics team manages a suite of MIS reports published at regular frequencies and productivity tools that bridge current software gaps, and serves all analytical needs of the leadership team with data & analysis. The ideal candidate relishes working with large volumes of data, enjoys the challenge of highly complex business contexts, and, above all else, is passionate about data and analytics. The candidate is an expert with business intelligence tools and passionately partners with the business to identify strategic opportunities where data-backed insights drive value creation. An effective communicator, the candidate crisply translates analysis results into executive-facing business terms. The candidate works aptly with internal and external teams to push projects across the finish line. The candidate is a self-starter, comfortable with ambiguity, able to think big (while paying careful attention to detail), and enjoys working in a fast-paced and global team.
Key job responsibilities
Interfacing with business customers, gathering requirements and delivering complete BI solutions to drive insights and inform product, operations, and marketing decisions.
Interfacing with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL (Redshift, Oracle), with the ability to use a programming and/or scripting language to process data for modeling (see the sketch after this listing).
Evolving organization-wide self-service platforms.
Building metrics to analyze key inputs to forecasting systems.
Recognizing and adopting best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
A day in the life
Solve analyses with well-defined inputs and outputs; drive to the heart of the problem and identify root causes
Handle large data sets in analysis
Derive recommendations from analysis
Understand the basics of test-and-control comparison; may provide insights through basic statistical measures such as hypothesis testing
Communicate analytical insights effectively
About The Team
The AOP (Analytics Operations and Programs) team is missioned to standardize BI and analytics capabilities and reduce repeat analytics/reporting/BI workload for operations across the IN, AU, BR, MX, SG, AE, EG and SA marketplaces. AOP is responsible for providing visibility on operations performance and implementing programs to improve network efficiency and defect reduction. The team has a diverse mix of strong engineers, analysts and scientists who champion customer obsession. We enable operations to make data-driven decisions through developing near-real-time dashboards, self-serve dive-deep capabilities and building advanced analytics capabilities. We identify and implement data-driven metric improvement programs in collaboration (co-owning) with Operations teams.
Basic Qualifications
1+ years of tax, finance or a related analytical field experience
2+ years of experience writing complex Excel VBA macros
Bachelor's degree or equivalent
Experience defining requirements and using data and metrics to draw business insights
Experience with SQL or ETL
Preferred Qualifications
Experience working with Tableau
Experience using very large datasets
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company - Amazon Dev Center India - Hyderabad
Job ID: A3004510
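For illustration of the SQL-plus-scripting combination in the responsibilities above, a minimal sketch that queries Redshift from Python using Amazon's redshift_connector driver; the cluster endpoint, credentials, and table names are hypothetical:

```python
import redshift_connector  # assumes the redshift_connector package is installed

# Hypothetical cluster and credentials; real code would use IAM auth or a secrets store.
conn = redshift_connector.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    database="analytics",
    user="bi_user",
    password="example-password",
)

cursor = conn.cursor()
cursor.execute(
    "SELECT ship_day, COUNT(*) AS shipments "
    "FROM ops.shipments GROUP BY ship_day ORDER BY ship_day"
)
for ship_day, shipments in cursor.fetchall():
    print(ship_day, shipments)

conn.close()
```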
Posted 1 week ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
Responsibilities
Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases
Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS
Experienced in developing efficient software code for multiple use cases, leveraging the Spark framework with Python or Scala and Big Data technologies, for various use cases built on the platform
Experience in developing streaming pipelines (see the sketch after this listing)
Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, cloud computing, etc.
Preferred Education
Master's Degree
Required Technical And Professional Expertise
Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala
Minimum 3 years of experience on Cloud Data Platforms on AWS
Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
Good to excellent SQL skills
Exposure to streaming solutions and message brokers like Kafka
Preferred Technical And Professional Experience
Certification in AWS and Databricks, or Cloudera Certified Spark Developer
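A hedged sketch of the streaming pipelines this posting mentions, using PySpark Structured Streaming to read from Kafka and land files on S3; it assumes the spark-sql-kafka connector package is available, and the broker, topic, and paths are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events_stream").getOrCreate()

# Read a (hypothetical) Kafka topic as a streaming source.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load())

# Kafka delivers bytes; cast the value column for downstream parsing.
parsed = events.selectExpr("CAST(value AS STRING) AS body", "timestamp")

# Continuously append Parquet files; the checkpoint enables recovery on restart.
query = (parsed.writeStream
         .format("parquet")
         .option("path", "s3://example-bucket/stream/orders/")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
         .start())

query.awaitTermination()
```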
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Design and develop end-to-end Master Data Management (MDM) solutions using Informatica MDM Cloud Edition or on-prem/hybrid setups.
Implement match & merge rules, survivorship, hierarchy management, and data stewardship workflows.
Configure landing, staging, base objects, mappings, cleanse functions, match rules, and trust/survivorship rules.
Integrate MDM with cloud data lakes/warehouses (e.g., Snowflake, Redshift, Synapse) and business applications.
Design batch and real-time integration using Informatica Cloud (IICS), APIs, or messaging platforms.
Work closely with data architects and business analysts to define MDM data domains (e.g., Customer, Product, Vendor).
Ensure data governance, quality, lineage, and compliance standards are followed.
Provide production support and enhancements to existing MDM solutions.
Create and maintain technical documentation, test cases, and deployment artifacts.
Posted 1 week ago
The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Amazon Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with Redshift expertise can find a wealth of opportunities across industries throughout the country.
The average salary for Redshift professionals in India varies with experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.
In the field of Redshift, a typical career path may include roles such as:
- Junior Developer
- Data Engineer
- Senior Data Engineer
- Tech Lead
- Data Architect
Apart from expertise in Redshift, proficiency in the following skills can be beneficial:
- SQL
- ETL tools
- Data modeling
- Cloud computing (AWS)
- Python/R programming
As demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!