Home
Jobs

1869 Redshift Jobs - Page 25

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu

Remote

Indeed logo

Senior ETL Developer (Talend + PostgreSQL) – Immediate Joiner Preferred. Experience: 5-8 years. Project: US client-based project. Remote or hybrid. We are looking for an experienced and proactive ETL Developer with 5–8 years of hands-on experience in Talend and PostgreSQL, who can contribute individually and guide a team in managing and optimizing data workflows. The ideal candidate will also support and collaborate with peers using a tool referred to as Quilt or Quilt Talend. Key Responsibilities: ETL Development: Design, build, and maintain scalable ETL pipelines using Talend. Data Integration: Seamlessly integrate structured and unstructured data from diverse sources. PostgreSQL Expertise: Strong experience in PostgreSQL for data warehousing, performance tuning, indexing, and large dataset operations. Team Guidance: Act as a technical lead to guide junior developers and ensure best practices in ETL processes. Tool Expertise (Quilt Talend or Quilt Tool): Support team members in using Quilt, a platform used to manage and version data for ML and analytics pipelines. Linux & Scripting: Write automation scripts in Linux for batch processing and monitoring. AWS Cloud Integration: Experience integrating Talend with AWS services such as S3, RDS (PostgreSQL), Glue, or Redshift. Troubleshooting: Proactively identify bottlenecks or issues in ETL jobs and ensure data accuracy and uptime. Collaboration: Work closely with data analysts, scientists, and stakeholders to deliver end-to-end solutions. Must-Have Skills: Strong knowledge of Talend (Open Studio / Data Integration / Big Data Edition). 3+ years of hands-on experience with PostgreSQL. Familiarity with the Quilt data tool (https://quiltdata.com/) or similar data versioning tools. Solid understanding of cloud ETL environments, especially AWS. Strong communication and leadership skills. Nice-to-Have: Familiarity with Oracle for legacy systems. Knowledge of data governance and security best practices. Experience integrating Talend with APIs or external services. Additional Info: Location: Chennai, Madurai (Tamil Nadu) / Remote / Hybrid. Joining: Immediate joiners preferred. Job Type: Full-time / Contract. Job Types: Full-time, Contractual / Temporary. Pay: ₹700,000.00 - ₹1,200,000.00 per year. Benefits: Work from home. Schedule: Evening shift, Monday to Friday, Rotational shift, US shift, Weekend availability. Application Question(s): Are you willing to work hybrid from Chennai or Madurai? Experience: EDL: 4 years (Required). Location: Chennai, Tamil Nadu (Preferred). Shift availability: Night Shift (Required), Overnight Shift (Required). Work Location: In person
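To illustrate the PostgreSQL batch-loading and Linux scripting side of this role, here is a minimal, hypothetical Python sketch; the connection settings, table, file path, and index are placeholders, and Talend itself (a GUI tool) is not shown:

```python
# Hypothetical sketch: bulk-load a CSV extract into PostgreSQL and add an index.
# Connection details, table name, and file path are illustrative placeholders.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="warehouse",
                        user="etl_user", password="change_me")
try:
    with conn, conn.cursor() as cur:
        # Stage the raw file with COPY, which is far faster than row-by-row INSERTs
        with open("daily_orders.csv") as f:
            cur.copy_expert(
                "COPY staging.orders (order_id, customer_id, amount, order_date) "
                "FROM STDIN WITH (FORMAT csv, HEADER true)", f)
        # Basic tuning step: index the column used by downstream joins
        cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer "
                    "ON staging.orders (customer_id)")
finally:
    conn.close()
```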

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

About The Role Grade Level (for internal use): 10 The Team As a member of the Data Transformation team you will work on building ML powered products and capabilities to power natural language understanding, data extraction, information retrieval and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading-by-example in a highly engaging work environment. You will work in a (truly) global team and encouraged for thoughtful risk-taking and self-initiative. The Impact The Data Transformation team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will be developing our next generation of new products while enhancing existing ones aiming at solving high-impact business problems. What’s In It For You Be a part of a global company and build solutions at enterprise scale Collaborate with a highly skilled and technically strong team Contribute to solving high complexity, high impact problems Key Responsibilities Build production ready data acquisition and transformation pipelines from ideation to deployment Being a hands-on problem solver and developer helping to extend and manage the data platforms Apply best practices in data modeling and building ETL pipelines (streaming and batch) using cloud-native solutions What We’re Looking For 3-5 years of professional software work experience Expertise in Python and Apache Spark OOP Design patterns, Test-Driven Development and Enterprise System design Experience building data processing workflows and APIs using frameworks such as FastAPI, Flask etc. Proficiency in API integration, experience working with REST APIs and integrating external & internal data sources SQL (any variant, bonus if this is a big data variant) Linux OS (e.g. bash toolset and other utilities) Version control system experience with Git, GitHub, or Azure DevOps. Problem-solving and debugging skills Software craftsmanship, adherence to Agile principles and taking pride in writing good code Techniques to communicate change to non-technical people Nice to have Core Java 17+, preferably Java 21+, and associated toolchain DevOps with a keen interest in automation Apache Avro Apache Kafka Kubernetes Cloud expertise (AWS and GCP preferably) Other JVM based languages - e.g. Kotlin, Scala C# - in particular .NET Core Data warehouses (e.g., Redshift, Snowflake, BigQuery) What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. 
We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 315685 Posted On: 2025-05-20 Location: Gurgaon, Haryana, India Show more Show less

Posted 1 week ago

Apply

9.0 years

6 - 7 Lacs

Chennai

On-site

GlassDoor logo

Total 9 years of experience, with a minimum of 5 years working as a DBT administrator. DBT Core & Cloud: Manage DBT projects, models, tests, snapshots, and deployments in both DBT Core and DBT Cloud. Administer and manage DBT Cloud environments, including users, permissions, job scheduling, and Git integration. Onboard and enable DBT users on the DBT Cloud platform; work closely with users to support DBT adoption and usage. SQL & Warehousing: Write optimized SQL and work with data warehouses like Snowflake, BigQuery, Redshift, or Databricks. Cloud Platforms: Use AWS, GCP, or Azure for data storage (e.g., S3, GCS), compute, and resource management. Orchestration Tools: Automate DBT runs using Airflow, Prefect, or DBT Cloud job scheduling. Version Control & CI/CD: Integrate DBT with Git and manage CI/CD pipelines for model promotion and testing. Monitoring & Logging: Track job performance and errors using tools like dbt-artifacts, Datadog, or cloud-native logging. Access & Security: Configure IAM roles, secrets, and permissions for secure DBT and data warehouse access. Documentation & Collaboration: Maintain model documentation using dbt docs and collaborate with data teams. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
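As a rough illustration of the "Monitoring & Logging" duty above, the hedged Python sketch below parses dbt's run_results.json artifact to surface failed and slow models; the artifact path and the 60-second threshold are assumptions, and the exact schema can vary between dbt versions:

```python
# Illustrative sketch: summarize a dbt run from its run_results.json artifact.
# Path, field names, and the slowness threshold are assumptions for a recent dbt Core.
import json

with open("target/run_results.json") as f:
    run_results = json.load(f)

failed, slow = [], []
for result in run_results.get("results", []):
    node = result.get("unique_id", "unknown")
    status = result.get("status")
    elapsed = result.get("execution_time", 0.0)
    if status in ("error", "fail"):
        failed.append((node, status))
    if elapsed > 60:          # assumed threshold: flag models slower than 60s
        slow.append((node, round(elapsed, 1)))

print(f"{len(failed)} failed nodes:")
for node, status in failed:
    print(f"  {node}: {status}")
print(f"{len(slow)} slow nodes (>60s):")
for node, secs in slow:
    print(f"  {node}: {secs}s")
```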

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Noida

On-site

GlassDoor logo

Noida, Uttar Pradesh, India;Indore, Madhya Pradesh, India;Bangalore, Karnataka, India;Hyderabad, Telangana, India;Gurgaon, Haryana, India Qualification : Required: Proven hands-on experience in designing, developing and supporting database projects for analysis in a demanding environment. Proficient in database design techniques – relational and dimensional designs. Experience and a strong understanding of the business analysis techniques used. High proficiency in the use of SQL or MDX queries. Ability to manage multiple maintenance, enhancement and project related tasks. Ability to work independently on multiple assignments and to work collaboratively within a team is required. Strong communication skills with both internal team members and external business stakeholders. Added Advantage: Hadoop ecosystem or AWS, Azure or GCP cluster and processing. Experience working on Hive, Spark SQL, Redshift or Snowflake will be an added advantage. Experience of working on Linux systems. Experience with Tableau, MicroStrategy, Power BI or any BI tools will be an added advantage. Expertise in programming in Python, Java or Shell Script would be a plus. Role : Roles & Responsibilities: Be the front-end person of the world’s most scalable OLAP product company – Kyvos Insights. Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and their problem statements in that area. Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems. Be the go-to person for customers regarding technical issues during the project. Be instrumental in reading the pulse of the big data market and defining the roadmap of the product. Lead a few small but highly efficient teams of Big Data engineers. Efficient task status reporting to stakeholders and customers. Good verbal & written communication skills. Be willing to work off hours to meet timelines. Be willing to travel or relocate as per project requirements. Experience : 5 to 10 years Job Reference Number : 11078

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 7 Lacs

Noida

On-site

GlassDoor logo

Noida, Uttar Pradesh, India;Gurgaon, Haryana, India;Hyderabad, Telangana, India;Pune, Maharashtra, India;Indore, Madhya Pradesh, India;Bangalore, Karnataka, India Qualification : Description: Strong hands-on experience in Python. Good experience with Spark/Spark Structured Streaming. Experience of working on MSK (Kafka) and Kinesis. Ability to design, build and unit test applications on the Spark framework in Python. Exposure to AWS cloud services such as Glue/EMR, RDS, SNS, SQS, Lambda, Redshift etc. Good experience writing SQL queries. Strong technical development experience in effectively writing code, code reviews, and best practices. Ability to solve complex data-driven scenarios and triage defects and production issues. Ability to learn-unlearn-relearn concepts with an open and analytical mindset. Skills Required : Pyspark, SQL Role : Work closely with business and product management teams to develop and implement analytics solutions. Collaborate with engineers & architects to implement and deploy scalable solutions. Actively drive a culture of knowledge-building and sharing within the team. Able to quickly adapt and learn. Able to jump into an ambiguous situation and take the lead on resolution. Good To Have: Experience of working on MSK (Kafka), Amazon Elastic Kubernetes Service and Docker. Exposure to GitHub Actions, Argo CD, Argo Workflows. Experience of working on Databricks. Experience : 4 to 6 years Job Reference Number : 12555
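A minimal, hypothetical PySpark Structured Streaming sketch of the kind of pipeline this role describes, reading JSON events from an MSK/Kafka topic and landing them on S3 as Parquet; the broker address, topic, schema, and S3 paths are placeholders, not values from the posting:

```python
# Illustrative sketch only: stream JSON events from Kafka (MSK) to Parquet on S3.
# Broker list, topic name, schema, and S3 paths are assumed placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

schema = (StructType()
          .add("order_id", StringType())
          .add("amount", DoubleType())
          .add("event_time", TimestampType()))

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "b-1.example-msk:9092")
       .option("subscribe", "orders")
       .option("startingOffsets", "latest")
       .load())

# Kafka delivers the payload as bytes; cast to string and parse the JSON body
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "s3://example-bucket/curated/orders/")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
         .outputMode("append")
         .start())
query.awaitTermination()
```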

Posted 1 week ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Noida

On-site

GlassDoor logo

Noida, Uttar Pradesh, India;Gurgaon, Haryana, India;Hyderabad, Telangana, India;Indore, Madhya Pradesh, India;Bangalore, Karnataka, India Qualification : 5-7 years of hands-on exposure to Big Data technologies – PySpark (DataFrame and Spark SQL), Hadoop, and Hive. Good hands-on experience with Python and Bash scripts. Good understanding of SQL and data warehouse concepts. Strong analytical, problem-solving, data analysis and research skills. Demonstrable ability to think outside of the box and not be dependent on readily available tools. Excellent communication, presentation and interpersonal skills are a must. Good to have: Hands-on experience with cloud-platform-provided Big Data technologies (e.g., IAM, Glue, EMR, Redshift, S3, Kinesis). Orchestration with Airflow or any job scheduler experience. Experience in migrating workloads from on-premise to cloud and cloud-to-cloud migrations. Skills Required : Python, Pyspark, AWS Role : Develop efficient ETL pipelines as per business requirements, following the development standards and best practices. Perform integration testing of the pipelines created in the AWS environment. Provide estimates for development, testing & deployments on different environments. Participate in code peer reviews to ensure our applications comply with best practices. Create cost-effective AWS pipelines with the required AWS services, e.g., S3, IAM, Glue, EMR, Redshift etc. Experience : 8 to 10 years Job Reference Number : 13025
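As a hedged illustration of the ETL-pipeline responsibility above, this minimal PySpark batch job reads raw CSV from S3, applies a simple cleanup, and writes partitioned Parquet back to S3, ready for a Redshift Spectrum table or a COPY load; the bucket names, columns, and partitioning choices are assumptions:

```python
# Illustrative batch ETL sketch: raw CSV on S3 -> cleaned, partitioned Parquet on S3.
# Bucket names, column names, and the dedup/partition choices are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("daily-orders-etl").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("s3://example-raw-bucket/orders/2024-06-01/"))

cleaned = (raw
           .withColumn("amount", col("amount").cast("double"))
           .withColumn("order_date", to_date(col("order_ts")))
           .filter(col("order_id").isNotNull())
           .dropDuplicates(["order_id"]))

# Partitioned Parquet keeps downstream warehouse loads and scans cheap
(cleaned.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-curated-bucket/orders/"))
```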

Posted 1 week ago

Apply

14.0 years

4 - 8 Lacs

Noida

On-site

GlassDoor logo

Noida, Uttar Pradesh, India;Indore, Madhya Pradesh, India Qualification : We are seeking a highly experienced and dynamic Technical Project Manager to lead and manage our service engagements. The candidate will possess a strong technical grounding, exceptional project management skills, and a proven track record of successfully delivering large-scale IT projects. You will be responsible for leading cross-functional teams, managing client relationships, and ensuring projects are delivered on time, within budget, and to the highest quality standards. 14+ years of experience managing and implementing high-end software products, combined with technical knowledge in the Business Intelligence (BI) and Data Engineering domains. 5+ years of experience in project management with strong leadership and team management skills. Hands-on with project management tools (e.g., Jira, Rally, MS Project) and strong expertise in Agile methodologies (certifications such as SAFe, CSM, PMP or PMI-ACP are a plus). Well versed with tracking project performance using appropriate metrics, tools and processes to successfully meet short/long term goals. Rich experience interacting with clients, translating business needs into technical requirements, and delivering customer-focused solutions. Exceptional verbal and written communication skills, with the ability to present complex concepts to technical and non-technical stakeholders alike. Strong understanding of BI concepts (reporting, analytics, data warehousing, ETL) leveraging expertise in tools such as Tableau, Power BI, Looker, etc. Knowledge of data modeling, database design, and data governance principles. Proficiency in Data Engineering technologies (e.g., SQL, Python, cloud-based data solutions/platforms like AWS Redshift, Google BigQuery, Azure Synapse, Snowflake, Databricks) is a plus. Skills Required : SAP BO, MicroStrategy, OBIEE, Tableau, Power BI Role : This is a multi-dimensional and multi-functional role. You will need to be comfortable reporting program status to executives, as well as diving deep into technical discussions with internal engineering teams and external partners. Act as the primary point of contact for stakeholders and customers, gathering requirements, managing expectations, and delivering regular updates on project progress. Manage and mentor cross-functional teams, fostering collaboration and ensuring high performance while meeting project milestones. Drive Agile practices (e.g., Scrum, Kanban) to ensure iterative delivery, adaptability, and continuous improvement throughout the project lifecycle. Identify, assess, and mitigate project risks, ensuring timely resolution of issues and adherence to quality standards. Maintain comprehensive project documentation, including status reports, roadmaps, and post-mortem analyses, to ensure transparency and accountability. Define the project and delivery plan, including defining scope, timelines, budgets, and deliverables for each assignment. Capable of doing resource allocations as per the requirements for each assignment. Experience : 14 to 18 years Job Reference Number : 12929

Posted 1 week ago

Apply

8.0 - 12.0 years

6 - 7 Lacs

Noida

On-site

GlassDoor logo

Noida, Uttar Pradesh, India;Bangalore, Karnataka, India;Gurugram, Haryana, India;Hyderabad, Telangana, India;Indore, Madhya Pradesh, India;Pune, Maharashtra, India Qualification : Do you love to work on bleeding-edge Big Data technologies, do you want to work with the best minds in the industry, and create high-performance scalable solutions? Do you want to be part of the team that is solutioning next-gen data platforms? Then this is the place for you. You want to architect and deliver solutions involving data engineering on a petabyte scale of data that solve complex business problems. Impetus is looking for a Big Data Developer who loves solving complex problems and architecting and delivering scalable solutions across a full spectrum of technologies. Experience in providing technical leadership in the Big Data space (Hadoop stack like Spark, M/R, HDFS, Hive, etc.). Should be able to communicate with the customer on both functional and technical aspects. Expert-level proficiency in Python/Pyspark. Hands-on experience with Shell/Bash scripting (creating and modifying scripting files). Control-M, AutoSys, or any job scheduler experience. Experience in visualizing and evangelizing next-generation infrastructure in the Big Data space (batch, near real-time, real-time technologies). Should be able to guide the team on any functional and technical issues. Strong technical development experience in effectively writing code, code reviews, best practices and code refactoring. Passionate about continuous learning, experimenting with, applying and contributing to cutting-edge open-source technologies and software paradigms. Good communication, problem-solving & interpersonal skills. Self-starter & resourceful personality with the ability to manage pressure situations. Capable of providing the design and architecture for typical business problems. Exposure and awareness of the complete PDLC/SDLC. Out-of-the-box thinker and not just limited to the work done in the projects. Must Have: Experience with AWS (EMR, Glue, S3, RDS, Redshift). Cloud certification. Skills Required : AWS, Pyspark, Spark Role : Evaluate and recommend the Big Data technology stack best suited for customer needs. Design/architect/implement various solutions arising out of high-concurrency systems. Responsible for timely and quality deliveries. Anticipate technological evolutions. Ensure the technical directions and choices. Develop efficient ETL pipelines through Spark or Hive. Drive significant technology initiatives end to end and across multiple layers of architecture. Provide strong technical leadership in adopting and contributing to open-source technologies related to Big Data across multiple engagements. Design/architect complex, highly available, distributed, failsafe compute systems dealing with a considerable amount (GB/TB) of data. Identify and work on incorporating non-functional requirements into the solution (performance, scalability, monitoring etc.). Experience : 8 to 12 years Job Reference Number : 12400
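One concrete flavour of the EMR work mentioned above is submitting a Spark job as an EMR step. The sketch below is a hedged example using boto3; the cluster ID, S3 script location, and job arguments are placeholders, not values from this posting:

```python
# Illustrative sketch: submit a spark-submit step to an existing EMR cluster via boto3.
# The cluster ID, S3 script path, and job arguments are assumed placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.add_job_flow_steps(
    JobFlowId="j-EXAMPLE123",          # existing EMR cluster ID (placeholder)
    Steps=[{
        "Name": "daily-orders-etl",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": [
                "spark-submit",
                "--deploy-mode", "cluster",
                "s3://example-bucket/jobs/daily_orders_etl.py",
                "--run-date", "2024-06-01",
            ],
        },
    }],
)
print("Submitted step:", response["StepIds"][0])
```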

Posted 1 week ago

Apply

3.0 - 6.0 years

6 - 10 Lacs

Noida

On-site

GlassDoor logo

Noida/ Indore/ Bangalore;Bangalore, Karnataka, India;Indore, Madhya Pradesh, India;Gurugram, Haryana, India Qualification : OLAP, Data Engineering, Data Warehousing, ETL. Hadoop ecosystem or AWS, Azure or GCP cluster and processing. Experience working on Hive, Spark SQL, Redshift or Snowflake. Experience in writing and troubleshooting SQL programming or MDX queries. Experience of working on Linux. Experience in Microsoft Analysis Services (SSAS) or OLAP tools. Tableau, MicroStrategy or any BI tools. Expertise in programming in Python, Java or Shell Script would be a plus. Skills Required : OLAP, MDX, SQL Role : Be the front-end person of the world’s most scalable OLAP product company – Kyvos Insights. Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and their problem statements in that area. Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems. Be the go-to person for prospects regarding technical issues during the POV stage. Be instrumental in reading the pulse of the big data market and defining the roadmap of the product. Lead a few small but highly efficient teams of Big Data engineers. Efficient task status reporting to stakeholders and customers. Good verbal & written communication skills. Be willing to work off hours to meet timelines. Be willing to travel or relocate as per project requirements. Experience : 3 to 6 years Job Reference Number : 10350

Posted 1 week ago

Apply

6.0 - 8.0 years

6 - 7 Lacs

Noida

On-site

GlassDoor logo

Noida, Uttar Pradesh, India;Gurgaon, Haryana, India;Hyderabad, Telangana, India;Bangalore, Karnataka, India;Indore, Madhya Pradesh, India Qualification : 6-8 years of hands-on exposure to Big Data technologies – PySpark (DataFrame and Spark SQL), Hadoop, and Hive. Good hands-on experience with Python and Bash scripts. Good understanding of SQL and data warehouse concepts. Strong analytical, problem-solving, data analysis and research skills. Demonstrable ability to think outside of the box and not be dependent on readily available tools. Excellent communication, presentation and interpersonal skills are a must. Hands-on experience with cloud-platform-provided Big Data technologies (e.g., IAM, Glue, EMR, Redshift, S3, Kinesis). Orchestration with Airflow or any job scheduler experience. Experience in migrating workloads from on-premise to cloud and cloud-to-cloud migrations. Skills Required : Python, pyspark, SQL Role : Develop efficient ETL pipelines as per business requirements, following the development standards and best practices. Perform integration testing of the pipelines created in the AWS environment. Provide estimates for development, testing & deployments on different environments. Participate in code peer reviews to ensure our applications comply with best practices. Create cost-effective AWS pipelines with the required AWS services, e.g., S3, IAM, Glue, EMR, Redshift etc. Experience : 6 to 8 years Job Reference Number : 13024
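The orchestration requirement above could look something like the following hedged Airflow sketch, where a daily DAG runs the Spark ETL and then issues a Redshift COPY; the DAG id, script path, connection handling, and SQL are illustrative assumptions rather than this employer's setup:

```python
# Illustrative Airflow sketch: run the Spark ETL, then COPY the Parquet output into Redshift.
# DAG id, paths, connection handling, and SQL are placeholders, not a production setup.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:

    run_spark_etl = BashOperator(
        task_id="run_spark_etl",
        bash_command=(
            "spark-submit --deploy-mode cluster "
            "s3://example-bucket/jobs/daily_orders_etl.py --run-date {{ ds }}"
        ),
    )

    load_redshift = BashOperator(
        task_id="copy_into_redshift",
        bash_command=(
            "psql \"$REDSHIFT_DSN\" -c \"COPY analytics.orders "
            "FROM 's3://example-curated-bucket/orders/' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy' "
            "FORMAT AS PARQUET;\""
        ),
    )

    run_spark_etl >> load_redshift
```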

Posted 1 week ago

Apply

12.0 years

5 - 6 Lacs

Indore

On-site

GlassDoor logo

Indore, Madhya Pradesh, India Qualification : BTech degree in computer science, engineering or a related field of study, or 12+ years of related work experience. 7+ years of design & implementation experience with large-scale, data-centric distributed applications. Professional experience architecting and operating cloud-based solutions with a good understanding of core disciplines like compute, networking, storage, security, databases etc. Good understanding of data engineering concepts like storage, governance, cataloging, data quality, data modeling etc. Good understanding of various architecture patterns like data lake, data lakehouse, data mesh etc. Good understanding of Data Warehousing concepts and hands-on experience working with tools like Hive, Redshift, Snowflake, Teradata etc. Experience migrating or transforming legacy customer solutions to the cloud. Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, DataZone etc. Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, HBase etc. and other competent tools and technologies. Understanding of designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, Rekognition etc. in combination with SageMaker is good to have. Experience working with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, infrastructure-as-code, and more. Experience with a programming or scripting language – Python/Java/Scala. AWS Professional/Specialty certification or relevant cloud expertise. Skills Required : AWS, Big Data, Spark, Technical Architecture Role : Drive innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. Capable of leading a technology team, inculcating an innovative mindset and enabling fast-paced deliveries. Able to adapt to new technologies, learn quickly, and manage high ambiguity. Ability to work with business stakeholders and attend/drive various architectural, design and status calls with multiple stakeholders. Exhibit good presentation skills with a high degree of comfort speaking with executives, IT management, and developers. Drive technology/software sales or pre-sales consulting discussions. Ensure end-to-end ownership of all assigned tasks. Ensure high-quality software development with complete documentation and traceability. Fulfil organizational responsibilities (sharing knowledge & experience with other teams/groups). Conduct technical training(s)/session(s), write whitepapers/case studies/blogs etc. Experience : 10 to 18 years Job Reference Number : 12895

Posted 1 week ago

Apply

2.0 - 3.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Linkedin logo

Job Description – Digital Transformation and Automation Lead
About the Role: Drive the digital backbone of a growing commercial real-estate group. You’ll prototype, test and ship automations that save our teams > 10 hours/week in the first 90 days.
Total Experience: 2-3 years. Availability: ~40 hrs/week, 4 days on-site, 1 day remote.
Core Responsibilities: 1. Systems Audit & Consolidation – unify Google Workspace tenants, rationalise shared drives. 2. Database & CRM Build-out – design, deploy, and maintain an occupant tracker and a lightweight CRM; migrate legacy data. 3. Automation & Integration – link CRM, Google Sheets, and Tally using Apps Script/Zoho Flow/Zapier. 4. Process Documentation – own the internal wiki; keep SOPs and RACI charts current. 5. Dashboards & Reporting – craft Looker Studio boards for collections, projects, facility KPIs. 6. User Training & Support – deliver monthly clinics; teach teams how to use G Suite and ChatGPT to improve productivity. 7. Security & Compliance – enforce 2FA, backup policies, basic network hygiene. 8. Vendor Co-ordination – liaise with Zoho, Tally consultants, ISP/MSP vendors; manage small capex items.
Required Skills & Experience (by domain; Core vs Bonus):
Workspace & Security – LAN/Wi-Fi basics & device hardening (Core).
Automation & Low-Code – Apps Script or Zoho Creator/Flow; REST APIs & webhooks (Core); workflow bridges such as Zapier/Make/n8n (Core); Cursor, Loveable, or similar AI-driven low-code tools (Bonus).
Data Extraction & Integrations – Document AI / OCR stack for PDF leases (Google DocAI, Textract, etc.) (Core); Tally Prime ODBC/API (Core).
CRM & Customer-360 – end-to-end rollout of a CRM (Zoho/Freshsales), including migration and custom modules (Core); help-desk tooling (Zoho Desk, Freshdesk) (Bonus).
Analytics & Reporting – advanced Google Sheets (ARRAYFORMULA, QUERY, IMPORTRANGE) and Looker Studio dashboards (Core); data-warehouse concepts (BigQuery/Redshift) for a unified customer view (Bonus).
Programming & Scripting – Python or Node.js for lightweight cloud functions/ETL (Core); prompt-engineering & Gen-AI APIs (OpenAI, Claude) for copilots (Core).
Project & Knowledge Management – Trello or equivalent Kanban (Bonus); Notion/Google Sites for wiki & SOPs (Core).
Soft Skills – clear documentation & bilingual (English/Hindi) training; stakeholder comms (Core).
Compensation: 40–50k per month
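As a hedged sketch of the "Automation & Integration" responsibility above, the Python below reads rows from a Google Sheet with gspread and forwards unsynced records to a workflow webhook (Zapier/Make style). The sheet key, worksheet name, column names, and webhook URL are all placeholders:

```python
# Illustrative automation sketch: pull rows from a Google Sheet and push them to a webhook.
# Sheet key, worksheet name, field names, and the webhook URL are assumed placeholders.
import gspread
import requests

WEBHOOK_URL = "https://hooks.example.com/catch/abc123"    # e.g. a Zapier/Make catch hook

gc = gspread.service_account(filename="service_account.json")
sheet = gc.open_by_key("1AbCdEfGhIjKlMnOpQrStUvWxYz").worksheet("Occupants")

for row in sheet.get_all_records():                        # list of dicts keyed by header row
    if row.get("synced_to_crm") == "":                     # only forward rows not yet synced
        payload = {
            "name": row.get("tenant_name"),
            "unit": row.get("unit_no"),
            "monthly_rent": row.get("rent"),
        }
        resp = requests.post(WEBHOOK_URL, json=payload, timeout=10)
        resp.raise_for_status()
```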

Posted 1 week ago

Apply

9.0 years

0 Lacs

Andhra Pradesh

On-site

GlassDoor logo

Data Engineer: Must have 9+ years of experience in the below-mentioned skills. Must Have: Big Data concepts, Python (Core Python – able to write code), SQL, Shell Scripting, AWS S3. Good to Have: Event-driven/AWS SQS, Microservices, API Development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
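A minimal, hypothetical sketch of the event-driven S3/SQS pattern listed under "Good to Have": polling an SQS queue for S3 event notifications and reading each referenced object. The queue URL and the exact notification layout are assumptions:

```python
# Illustrative event-driven sketch: poll SQS for S3 event notifications and read each object.
# Queue URL and the exact notification format are assumed placeholders.
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-ingest-queue"

resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    body = json.loads(msg["Body"])
    for record in body.get("Records", []):                # standard S3 event notification shape
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        obj = s3.get_object(Bucket=bucket, Key=key)
        print(f"processing s3://{bucket}/{key}, {obj['ContentLength']} bytes")
    # delete the message once handled so it is not redelivered
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```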

Posted 1 week ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that’s what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product” – Jeff Bezos Amazon.com’s success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day like a clock work? In order to make that happen, behind those millions of packages, billions of decision gets made by machines and humans. What is the accuracy of customer provided address? Do we know exact location of the address on Map? Is there a safe place? Can we make unattended delivery? Would signature be required? If the address is commercial property? Do we know open business hours of the address? What if customer is not home? Is there an alternate delivery address? Does customer have any special preference? What are other addresses that also have packages to be delivered on the same day? Are we optimizing delivery associate’s route? Does delivery associate know locality well enough? Is there an access code to get inside building? And the list simply goes on. At the core of all of it lies quality of underlying data that can help make those decisions in time. The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both stamina and mental quickness to be able to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and executive presence to get in front of VPs and SVPs across Amazon will be imperative. Key Strategic Objectives: Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer of Amazon last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work extremely effectively in a matrix organization, and have the ability to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large scale high visibility/ high impact projects. You will lead by example to be just as passionate about operational performance and predictability as you will be about all other aspects of customer experience. The Successful Candidate Will Be Able To Effectively manage customer expectations and resolve conflicts that balance client and company needs. Develop process to effectively maintain and disseminate project information to stakeholders. 
Be successful in a delivery focused environment and determining the right processes to make the team successful. This opportunity requires excellent technical, problem solving, and communication skills. The candidate is not just a policy maker/spokesperson but drives to get things done. Possess superior analytical abilities and judgment. Use quantitative and qualitative data to prioritize and influence, show creativity, experimentation and innovation, and drive projects with urgency in this fast-paced environment. Partner with key stakeholders to develop the vision and strategy for customer experience on our platforms. Influence product roadmaps based on this strategy along with your teams. Support the scalable growth of the company by developing and enabling the success of the Operations leadership team. Serve as a role model for Amazon Leadership Principles inside and outside the organization Actively seek to implement and distribute best practices across the operation Key job responsibilities Metric Reporting, Deep Dive Analysis, Insight Generation, Ambiguous Problem Sizing and Solving, AB Testing and Measurement, ETL, Automation, Stakeholder Communication etc. A day in the life Customer address related BI analytics leveraging big data technologies to build impactful and scalable product features for Amazon's worldwide last mile delivery needs Basic Qualifications Bachelor's degree in math/statistics/engineering or other equivalent quantitative discipline 2+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience Experience with data visualization using Tableau, PowerBI, Quicksight, or similar tools Experience performing AB Testing, applying basic statistical methods (e.g. regression) to difficult business problems Experience with scripting language (e.g., Python, Java, or R) Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries) Track record of generating key business insights based on deep dive and collaborating with stakeholders Preferred Qualifications Ready to join within 30 days is preferred Experience in designing and implementing custom reporting systems using automation tools Knowledge of data modeling and data pipeline design Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI HYD 13 SEZ Job ID: A2985571 Show more Show less
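The analytics responsibilities listed above (Redshift deep dives, AB testing and measurement) might look something like the hedged Python sketch below, which pulls per-group delivery outcomes from Redshift and runs a two-proportion z-test; the connection settings, table, and column names are assumed placeholders for this posting's domain:

```python
# Illustrative sketch: query per-group outcomes from Redshift and run a two-proportion z-test.
# Connection settings, schema/table, and column names are assumed placeholders.
import psycopg2
from statsmodels.stats.proportion import proportions_ztest

conn = psycopg2.connect(host="example-cluster.redshift.amazonaws.com", port=5439,
                        dbname="analytics", user="bi_user", password="change_me")
sql = """
    SELECT ab_group,
           COUNT(*) AS deliveries,
           SUM(CASE WHEN delivered_first_attempt THEN 1 ELSE 0 END) AS successes
    FROM last_mile.delivery_events
    WHERE event_date BETWEEN '2024-06-01' AND '2024-06-14'
    GROUP BY ab_group
"""
with conn, conn.cursor() as cur:
    cur.execute(sql)
    rows = {group: (successes, deliveries)
            for group, deliveries, successes in cur.fetchall()}
conn.close()

successes = [rows["treatment"][0], rows["control"][0]]
totals = [rows["treatment"][1], rows["control"][1]]
stat, p_value = proportions_ztest(count=successes, nobs=totals)
print(f"first-attempt delivery rate: z={stat:.2f}, p={p_value:.4f}")
```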

Posted 1 week ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description AOP FC Analytics team manages a suite of MIS reporting published at a various regular frequency, productivity tools to bridge the current software challenges and serve all analytical needs of leadership team with data & analysis. The ideal candidate relishes working with large volumes of data, enjoys the challenge of highly complex business contexts, and, above all else, is passionate about data and analytics. The candidate is an expert with business intelligence tools and passionately partners with the business to identify strategic opportunities where data-backed insights drive value creation. An effective communicator, the candidate crisply translates analysis result into executive-facing business terms. The candidate works aptly with internal and external teams to push the projects across the finishing line. The candidate is a self-starter, comfortable with ambiguity, able to think big (while paying careful attention to detail), and enjoys working in a fast-paced and global team. Key job responsibilities Interfacing with business customers, gathering requirements and delivering complete BI solutions to drive insights and inform product, operations, and marketing decisions. Interfacing with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL (Redshift, Oracle) and ability to use a programming and/or scripting language to process data for modeling Evolve organization wide Self-Service platforms Building metrics to analyze key inputs to forecasting systems Recognizing and adopting best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation A day in the life Solve analyses with well-defined inputs and outputs; drive to the heart of the problem and identify root causes Have the capability to handle large data sets in analysis Derive recommendations from analysis Understand the basics of test and control comparison; may provide insights through basic statistical measures such as hypothesis testing Communicate analytical insights effectively About The Team AOP (Analytics Operations and Programs) team is missioned to standardize BI and analytics capabilities, and reduce repeat analytics/reporting/BI workload for operations across IN, AU, BR, MX, SG, AE, EG, SA marketplace. AOP is responsible to provide visibility on operations performance and implement programs to improve network efficiency and defect reduction. The team has a diverse mix of strong engineers, Analysts and Scientists who champion customer obsession. We enable operations to make data-driven decisions through developing near real-time dashboards, self-serve dive-deep capabilities and building advanced analytics capabilities. We identify and implement data-driven metric improvement programs in collaboration (co-owning) with Operations teams. Basic Qualifications 1+ years of tax, finance or a related analytical field experience 2+ years of complex Excel VBA macros writing experience Bachelor's degree or equivalent Experience defining requirements and using data and metrics to draw business insights Experience with SQL or ETL Preferred Qualifications Experience working with Tableau Experience using very large datasets Our inclusive culture empowers Amazonians to deliver the best results for our customers. 
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - Amazon Dev Center India - Hyderabad Job ID: A3004510

Posted 1 week ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site

Linkedin logo

Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop/AWS ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing. Preferred Education: Master's Degree. Required Technical And Professional Expertise: Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years of experience on Cloud Data Platforms on AWS; experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka. Preferred Technical And Professional Experience: Certification in AWS and Databricks, or Cloudera Spark certified developers.

Posted 1 week ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Linkedin logo

Design and develop end-to-end Master Data Management solutions using Informatica MDM Cloud Edition or on-prem hybrid setups. Implement match & merge rules, survivorship, hierarchy management, and data stewardship workflows. Configure landing, staging, base objects, mappings, cleanse functions, match rules, and trust/survivorship rules. Integrate MDM with cloud data lakes/warehouses (e.g., Snowflake, Redshift, Synapse) and business applications. Design batch and real-time integration using Informatica Cloud (IICS), APIs, or messaging platforms. Work closely with data architects and business analysts to define MDM data domains (e.g., Customer, Product, Vendor). Ensure data governance, quality, lineage, and compliance standards are followed. Provide production support and enhancements to existing MDM solutions. Create and maintain technical documentation, test cases, and deployment artifacts.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Andhra Pradesh, India

On-site

Linkedin logo

About The Role We are seeking experienced Data Analysts to join our growing team. The ideal candidate will have a strong background in data analysis, complex SQL queries, and experience working within large-scale Data Warehouse environments. Familiarity with cloud technologies such as GCP or AWS is mandatory, and prior exposure to AWS EMR and Apache Airflow is highly desirable. Key Responsibilities Perform deep data analysis to support business decision-making, reporting, and strategic initiatives. Write and optimize complex SQL queries for data extraction, transformation, and reporting across large, distributed datasets. Work extensively within data warehouse environments to design, test, and deliver data solutions. Collaborate with data engineers, business analysts, and stakeholders to understand requirements and translate them into technical deliverables. Analyze large, complex datasets to identify trends, patterns, and opportunities for business growth. Develop, maintain, and optimize ETL/ELT pipelines; familiarity with Apache Airflow for workflow orchestration is a plus. Work with cloud-native tools on GCP or AWS to manage and analyze data effectively. Support the development of data quality standards and ensure data integrity across all reporting platforms. Document data models, queries, processes, and workflows for knowledge sharing and scalability. Required Skills & Experience Minimum 7 years of professional experience in Data Analysis. Strong, demonstrable expertise in SQL, including writing, debugging, and optimizing complex queries. Solid experience working within a Data Warehouse environment (e.g., BigQuery, Redshift, Snowflake, etc.). Hands-on experience with GCP (BigQuery, Dataflow) or AWS (Redshift, Athena, S3, EMR). Knowledge of data modeling concepts, best practices, and data architecture principles. Understanding of ETL processes and tools; hands-on experience with Apache Airflow is a strong plus. Strong analytical thinking, attention to detail, and problem-solving skills. Ability to work in a fast-paced environment and manage multiple priorities.
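As a hedged example of the cloud-warehouse analysis this role centres on, the sketch below runs an aggregation in BigQuery with the google-cloud-bigquery client and pulls the result into a DataFrame; the project, dataset, and column names are placeholders, and the same analysis could equally target Redshift or Snowflake:

```python
# Illustrative sketch: run an aggregation in BigQuery and load the result into pandas.
# Project, dataset, table, and column names are assumed placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-analytics-project")

sql = """
    SELECT region,
           DATE_TRUNC(order_date, MONTH) AS month,
           COUNT(DISTINCT order_id)      AS orders,
           SUM(revenue)                  AS revenue
    FROM `example-analytics-project.sales.orders`
    WHERE order_date >= '2024-01-01'
    GROUP BY region, month
    ORDER BY region, month
"""

df = client.query(sql).to_dataframe()   # requires the pandas/BigQuery Storage extras installed
print(df.head())
print(df.groupby("region")["revenue"].sum().sort_values(ascending=False))
```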

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

We are Allvue Systems, the leading provider of software solutions for the Private Capital and Credit markets. Whether a client wants an end-to-end technology suite, or independently focused modules, Allvue helps eliminate the boundaries between systems, information, and people. We’re looking for ambitious, smart, and creative individuals to join our team and help our clients achieve their goals. Working at Allvue Systems means working with pioneers in the fintech industry. Our efforts are powered by innovative thinking and a desire to build adaptable financial software solutions that help our clients achieve even more. With our common goals of growth and innovation, whether you’re collaborating on a cutting-edge project or connecting over shared interests at an office happy hour, the passion is contagious. We want all of our team members to be open, accessible, curious and always learning. As a team, we take initiative, own outcomes, and have passion for what we do. With these pillars at the center of what we do, we strive for continuous improvement, excellent partnership and exceptional results. Come be a part of the team that’s revolutionizing the alternative investment industry. Define your own future with Allvue Systems! Data Pipeline Management Design, develop, and maintain scalable data pipelines Implement ELT processes to ingest data from various internal and external sources Ensure data quality and reliability across all data pipelines Stakeholder Management Collaborate with key stakeholders (technical and non-technical) to understand data requirements and translate them into technical specifications Provide regular updates to stakeholders and ensure alignment with project/task goals Data Analysis and Transformation Develop scripts for data retrieval, transformation, and enrichment Utilize various tools and languages such as SQL, DBT, Python, Airbyte for data processing tasks Implement automation for repetitive data management tasks Collaboration and Communication Work closely with Data Analysts, Platform Owners, and other Engineering teams Participate in code reviews and promote best practices for data engineering Document data processes, systems, and standards Proficiency in SQL, Python, and/or other relevant programming languages Experience with DBT and similar data transformation platforms required Experience with Airflow or similar data orchestration tools required Familiarity with data warehouse solutions (e.g., Snowflake, Redshift) required Proven ability to work autonomously and manage workload effectively Proven experience working with cross-functional teams Familiarity with iPaaS solutions (Workato, Celigo, MuleSoft) a plus Experience with enterprise business applications (Salesforce, NetSuite, SuiteProjects Pro, Jira) a plus Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and related services a plus 3-5 years of experience in data engineering or related field

Posted 1 week ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

About The Role Grade Level (for internal use): 10 The Team As a member of the Data Transformation team you will work on building ML powered products and capabilities to power natural language understanding, data extraction, information retrieval and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading-by-example in a highly engaging work environment. You will work in a (truly) global team and encouraged for thoughtful risk-taking and self-initiative. The Impact The Data Transformation team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will be developing our next generation of new products while enhancing existing ones aiming at solving high-impact business problems. What’s In It For You Be a part of a global company and build solutions at enterprise scale Collaborate with a highly skilled and technically strong team Contribute to solving high complexity, high impact problems Key Responsibilities Build production ready data acquisition and transformation pipelines from ideation to deployment Being a hands-on problem solver and developer helping to extend and manage the data platforms Apply best practices in data modeling and building ETL pipelines (streaming and batch) using cloud-native solutions What We’re Looking For 3-5 years of professional software work experience Expertise in Python and Apache Spark OOP Design patterns, Test-Driven Development and Enterprise System design Experience building data processing workflows and APIs using frameworks such as FastAPI, Flask etc. Proficiency in API integration, experience working with REST APIs and integrating external & internal data sources SQL (any variant, bonus if this is a big data variant) Linux OS (e.g. bash toolset and other utilities) Version control system experience with Git, GitHub, or Azure DevOps. Problem-solving and debugging skills Software craftsmanship, adherence to Agile principles and taking pride in writing good code Techniques to communicate change to non-technical people Nice to have Core Java 17+, preferably Java 21+, and associated toolchain DevOps with a keen interest in automation Apache Avro Apache Kafka Kubernetes Cloud expertise (AWS and GCP preferably) Other JVM based languages - e.g. Kotlin, Scala C# - in particular .NET Core Data warehouses (e.g., Redshift, Snowflake, BigQuery) What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. 
We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 315685 Posted On: 2025-05-20 Location: Gurgaon, Haryana, India Show more Show less

Posted 1 week ago


3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote


About the Job:

At Techjays, we are driving the future of artificial intelligence with a bold mission to empower businesses worldwide by helping them build AI solutions that transform industries. As an established leader in the AI space, we combine deep expertise with a collaborative, agile approach to deliver impactful technology that drives meaningful change. Our global team consists of professionals who have honed their skills at leading companies such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. With engineering teams across the globe, we deliver tailored AI software and services to clients ranging from startups to large-scale enterprises.

Be part of a company that's pushing the boundaries of digital transformation. At Techjays, you'll work on exciting projects that redefine industries, innovate with the latest technologies, and contribute to solutions that make a real-world impact. Join us on our journey to shape the future with AI.

We're looking for a skilled AI Implementation Engineer with a strong JavaScript/TypeScript background to help us build scalable AI-powered systems, with a particular focus on Retrieval-Augmented Generation (RAG) and LLM integrations. You will play a key role in developing intelligent applications that combine vector search, natural language processing, and LLM-driven reasoning, delivering real-time AI experiences to end users. You'll work with full-stack engineers, AI researchers, and data teams to create seamless interfaces between front-end applications, back-end services, and AI models.

We are also seeking a Data Analytics Engineer to design, develop, and optimize data pipelines and analytical dashboards that drive key business decisions. The ideal candidate will have hands-on experience with BI tools like Power BI and Tableau and a strong background in building scalable data pipelines in AWS, GCP, or Azure cloud environments.

Experience Range: 3 to 6 years

Primary Skills: Power BI, Tableau, SQL, Data Modeling, Data Warehousing, ETL/ELT Pipelines, AWS Glue, AWS Redshift, GCP BigQuery, Azure Data Factory, Cloud Data Pipelines, DAX, Data Visualization, Dashboard Development

Secondary Skills: Python, dbt, Apache Airflow, Git, CI/CD, DevOps for Data, Snowflake, Azure Synapse, Data Governance, Data Lineage, Apache Beam, Data Catalogs, Basic Machine Learning Concepts

Work Location: Remote

Key Responsibilities:
Develop and maintain scalable, robust ETL/ELT data pipelines across structured and semi-structured data sources.
Collaborate with data scientists, analysts, and business stakeholders to identify data requirements and transform them into efficient data models.
Design and deliver interactive dashboards and reports using Power BI and Tableau.
Implement data quality checks, lineage tracking, and monitoring solutions to ensure high reliability of data pipelines.
Optimize SQL queries and BI reports for performance and scalability.
Work with cloud-native tools in AWS (e.g., Glue, Redshift, S3), GCP (e.g., BigQuery, Dataflow), or Azure (e.g., Data Factory, Synapse).
Automate data integration and visualization workflows.

Required Qualifications:
Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field.
3+ years of experience in data engineering or data analytics roles.
Proven experience with Power BI and Tableau, including dashboard design, DAX, calculated fields, and data blending.
Proficiency in SQL and experience in data modeling and relational database design.
Hands-on experience with data pipelines and orchestration using tools like Airflow, dbt, Apache Beam, or native cloud tools.
Experience working with one or more cloud platforms: AWS, GCP, or Azure.
Strong understanding of data warehousing concepts and tools such as Snowflake, BigQuery, Redshift, or Synapse.

Preferred Skills:
Experience with scripting in Python or Java for data processing.
Familiarity with Git, CI/CD, and DevOps for data pipelines.
Exposure to data governance, lineage, and catalog tools.
Basic understanding of ML pipelines or advanced analytics is a plus.

What We Offer:
Competitive salary and benefits.
Opportunity to work with a modern cloud-native data stack.
Collaborative, innovative, and data-driven work environment.
Flexible working hours and remote work options.
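To make the pipeline-orchestration requirements above concrete, here is a minimal sketch of the kind of ELT DAG such a role typically owns. It assumes Apache Airflow 2.x; the DAG name, bucket, and table are hypothetical, and the task bodies are stubs that show structure rather than a production implementation.

```python
# Minimal Airflow 2.x ELT sketch (hypothetical names throughout).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_to_s3(**context):
    # Stub: pull raw data from a source system and stage it in S3,
    # e.g. with boto3 or an Airflow S3 hook.
    print("Staging raw files to s3://example-bucket/raw/ ...")


def load_to_warehouse(**context):
    # Stub: issue a COPY (Redshift) or equivalent load into the warehouse,
    # then run lightweight data-quality checks (row counts, null checks).
    print("Loading staged files into the analytics schema ...")


with DAG(
    dag_id="daily_sales_elt",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["analytics", "elt"],
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    extract >> load  # load runs only after extraction succeeds
```

A dbt run or a dashboard-refresh task could be appended in the same way; the point is that each responsibility in the listing maps onto a small, testable task.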

Posted 1 week ago


8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


About Analytix:

Businesses of all sizes are faced with a rapidly changing competitive environment. Companies that possess both the ability to successfully navigate obstacles and the agility to react to market conditions are better positioned for long-term success. Analytix Solutions helps your company tackle these types of challenges. We empower business owners to confidently make informed decisions and positively impact profitability. We are a single-source provider of integrated solutions across multiple functional areas and disciplines. Through a combination of cross-disciplinary expertise, technological aptitude, and deep domain experience, we support our clients with efficient systems and processes, reliable data, and industry insights. We are your partner in strategically scaling your business for growth.

Website: Small Business Accounting Bookkeeping, Medical Billing, Audio Visual, IT Outsourcing Services (analytix.com)
LinkedIn: Analytix Solutions: Overview | LinkedIn
Analytix Business Solutions (India) Pvt. Ltd.: My Company | LinkedIn

Job Description:
Design, build, and maintain scalable, high-performance data pipelines and ETL/ELT processes across diverse database platforms.
Architect and optimize data storage solutions to ensure reliability, security, and scalability.
Leverage generative AI tools and models to enhance data engineering workflows, drive automation, and improve insight generation.
Create and maintain comprehensive documentation for data systems, workflows, and models.
Implement data modeling best practices and optimize data retrieval processes for better performance.

Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
5–8 years of experience in data engineering, designing and managing large-scale data systems.
Strong expertise in database technologies, including SQL databases (PostgreSQL, MySQL, SQL Server), NoSQL databases (MongoDB, Cassandra), and data warehouse/unified platforms (Snowflake, Redshift, BigQuery, Microsoft Fabric).
Proficiency in Python and SQL, with experience in data processing frameworks (e.g., Pandas, PySpark).
Experience with ETL tools (e.g., Apache Airflow, MS Fabric, Informatica, Talend) and data pipeline orchestration platforms.
Strong understanding of data architecture, data modeling, and data governance principles.
Experience with cloud platforms (preferably Azure) and associated data services.
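As a rough illustration of the Pandas-style processing mentioned in the qualifications above, the sketch below cleans a raw extract and applies a simple data-quality gate before handing off a curated file; the file paths and column names are invented for the example.

```python
# Hypothetical pandas clean-and-validate step for a batch ETL job.
import pandas as pd

raw = pd.read_csv("raw/orders.csv", parse_dates=["order_date"])  # hypothetical extract

# Basic cleaning: drop exact duplicates and rows missing a business key.
curated = (
    raw.drop_duplicates()
       .dropna(subset=["order_id", "customer_id"])
)

# Simple quality gate: fail fast instead of loading bad data downstream.
if curated["order_id"].duplicated().any():
    raise ValueError("Duplicate order_id values found after cleaning")
if (curated["amount"] < 0).any():
    raise ValueError("Negative order amounts found")

# Hand off to the load step (a warehouse COPY, Spark job, or BI extract).
curated.to_csv("curated/orders.csv", index=False)
print(f"Wrote {len(curated)} curated rows")
```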

Posted 1 week ago


5.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description

Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.

Job Description

The Data Engineer will be responsible for the development of data software applications, data platforms and data products. This engineer will work closely with application engineers, data engineers, data scientists and CRM to design, document, develop, test and implement solutions. Working alongside other engineers, this person will also contribute individually to development, including defining engineering standards, data analysis, data modeling, coding, testing/QA, code review, deployment, operational readiness, and agile management.

Be a key contributor to deliver, on budget, high-value complex projects
Take technical responsibility for all stages and/or iterations in a software component
Specify and ensure the design and development of technology solutions properly fulfills all business requirements
Ensure project stakeholders receive regular communications about the status of the work
Anticipate change management requirements and ensure effective solution adoption by ensuring appropriate knowledge transfer, training and deployment readiness
Document design approaches and code artifacts; communicate and participate in knowledge sharing
Provide technical guidance to associates, colleagues or customers
Communicate difficult concepts and negotiate with others to adopt a different point of view
Interpret internal/external business challenges and recommend best practices to improve products, processes, or services
Ability to work in a multi-project environment and support multiple internal departments
You will report to a Manager

Qualifications

5 to 7 years of experience as an engineer in data-platform-centric environments
Extensive knowledge of data platform paradigms and software architecture
Experience with cloud development on the Amazon Web Services (AWS) platform with services including API Gateway, Lambda, EC2, ECS, SQS, SNS, DynamoDB, Redshift and Aurora
Expert-level PySpark, Python & SQL skills
Ability to comprehend and implement detailed project specifications as well as the ability to adapt to various technologies and simultaneously work on multiple projects
Proficiency in CI/CD tools (Jenkins, GitLab, etc.)

Additional Information

Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on.

Experian's people-first approach is award-winning: World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social or our Careers Site to understand why.

Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.

Benefits

Experian cares for employees' work-life balance, health, safety and wellbeing. In support of this endeavor, we offer the best family well-being benefits, enhanced medical benefits and paid time off.

Experian Careers - Creating a better tomorrow together. Find out what it's like to work for Experian on our Careers Site.
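For a flavour of the AWS service mix in the qualifications above, here is a small illustrative sketch (not taken from the posting) that runs a query against Redshift from a Lambda-style handler using the Redshift Data API via boto3; the cluster, database, user, and SQL are placeholders, and error handling is kept minimal.

```python
# Hypothetical Lambda-style handler that queries Redshift via the Data API.
import time
import boto3

client = boto3.client("redshift-data")


def handler(event, context):
    # Submit a statement asynchronously against a provisioned cluster.
    submitted = client.execute_statement(
        ClusterIdentifier="example-cluster",   # placeholder cluster name
        Database="analytics",                  # placeholder database
        DbUser="etl_user",                     # placeholder database user
        Sql="SELECT COUNT(*) FROM public.sales WHERE sale_date = CURRENT_DATE;",
    )
    statement_id = submitted["Id"]

    # Poll until the statement finishes (fine for small queries and demos).
    while True:
        status = client.describe_statement(Id=statement_id)["Status"]
        if status in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(1)

    if status != "FINISHED":
        raise RuntimeError(f"Redshift statement ended with status {status}")

    result = client.get_statement_result(Id=statement_id)
    return result["Records"][0][0]["longValue"]  # the row count
```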

Posted 1 week ago


8.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Good day,

We have an immediate opportunity for an AWS DevOps Engineer.

Job Role: AWS DevOps Engineer
Job Location: Kharadi, Pune
Experience: 8+ years
Notice Period: Immediate to 30 days

About Company:

At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honoured with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,000+, and has 51 offices in 20 countries within key global markets. For more information on the company, please visit our website or LinkedIn community.

Diversity, Equity, and Inclusion

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative-action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture – promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Roles/Responsibilities:
Deployment, automation, management, and maintenance of AWS cloud-based production systems.
Ensuring availability, performance, security, and scalability of AWS production systems.
Management of creation, release, and configuration of production systems.
Evaluation of new technology alternatives and vendor products.
System troubleshooting and problem resolution across various application domains and platforms.
Pre-production acceptance testing for quality assurance.
Provision of critical system security by leveraging best practices and proven cloud security solutions.
Providing recommendations for architecture and process improvements.
Definition and deployment of systems for metrics, logging, and monitoring on the AWS platform.
Design, maintenance and management of tools for automation of different operational processes.
Develop policies, standards, and guidelines for IaC and CI/CD that teams can follow.

Tech Stack:
Operating systems: Linux
AWS: EKS, MSK, OpenSearch, RDS, Redshift, Glue, S3
Automation tools: Terraform, CloudFormation
CI pipelines: GitHub Actions, Bamboo
CD pipeline: ArgoCD
Versioning: Git
Vulnerability tools: Snyk, GitGuardian, SonarQube
EKS orchestration: Rafay
Monitoring: Prometheus, Grafana, OpenTelemetry, Jaeger, AppDynamics and Splunk

Regards,
Akshay Gurav
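Tied to the metrics-and-monitoring responsibility above, the following is a small illustrative script, not part of the posting, that pulls a basic Redshift health metric from CloudWatch with boto3; the cluster identifier and the alert threshold are assumptions.

```python
# Hypothetical health check: recent CPU utilisation for a Redshift cluster.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Redshift",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "ClusterIdentifier", "Value": "example-cluster"}],  # placeholder
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                 # 5-minute buckets
    Statistics=["Average"],
)

datapoints = sorted(response["Datapoints"], key=lambda d: d["Timestamp"])
for point in datapoints:
    print(point["Timestamp"], round(point["Average"], 1), "%")

# A real setup would feed this into Prometheus/Grafana dashboards or CloudWatch alarms;
# this sketch only shows the raw API call.
if datapoints and datapoints[-1]["Average"] > 80:   # assumed alert threshold
    print("WARNING: cluster CPU above 80% in the latest interval")
```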

Posted 1 week ago


5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description

North Hires is a premier consulting firm offering Custom Software Development, Recruitment, Sourcing, and Executive Search services. Our team of professionals provides tailored recruitment solutions to organizations across the USA, UK, India, and EMEA. We aim to empower businesses with top-tier human capital, fostering growth and success.

Role Description

This is a full-time, on-site role for a Data Engineer located in Hyderabad. The Data Engineer will be responsible for data engineering, data modelling, ETL processes, data warehousing, and data analytics on a day-to-day basis.

Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field
3–5 years of experience in a Data Engineering or related role
Strong hands-on experience with SQL and Python
Experience working with cloud platforms like Azure, AWS, or Google Cloud
Familiarity with tools like Azure Data Factory, Apache Airflow, or Power BI
Solid understanding of ETL/ELT processes and data pipeline architecture
Experience with Azure SQL, Synapse, Redshift, or other modern databases
Version control experience (GitHub, Azure DevOps, etc.)
Strong problem-solving and communication skills
Ability to work independently and collaboratively in a hybrid setup

Note: Candidates must be based in Hyderabad and willing to work from the office 2–3 times a week.

Share your resumes at madiha@northhires.com

Posted 1 week ago


Exploring Redshift Jobs in India

The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Amazon Redshift, a data warehouse service from Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with Redshift expertise can find opportunities across a wide range of industries in the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

Average salaries for Redshift professionals in India vary with experience and location. Entry-level positions typically pay INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

A typical Redshift career path progresses through roles such as:

  1. Junior Developer
  2. Data Engineer
  3. Senior Data Engineer
  4. Tech Lead
  5. Data Architect

Related Skills

Apart from expertise in Redshift, proficiency in the following skills is beneficial:

  • SQL
  • ETL Tools
  • Data Modeling
  • Cloud Computing (AWS)
  • Python/R Programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift. (medium; see the sketch after this list)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
  • What are the different types of Redshift nodes and their use cases? (basic)
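A few of the basics above, such as SORTKEY vs. DISTKEY, the COPY command, and vacuuming, are easiest to remember with concrete SQL. The sketch below runs them from Python with psycopg2 against a hypothetical cluster; the endpoint, bucket, IAM role, and table are all placeholders.

```python
# Hypothetical walk-through of DISTKEY/SORTKEY, COPY, and VACUUM on Redshift.
import psycopg2  # Redshift speaks the PostgreSQL wire protocol

DDL = """
CREATE TABLE IF NOT EXISTS sales (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12, 2)
)
DISTKEY (customer_id)   -- rows co-located by customer_id to speed up joins on it
SORTKEY (sale_date);    -- sorted by date so range filters scan fewer blocks
"""

COPY_SQL = """
COPY sales
FROM 's3://example-bucket/sales/'                                  -- placeholder prefix
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'    -- placeholder role
FORMAT AS CSV
IGNOREHEADER 1;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",  # deliberately left as a placeholder
)
conn.autocommit = True  # VACUUM cannot run inside an explicit transaction block

with conn.cursor() as cur:
    cur.execute(DDL)
    cur.execute(COPY_SQL)        # bulk load from S3: the idiomatic way to load Redshift
    cur.execute("VACUUM sales;") # re-sorts rows and reclaims space after loads/deletes

conn.close()
```

In interviews, being able to explain why a particular DISTKEY and SORTKEY suit a given join and filter pattern usually matters more than reciting the syntax.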

Conclusion

As demand for Redshift professionals continues to rise in India, job seekers should keep honing their skills and knowledge in this area to stay competitive. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click


Download the Mobile App

Instantly access job listings, apply easily, and track applications.
