
1759 Redshift Jobs - Page 10

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Calling all innovators – find your future at Fiserv.

We're Fiserv, a global leader in fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Specialist, Software Development Engineering

JD
We are looking for a seasoned Data Engineer to design and develop data applications powering various operational and analytical use cases. This is a full-time position with career growth opportunities and a competitive benefits package. If you want to work with leading technology and cloud platforms, and help financial institutions and businesses worldwide solve complex business challenges every day, this is the right opportunity for you.

What does a successful Data Engineer do at Fiserv?
You will be responsible for developing data solutions on the cloud, along with product enhancements and new features. As a hands-on engineer, you will work in an Agile development model, developing and maintaining data solutions spanning data ingestion, transformation, and reporting.

What You Will Do
Drive data and analytical application development and maintenance
Design and develop highly efficient data engineering pipelines and database systems leveraging Oracle/Snowflake, AWS, and Java (a minimal pipeline sketch follows this listing)
Build highly efficient, performant data applications that serve the business with high data accuracy and fast response times
Optimize performance and fix bugs to improve data availability and accuracy
Create technical design documents
Collaborate with multiple teams to provide technical know-how and solutions to complex business problems
Develop reusable assets and create a knowledge repository

What You Will Need To Have
7+ years of experience designing and deploying enterprise-level data applications
Strong development/technical skills in PL/SQL, Oracle, shell scripting, Java, and Spring Boot
Experience with cloud platforms such as AWS, and with at least one cloud database such as Snowflake or Redshift
Strong understanding of, and development skills in, data transformations and aggregations

What Would Be Great To Have
Experience in Java and RESTful APIs
Experience handling real-time data loading using Kafka would be an advantage
You stay focused – you want to ship software that solves real problems for real people, now
You're a professional – you understand that it's not enough to write working code; it must also be well designed, easy to test, and easy to add to over time
You're learning – no matter how much you know, you are always seeking to learn more and to become a better engineer and leader

Thank You For Considering Employment With Fiserv
Please apply using your legal name. Complete the step-by-step profile and attach your resume (either is acceptable; both are preferable).

Our Commitment To Diversity And Inclusion
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note To Agencies
Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning About Fake Job Posts
Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
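To make the ingest-transform-aggregate work described above concrete, here is a minimal, hypothetical sketch in Python. The file names, columns, and output table shape are illustrative assumptions, not Fiserv specifics; a production version would load the result into Snowflake or Redshift.

```python
# Minimal sketch of an ingest -> transform -> aggregate step (illustrative only).
# Assumes a CSV of raw payment events with columns: txn_id, merchant_id, amount, txn_ts.
import pandas as pd

def load_raw(path: str) -> pd.DataFrame:
    # Ingest: read raw events, parsing timestamps on the way in.
    return pd.read_csv(path, parse_dates=["txn_ts"])

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: drop obviously bad rows and derive a date column.
    df = df.dropna(subset=["txn_id", "amount"])
    df = df[df["amount"] > 0]
    df["txn_date"] = df["txn_ts"].dt.date
    return df

def aggregate(df: pd.DataFrame) -> pd.DataFrame:
    # Aggregate: daily totals per merchant, the shape a reporting layer consumes.
    return (
        df.groupby(["txn_date", "merchant_id"], as_index=False)
          .agg(total_amount=("amount", "sum"), txn_count=("txn_id", "count"))
    )

if __name__ == "__main__":
    events = load_raw("payments.csv")                      # hypothetical input file
    daily = aggregate(transform(events))
    daily.to_parquet("daily_merchant_totals.parquet")      # staged for warehouse load
```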

Posted 4 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Description: Intern – Data Solutions

As an Intern – Data Solutions, you will be part of the Commercial Data Solutions team, providing technical/data expertise in the development of analytical data products to enable data science and analytics use cases. In this role, you will create and maintain data assets/domains used in the commercial/marketing analytics space, developing best-in-class data pipelines and products and working closely with data product owners to translate data product requirements and user stories into development activities throughout all phases of design, planning, execution, testing, deployment, and delivery.

Your Specific Responsibilities Will Include
Hands-on development of last-mile data products using the most up-to-date technologies and software/data/DevOps engineering practices
Enabling data science and analytics teams to drive data modeling and feature engineering activities aligned with business questions, utilizing datasets in an optimal way (a small feature-engineering sketch follows this listing)
Developing deep domain expertise and business acumen to ensure that all specificities and pitfalls of data sources are accounted for
Building data products based on automated data models, aligned with use-case requirements, and advising data scientists, analysts, and visualization developers on how to use these data models
Developing analytical data products for reusability, governance, and compliance by design
Aligning with organization strategy and implementing a semantic layer for analytics data products
Supporting data stewards and other engineers in maintaining data catalogs, data quality measures, and governance frameworks

Education
B.Tech / B.S., M.Tech / M.S., or PhD in Engineering, Computer Science, Pharmaceuticals, Healthcare, Data Science, Business, or a related field

Required Experience
High proficiency in SQL, Python, and AWS
Good understanding and comprehension of the requirements provided by the Data Product Owner and Lead Analytics Engineer
Understanding of creating/adopting data models to meet requirements from Marketing, Data Science, and Visualization stakeholders
Experience with feature engineering
Hands-on experience with cloud-based (AWS / GCP / Azure) data management platforms and typical storage/compute services (Databricks, Snowflake, Redshift, etc.)
Hands-on experience with modern data stack tools such as Matillion, Starburst, ThoughtSpot, and low-code tools (e.g., Dataiku)
Excellent interpersonal and communication skills, with the ability to quickly establish productive working relationships with a variety of stakeholders

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status: Intern/Co-op (Fixed Term)
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Data Management, Data Modeling, Data Visualization, Measurement Analysis, Stakeholder Relationship Management, Waterfall Model
Preferred Skills:
Job Posting End Date: 06/16/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R344334
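As a rough illustration of the feature-engineering work referenced above, here is a hedged pandas sketch that turns a raw interactions table into per-customer features. The input file, column names, and feature choices are hypothetical, not Merck specifics.

```python
# Illustrative feature-engineering sketch (hypothetical columns and data).
import pandas as pd

interactions = pd.read_parquet("customer_interactions.parquet")  # assumed input
# Assumes interaction_ts is a datetime column and channel is categorical.

# Basic activity features per customer.
interactions["month"] = interactions["interaction_ts"].dt.to_period("M")
features = (
    interactions.groupby("customer_id")
    .agg(
        n_interactions=("interaction_ts", "count"),
        n_active_months=("month", "nunique"),
        last_seen=("interaction_ts", "max"),
    )
    .reset_index()
)

# Most frequent channel per customer, one-hot encoded as model-ready features.
preferred = (
    interactions.groupby("customer_id")["channel"]
    .agg(lambda s: s.mode().iat[0])
    .rename("preferred_channel")
)
dummies = pd.get_dummies(preferred, prefix="channel").reset_index()
features = features.merge(dummies, on="customer_id")
```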

Posted 4 days ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Who we are
REA India is a part of REA Group Ltd. of Australia (ASX: REA) ("REA Group"). It is the country's leading full-stack real estate technology platform and owns Housing.com and PropTiger.com. In December 2020, REA Group acquired a controlling stake in REA India. REA Group, headquartered in Melbourne, Australia, is a multinational digital advertising business specialising in property. It operates Australia's leading residential and commercial property websites, realestate.com.au and realcommercial.com.au, and owns leading portals in Hong Kong (squarefoot.com.hk) and China (myfun.com). REA Group also holds a significant minority shareholding in Move, Inc., operator of realtor.com in the US, and the PropertyGuru Group, operator of leading property sites in Malaysia, Singapore, Thailand, Vietnam, and Indonesia.

REA India is the only player in India that offers a full range of services in the real estate space, assisting consumers through their entire home-seeking journey, from initial search and discovery to financing to the final step of transaction closure. It offers advertising and listings products to real estate developers, agents, and homeowners; exclusive sales and marketing solutions to builders; data and content services; and personalized search, virtual viewing, site visits, negotiations, home loans, and post-sales services to consumers for both buying and renting. With a 1600+ strong team, REA India has a national presence with 25+ offices across India, with its corporate office located in Gurugram, Haryana.

Housing.com
Founded in 2012 and acquired by REA India in 2017, Housing.com is India's most innovative real estate advertising platform for homeowners, landlords, developers, and real estate brokers. The company offers listings for new homes, resale homes, rentals, plots, and co-living spaces in India. Backed by strong research and analytics, the company's experts provide comprehensive real estate services that cover advertising and marketing, sales solutions for real estate developers, personalized search, virtual viewing, AR & VR content, home loans, end-to-end transaction services, and post-transaction services to consumers for both buying and renting.

PropTiger.com
PropTiger.com is among India's leading digital real estate advisory firms, offering a one-stop platform for buying residential real estate. Founded in 2011 with the goal of helping people buy their dream homes, PropTiger.com leverages the power of information and the organisation's deep-rooted understanding of the real estate sector to bring simplicity, transparency, and trust to the home-buying process. PropTiger.com helps home-buyers through the entire home-buying process through a mix of technology-enabled tools and on-ground support. The company offers researched information about various localities and properties and provides guidance on matters pertaining to legal paperwork and loan assistance to successfully fulfil a transaction.

Our Vision
Changing the way India experiences property.

Our Mission
To be the first choice of our consumers and partners in discovering, renting, buying, selling, and financing a home, and digitally enabling them throughout their journey. We do that with data, design, technology, and, above all, the passion of our people, while delivering value to our shareholders.

Our Culture
REA India is ranked 5th on the coveted list of India's Best 100 Companies to Work For in 2024 by the Great Place to Work Institute®. REA India was also ranked among the Top 5 workplaces in 2023, the Top 25 in 2022 and 2021, and the Top 50 in 2019. Culture forms the core of our foundation, and our effort toward creating an engaging workplace has resulted in REA India being recognized as a Best Workplace™ in Building a Culture of Innovation by All in 2024 and 2023, and as one of India's Best Workplaces™ in Retail (e-commerce category) for the fourth time in 2024. REA India was ranked 4th among Best Workplaces in Asia in 2023, after being ranked 55th in 2022 and 48th in 2021, and was recognized among the Top 50 Best Workplaces™ for Women in India in 2023 and 2021. REA India is also recognized as one of India's Top 50 Best Workplaces for Millennials in 2023 by Great Place to Work®.

At REA India, we believe in creating a home for our people, where they feel a sense of belonging and purpose. By fostering a culture of inclusion and continuous learning and growth, every team member has the opportunity to thrive and embrace the spirit of being part of a global family, while contributing to revolutionizing the way India experiences property. When you come to REA India, you truly COME HOME!

REA India (Housing.com, PropTiger.com) is an equal opportunity employer and welcomes all qualified individuals to apply for employment. We are committed to creating an environment that is free from discrimination, harassment, and any other form of unlawful behavior. We value diversity and inclusion and do not discriminate against our people or applicants for employment based on age, color, gender, marital status, caste, religion, race, ethnic group, nationality, religious or political conviction, sexual orientation, gender identity, pregnancy, family responsibility, or disability, or any other legally protected status. We firmly strive to eliminate any barriers that may impede equal opportunities, while also recognizing that specific job roles may require appointees to possess the necessary qualifications, skills, and abilities to perform the essential functions of the position effectively.

Our Tech Stack
Java (Spring/Hibernate/JPA/REST), Ruby, Rails, Erlang, Python
JavaScript, NodeJS, AngularJS, Objective-C, React, Android
AWS, Docker, Kubernetes, Microservices Architecture
SaltStack, Ansible, Consul, Jenkins, Vault, Vagrant, VirtualBox, ELK Stack
Varnish, Akamai, CloudFront, Apache, Nginx, PWA, AMP
MySQL, Aurora, Postgres, AWS Redshift, Mongo
Redis, Aerospike, Memcache, ElasticSearch, Solr

About the Role
We are seeking a Head of Architecture to define and drive the end-to-end architecture strategy for REA India. This leadership role will focus on scalability, security, cloud optimization, and AI-driven innovation, while mentoring teams and enhancing development efficiency. The role also requires collaborating with REA Group leaders to align with the global architectural strategy.

Key Responsibilities

Architectural Leadership
Maintain Architectural Decision Records (ADRs) to document key technical choices and their rationale
Define and implement scalable, secure, and high-performance architectures across Housing and PropTiger
Align technical decisions with business goals, leveraging microservices, distributed systems, and API-first design

Cloud & DevOps Excellence
Optimize cloud infrastructure (AWS/GCP) for cost, performance, and scalability
Improve SEO performance by optimizing website architecture, performance, and indexing strategies
Enhance CI/CD pipelines, automation, and Infrastructure as Code (IaC) to accelerate delivery

Security & Compliance
Establish and enforce security best practices for data protection, identity management, and compliance
Strengthen security posture through proactive risk mitigation and governance

Data & AI Strategy
Architect data pipelines and AI-driven solutions to enable automation and data-driven decision-making
Lead generative AI initiatives to enhance product development and user experiences

Incident Management & Operational Excellence
Establish best practices for incident management, ensuring system reliability, rapid recovery, and root cause analysis
Drive site reliability engineering (SRE) principles to improve uptime, observability, and performance monitoring

Team Leadership & Mentorship
Mentor engineering teams, fostering a culture of technical excellence, innovation, and continuous learning
Collaborate with product and business leaders to align technology roadmaps with strategic objectives

What We're Looking For
12+ years in software architecture, cloud platforms (AWS/GCP), and large-scale system design
Expertise in microservices, API design, DevOps, CI/CD, and cloud cost optimization
Strong background in security best practices and governance
Experience in data architecture, AI/ML pipelines, and generative AI applications
Proven leadership in mentoring and developing high-performing engineering teams
Strong problem-solving, analytical, and cross-functional collaboration skills

Why Join Us?
Build and lead high-scale real estate tech products
Drive cutting-edge AI and cloud innovations
Mentor and shape the next generation of top engineering talent

Posted 4 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


I am thrilled to share an exciting opportunity with one of our esteemed clients! 🚀 Join me in exploring new horizons and unlocking potential. If you're ready for a challenge and growth, read on.

Experience: 7+ years
Location: Chennai, Hyderabad
Immediate joiners only; work from office (WFO)
Mandatory skills: SQL, Python, PySpark, Databricks (strong in core Databricks), AWS (AWS is mandatory)

JD:
Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations.
Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data.
Integrate data from multiple sources, ensuring data is accurately transformed and stored in optimal formats (e.g., Delta Lake, Redshift, S3).
Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks (a minimal Airflow sketch follows this listing).

Regards,
R Usha
usha@livecjobs.com
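As a rough illustration of the Airflow-based orchestration this listing asks for, here is a minimal DAG sketch. The DAG name, schedule, and task callables are hypothetical placeholders; a real pipeline would trigger Databricks jobs or Spark tasks instead.

```python
# Minimal Airflow DAG sketch: ingest -> validate -> publish, run daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull raw files into s3://example-bucket/raw/")   # placeholder step

def validate():
    print("run data-quality checks on the raw batch")       # placeholder step

def publish():
    print("write curated Delta/Redshift tables")            # placeholder step

with DAG(
    dag_id="daily_ingest_pipeline",      # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)
    t_publish = PythonOperator(task_id="publish", python_callable=publish)

    t_ingest >> t_validate >> t_publish  # linear dependency chain
```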

Posted 4 days ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Description

About Oracle APAC ISV Business
The Oracle APAC ISV team is one of the fastest-growing and highest-performing business units in APAC. We are a prime team that serves a broad range of customers across the APAC region. ISVs are at the forefront of today's fastest-growing industries. Much of this growth stems from enterprises shifting toward cloud-native ISV SaaS solutions. This transformation drives ISVs to evolve from traditional software vendors into SaaS service providers. Industry analysts predict exponential growth in the ISV market over the coming years, making it a key growth pillar for every hyperscaler. Our cloud engineering team works on pitch-to-production scenarios, bringing ISVs' solutions onto Oracle Cloud (#oci) with the aim of providing a cloud platform for running their business that is more performant, more flexible, more secure, compliant with open-source technologies, and offers multiple innovation options while remaining most cost-effective. The team walks the path alongside our customers and is regarded by them as a trusted techno-business advisor.

Required Skills/Experience
Your versatility and hands-on expertise will be your greatest asset as you deliver on time-bound implementation work items and empower our customers to harness the full power of OCI. We also look for:
Bachelor's degree in Computer Science, Information Technology, or a related field
Relevant certifications in database management, OCI or other cloud platforms (AWS, Azure, Google Cloud), or NoSQL databases
8+ years of professional work experience
Proven experience migrating databases and data to OCI or other cloud environments (AWS, Azure, Google Cloud, etc.)
Expertise in Oracle DB and related technologies such as RMAN, Data Guard, Advanced Security Options, and MAA
Hands-on experience with NoSQL databases (MongoDB, Cassandra, DynamoDB, etc.) and other databases such as MySQL/PostgreSQL
Demonstrable expertise in data management systems, caching systems, and search engines such as MongoDB, Redshift, Snowflake, Spanner, Redis, and ElasticSearch, as well as graph databases like Neo4j
An understanding of complex data integration, data pipelines, and stream analytics using products such as Apache Kafka, Oracle GoldenGate, Oracle Stream Analytics, and Spark
Knowledge of deploying data management within a Kubernetes/Docker environment, as well as the corresponding management of state in microservice applications, is a plus
Ability to work independently and handle multiple tasks in a fast-paced environment
Solid experience managing multiple implementation projects simultaneously while maintaining high quality standards
Ability to develop and manage project timelines, resources, and budgets

Career Level - IC4

Responsibilities
What You'll Do
As a solution specialist, you will work closely with our cloud architects and key ISV stakeholders to propagate awareness and drive implementation of OCI-native as well as open-source technologies by ISV customers.
Lead and execute end-to-end data platform migrations (including heterogeneous data platforms) to OCI
Design and implement database solutions within OCI, ensuring scalability, availability, and performance
Set up, configure, and secure production environments for data platforms in OCI
Migrate databases from legacy systems or other clouds to OCI while ensuring minimal downtime and data integrity
Implement and manage CDC solutions to track and capture database changes in real time (a simplified CDC sketch follows this listing)
Configure and manage CDC tools, ensuring low-latency, fault-tolerant data replication for high-volume environments
Assist with the creation of ETL/data pipelines for the migration of large datasets into a data warehouse on OCI
Configure and manage complex database deployment topologies, including clustering, replication, and failover configurations
Perform database tuning, monitoring, and optimization to ensure high performance in production environments
Implement automation scripts and tools to streamline database administration and migration processes
Develop and effectively present your proposed solution and execution plan to both internal and external stakeholders
Clearly explain the technical advantages of OCI-based database management systems

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
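Production CDC in this space typically relies on log-based tools such as Oracle GoldenGate; purely as a simplified illustration of the underlying idea, here is a timestamp-based change poller in Python. The table, columns, and connection details are hypothetical, and the polling approach is a teaching sketch, not a substitute for log-based replication.

```python
# Simplified change-data-capture idea: poll for rows changed since a watermark.
# Real deployments use log-based tools (e.g., GoldenGate); this is only a sketch.
import time
from datetime import datetime

import oracledb  # assumes the python-oracledb driver and a reachable database

POLL_SQL = """
    SELECT id, payload, updated_at
    FROM app.orders
    WHERE updated_at > :watermark
    ORDER BY updated_at
"""  # app.orders is a hypothetical source table with an updated_at column

def stream_changes(conn, watermark):
    while True:
        with conn.cursor() as cur:
            cur.execute(POLL_SQL, watermark=watermark)
            rows = cur.fetchall()
        for row_id, payload, updated_at in rows:
            print("replicate", row_id)           # stand-in for the downstream apply
            watermark = max(watermark, updated_at)
        time.sleep(5)                            # polling lag that log-based CDC avoids

conn = oracledb.connect(user="etl", password="...", dsn="dbhost/orclpdb")
stream_changes(conn, datetime(1970, 1, 1))
```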

Posted 4 days ago

Apply

3.0 years

0 Lacs

India

On-site


Who Are We
Third Chair (YC X25) is building AI agents for in-house legal teams. The team comprises two second-time founders with past exits. Yoav previously cofounded the social media analytics startup Trendpop (YC W21), which scaled to 1M in ARR in 16 months, building a platform that processed millions of social posts per day. Shourya previously cofounded the consumer finance startup Fello (YC W22), which scaled to over 2 million users and managed over 600k monthly active users and over $250,000 in monthly investments. Third Chair is building vertical AI-native workflows for legal teams that help them complete end-to-end workflows that previously required hundreds of hours, powered by state-of-the-art AI agents that browse the web, download and collect evidence, draft letters, and more. We grew 88% last month and went from 0 to 100k ARR in 5 months. More here.

What Makes You a Good Fit
3+ years of hands-on experience developing production-level Node.js/TypeScript backends
Strong experience with structured DBMSs like PostgreSQL and OLAP databases like Redshift
Strong understanding of AWS services such as ECS, RDS, S3, CloudWatch, and ElastiCache
Experience working with telemetry, CI/CD, and IaC pipelines
Comfortable with US timezones

What Makes You a Great Fit
Past experience with OpenAI APIs for completions, function calling, and building context-aware assistants
Past experience with Go routines
Experience building multi-agent systems using frameworks like CrewAI
Strong sense of cost optimization strategies, system design, and building efficient API stacks

Benefits
Work from anywhere – we're a distributed team across multiple timezones with a focus on outputs instead of location or working hours
Generous PTO policy
Competitive pay bracket
Equity at a fast-growing, YC-backed company in a disruptive market

Posted 4 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Title: Lead Data Engineer – C12 / Assistant Vice President (India)

The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions
Delivering on critical business priorities while ensuring alignment with the wider architectural vision
Identifying and helping address potential risks in the data supply chain
Following and contributing to technical standards
Designing and developing analytical data models

Required Qualifications & Work Experience
First Class Degree in Engineering/Technology (4-year graduate course)
8 to 12 years' experience implementing data-intensive solutions using agile methodologies
Experience with relational databases and using SQL for data querying, transformation, and manipulation
Experience modelling data for analytical consumers
Ability to automate and streamline the build, test, and deployment of data pipelines
Experience with cloud-native technologies and patterns
A passion for learning new technologies and a desire for personal growth, through self-study, formal classes, or on-the-job training
Excellent communication and problem-solving skills
An inclination to mentor; an ability to lead and deliver medium-sized components independently

Technical Skills (Must Have)
ETL: Hands-on experience building data pipelines (a short PySpark sketch follows this listing); proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, and Informatica
Big Data: Experience with 'big data' platforms such as Hadoop, Hive, or Snowflake for data storage and processing
Data Warehousing & Database Management: Expertise in data warehousing concepts and in relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
Languages: Proficiency in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala
DevOps: Exposure to concepts and enablers – CI/CD platforms, version control, automated quality control management
Data Governance: A strong grasp of principles and practice, including data quality, security, privacy, and compliance

Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs and the ability to tune them for performance; demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc., with a demonstrable understanding of the underlying architectures and trade-offs
Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls
Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
File Formats: Exposure to event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
Others: Experience using a job scheduler, e.g., Autosys
Exposure to business intelligence tools, e.g., Tableau, Power BI
Certification in one or more of the above topics would be an advantage.

Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
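As a small, hedged illustration of the Spark-based pipeline building this listing describes, here is a minimal PySpark batch transform. The S3 paths, columns, and aggregation are hypothetical placeholders, not Citi specifics.

```python
# Sketch: a small PySpark batch transform, raw transactions -> daily aggregates.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_txn_aggregate").getOrCreate()

# Read raw transaction events (hypothetical path and schema).
txns = spark.read.parquet("s3://example-bucket/raw/transactions/")

# Derive a date column and aggregate per account per day.
daily = (
    txns
    .withColumn("txn_date", F.to_date("txn_ts"))
    .groupBy("txn_date", "account_id")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("txn_count"),
    )
)

# Write the curated output partitioned by date for downstream consumers.
daily.write.mode("overwrite").partitionBy("txn_date").parquet(
    "s3://example-bucket/curated/daily_txn/"
)
```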

Posted 4 days ago

Apply

3.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Title: Pre-Sales Engineer – Cloud (AWS)
Location: Noida, India (Hybrid)
Department: Sales/Engineering
Reports To: Head of Sales

Company Description
Forasoftware, a trusted Microsoft and AWS partner, delivers comprehensive technology solutions that empower businesses across Ireland, the UK, India, and Türkiye. Our expertise spans Microsoft Azure, Microsoft 365, AWS, business intelligence, modern work, and advanced security, helping organizations modernize IT, enhance collaboration, and drive innovation. Forasoftware provides secure, scalable, and compliance-ready solutions tailored to your needs, ensuring you maximize your technology investments for growth and operational efficiency.

Position Overview
The Pre-Sales Engineer will be the technical bridge between our sales teams and their pre-sales customers. We are seeking a highly skilled and motivated Pre-Sales Engineer with expertise in Amazon Web Services (AWS) to join our dynamic team. The ideal candidate will have a strong technical background, excellent communication skills, and the ability to understand and address customer needs. Knowledge of Microsoft Azure products and relevant certifications will be considered a significant advantage.

Experience
Rich experience in delivering the highest-quality pre-sales support and solutions, bringing unique value to the table for customers
Strong understanding and knowledge of AWS services, including:
AWS compute: Amazon EC2, AWS Lambda, Amazon Elastic Kubernetes Service (EKS)
AWS storage: Amazon S3, Amazon EFS, Amazon Elastic Block Store (EBS)
AWS databases: Amazon RDS, Amazon DynamoDB, Amazon Aurora
AWS networking: Amazon VPC, Elastic Load Balancing (ELB), AWS Transit Gateway
AWS AI/ML: Amazon SageMaker, AWS AI Services (e.g., Amazon Rekognition, Amazon Lex)
AWS analytics: Amazon Redshift, Amazon Kinesis, AWS Glue
AWS management: Amazon CloudWatch, AWS Trusted Advisor, AWS Systems Manager

Responsibilities
Technical Expertise: Provide in-depth technical knowledge and support for AWS services, including but not limited to EC2, S3, RDS, and Lambda
Customer Engagement: Collaborate with the sales team to understand customer requirements and develop tailored solutions that address their needs
Solution Design: Design and present AWS-based solutions to customers, ensuring they meet both technical and business requirements
Demonstrations and POCs: Conduct product demonstrations and proof-of-concepts (POCs) to showcase the capabilities and benefits of AWS solutions
Documentation: Create and maintain technical documentation, including solution architectures, proposals, and presentations
Training and Enablement: Provide training and enablement sessions for customers and internal teams on AWS products and solutions
Competitive Analysis: Stay updated on industry trends, competitor products, and emerging technologies to provide insights and recommendations

Qualifications
Education: Bachelor's degree in Computer Science, Information Technology, or a related field
Experience: Minimum of 3-5 years of experience in a pre-sales or technical consulting role, with a focus on AWS
Certifications: AWS Certified Solutions Architect, AWS Certified DevOps Engineer, or other relevant AWS certifications
Bonus Points: Knowledge of Microsoft Azure products and certifications such as Azure Solutions Architect Expert or Azure DevOps Engineer Expert
Technical Skills: Proficiency in cloud architecture, networking, security, and automation; experience with scripting languages such as Python or PowerShell is a plus
Soft Skills: Excellent communication, presentation, and interpersonal skills; ability to work collaboratively in a team environment and manage multiple priorities

Posted 4 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Description
Are you passionate about data? Does the prospect of dealing with massive volumes of data excite you? Do you want to build data engineering solutions at scale? If yes, this opportunity will appeal to you. We are actively seeking a talented Data Engineer to join our dynamic reporting and analytics team. We are looking for a highly motivated individual who is passionate about data, demonstrates strong autonomy, and has deep expertise in the design, creation, and management of large and complex data pipelines.

Key job responsibilities
Design and implement data pipelines and ETL processes (a minimal Redshift load sketch follows this listing)
Create scalable data models and data architectures
Drive best practices for data engineering, testing, and documentation
Ensure data quality, consistency, and compliance standards are met
Collaborate with cross-functional teams on data-driven solutions
Contribute to technical strategy and architectural decisions

Basic Qualifications
5+ years of data engineering experience
Experience with SQL
Experience with data modeling, warehousing, and building ETL pipelines
Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
Knowledge of distributed systems as they pertain to data storage and computing

Preferred Qualifications
Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
Knowledge of professional software engineering best practices for the full software development life cycle, including coding standards, software architectures, code reviews, source control management, continuous deployments, testing, and operational excellence

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ASSPL - Karnataka
Job ID: A2971909
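As a small illustration of the Redshift/S3 pipeline work this role describes, here is a hedged sketch of loading a Parquet batch from S3 into Redshift with the COPY command. The cluster endpoint, IAM role ARN, schema, and paths are placeholders.

```python
# Sketch: load a Parquet batch from S3 into Redshift via COPY (placeholders throughout).
import psycopg2  # any Postgres-protocol driver works against Redshift

COPY_SQL = """
    COPY analytics.page_views
    FROM 's3://example-bucket/curated/page_views/dt=2025-06-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'  -- role with S3 read access
    FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",  # in practice, fetch from a secrets manager
)
try:
    with conn, conn.cursor() as cur:
        cur.execute(COPY_SQL)  # Redshift pulls the files directly from S3 in parallel
finally:
    conn.close()
```

COPY is generally preferred over row-by-row INSERTs here because Redshift parallelizes the S3 reads across the cluster's slices.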

Posted 4 days ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Description
AWS Infrastructure Services owns the design, planning, delivery, and operation of all AWS global infrastructure. In other words, we're the people who keep the cloud running. We support all AWS data centers and all of the servers, storage, networking, power, and cooling equipment that ensure our customers have continual access to the innovation they rely on. We work on the most challenging problems, with thousands of variables impacting the supply chain, and we're looking for talented people who want to help. You'll join a diverse team of software, hardware, and network engineers, supply chain specialists, security experts, operations managers, and other vital roles. You'll collaborate with people across AWS to help us deliver the highest standards for safety and security while providing seemingly infinite capacity at the lowest possible cost for our customers. And you'll experience an inclusive culture that welcomes bold ideas and empowers you to own them to completion.

Do you love problem solving? Are you looking for real-world supply chain challenges? Do you have a desire to make a major contribution to the future, in the rapid-growth environment of cloud computing? Amazon Web Services is looking for a highly motivated, analytical, and detail-oriented candidate to help build scalable, predictive, and prescriptive business analytics solutions that support the AWS Supply Chain and Procurement organization. You will be part of the Supply Chain Analytics team, working with global stakeholders, data engineers, and business analysts to achieve our goals. The successful candidate is a self-starter with a combination of superior analytical and technical abilities, business acumen, and written and verbal communication skills.

Data-driven decision-making is at the core of Amazon's culture. The ideal candidate has deep expertise in gathering requirements and insights, mining large and diverse data sets, data visualization, writing complex SQL queries, building rapid prototypes using Python/R, and generating insights that enable senior leaders to make critical business decisions. The ideal candidate has experience providing guidance and support to other engineers on industry best practices and direction. They are comfortable with ambiguity and communicate clearly and effectively to all levels of the company, both in writing and in meetings. They are motivated to achieve results in a fast-paced environment.

Key job responsibilities
Understand a broad range of Amazon's data resources and processes.
Interface with global stakeholders, data engineers, and business analysts across time zones to gather requirements by asking the right questions, analyzing data, and drawing conclusions by making and validating appropriate assumptions.
Conduct deep-dive analyses of business problems and formulate conclusions and recommendations; determine optimized courses of action to deliver comprehensive analytical solutions.
Enhance analytical maturity through predictive and prescriptive analytics using machine learning and optimization techniques.
Produce written recommendations and insights for key stakeholders to help shape solution design.
Design, develop, and maintain scalable and reliable analytical tools, dashboards, and metrics that drive key supply chain and procurement decisions.
Handle multiple projects at once, dealing with ambiguity and rapidly changing priorities.

About The Team
Diverse Experiences: Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

Why AWS: Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating – that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve.

Inclusive Team Culture: AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do.

Mentorship and Career Growth: We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

Basic Qualifications
Bachelor's degree in Engineering, Statistics, Computer Science, Mathematics, Economics, Data Science, or a related field
6+ years' hands-on analytics work experience, with a proven quantitative orientation
3+ years' experience using business intelligence tools like Tableau, QuickSight, or Power BI, and hands-on experience with Python, SQL, data warehouse solutions, and databases
Experience building measures and metrics and developing reporting solutions
Ability to think big, understand business strategy, provide consultative business analysis, and leverage technical skills to create insightful BI solutions

Preferred Qualifications
Master's degree in Data Science, Operations, or Statistics from a premium institute, or an MBA from a premier business school
4+ years' experience in supply chain analytics, data science, or a related specialty
Experience with AWS technologies like Redshift, S3, Lambda, and Glue
Experience in statistical computing using Python/R

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADSIPL - Karnataka
Job ID: A2991244

Posted 4 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description
Amazon's eCommerce Foundation (eCF) organization is responsible for the core components that drive the Amazon website and customer experience. Serving millions of customer page views and orders per day, eCF builds for scale. As an organization within eCF, the Business Data Technologies (BDT) group is no exception. We collect petabytes of data from thousands of data sources inside and outside Amazon, including the Amazon catalog system, inventory system, customer order system, and page views on the website. We provide interfaces for our internal customers to access and query the data hundreds of thousands of times per day, using Amazon Web Services' (AWS) Redshift, Hive, and Spark. We build scalable solutions that grow with the Amazon business.

The BDT team is building an enterprise-wide Big Data Marketplace leveraging AWS technologies. We work closely with AWS teams like EMR/Spark, Redshift, Athena, S3, and others. We are developing innovative products, including the next generation of data catalog, data discovery engine, data transformation platform, and more, with state-of-the-art user experience. We're looking for top engineers to build them from the ground up. This is a hands-on position where you will do everything from designing and building extremely scalable components to formulating strategy and direction for Big Data at Amazon. You will also mentor junior engineers and work with the most sophisticated customers in the business to help them get the best results. You need to not only be a top software developer with excellent programming skills, an understanding of big data and parallelization, and a stellar record of delivery, but also excel at leadership and customer obsession, and have a real passion for massive-scale computing. Come help us build for the future of Data!

Key job responsibilities
An SDE-II on the Datashield team would lead product and tech initiatives within the team and beyond by partnering with internal and external stakeholders and teams. They would need to come up with technical strategies and designs for complex customer problems, leveraging out-of-the-box solutions to enable faster rollouts. They will deliver working software systems consisting of multiple features spanning the full software lifecycle, including design, implementation, testing, deployment, and maintenance strategy. The problems they need to solve do not start with a defined technology strategy and may have conflicting constraints. As a technology lead in the team, they will review other SDEs' work to ensure it fits into the bigger picture and is well designed, extensible, performant, and secure.

Basic Qualifications
3+ years of non-internship professional software development experience
2+ years of non-internship design or architecture (design patterns, reliability, and scaling) experience with new and existing systems
Experience programming with at least one software programming language
Bachelor's degree in computer science or equivalent

Preferred Qualifications
3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
1+ years of experience building large-scale machine-learning infrastructure for online recommendation, ads ranking, personalization, or search
Knowledge of professional software engineering best practices for the full software development life cycle, including coding standards, software architectures, code reviews, source control management, continuous deployments, testing, and operational excellence

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2952490

Posted 4 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description
This role is part of the Rekindle returnship program. Note: for more details on the Rekindle program, please visit https://www.amazon.jobs/en/landing_pages/rekindle

Come be a part of a rapidly expanding $35 billion global business. At Amazon Business, a fast-growing startup passionate about building solutions, we set out every day to innovate and disrupt the status quo. We stand at the intersection of tech and retail in the B2B space, developing innovative purchasing and procurement solutions to help businesses and organizations thrive. At Amazon Business, we strive to be the most recognized and preferred strategic partner for smart business buying. Bring your insight, imagination, and a healthy disregard for the impossible. Join us in building and celebrating the value of Amazon Business to buyers and sellers of all sizes and industries. Unlock your career potential.

The Amazon Business team is looking for candidates who are passionate about delivering an amazing experience to our international business customers. We focus on merging the customer experience, selection, pricing, and convenience that consumers have come to expect and love from Amazon with the features and functionality required by our business customers. As a Data Engineer on the ABDAI team, you will be working in one of the world's largest cloud-based data lakes. You should be skilled in architecting enterprise data warehouse solutions using multiple platforms (EMR, RDBMS, columnar, cloud). You should have extensive experience in the design, creation, management, and business use of extremely large datasets. You should have excellent business and communication skills, to be able to work with business owners to develop and define key business questions and to build data sets that answer those questions. Above all, you should be passionate about working with huge data sets and should be someone who loves to bring datasets together to answer business questions and drive change. We prefer candidates who can thrive in a fast-paced, high-energy, and fun work environment where we deliver value incrementally and frequently. We value highly technical, hands-on, data-driven engineers who know their subject matter deeply and are willing to learn new areas. We look for individuals who will set aside meaningful time to develop themselves and their teams as we continually learn from customers. Come join us as we continue to revolutionize procurement of goods for businesses around the world!

About The Team
Amazon Business Data Analytics and Insights (ABDAI) has two missions: (1) provide data that is accurate and reliable to accelerate business insights and data-driven innovation in trustworthy, intuitive, and cost-efficient ways; and (2) predict and value customer actions so that our business partners can be right a lot when making decisions. The ABDAI team ensures that we have the right inputs to measure our business performance. Data is the voice of our customers, and we source it from hundreds of AB and non-AB platforms/systems, as well as from 3P applications that customers interact with. We own curated source-of-truth datasets and infrastructure for AB users worldwide and provide access to our data to external consumers through secure means. We power outreach campaigns for Sales, Marketing, and Product teams through the HOTW data integrations we built with various 3rd-party applications that AB has adopted for our needs.

Basic Qualifications
3+ years of data engineering experience
Experience with data modeling, warehousing, and building ETL pipelines
Experience with SQL

Preferred Qualifications
Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ - H84
Job ID: A2972672

Posted 4 days ago

Apply

5.0 years

0 Lacs

India

On-site


About Oportun Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009. WORKING AT OPORTUN Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups. Position Overview As a Sr. Data Engineer at Oportun, you will be a key member of our team, responsible for designing, developing, and maintaining sophisticated software / data platforms in achieving the charter of the engineering group. Your mastery of a technical domain enables you to take up business problems and solve them with a technical solution. With your depth of expertise and leadership abilities, you will actively contribute to architectural decisions, mentor junior engineers, and collaborate closely with cross-functional teams to deliver high-quality, scalable software solutions that advance our impact in the market. This is a role where you will have the opportunity to take up responsibility in leading the technology effort – from technical requirements gathering to final successful delivery of the product - for large initiatives (cross-functional and multi-month-long projects). Responsibilities Data Architecture and Design: Lead the design and implementation of scalable, efficient, and robust data architectures to meet business needs and analytical requirements. Collaborate with stakeholders to understand data requirements, build subject matter expertise, and define optimal data models and structures. Data Pipeline Development And Optimization Design and develop data pipelines, ETL processes, and data integration solutions for ingesting, processing, and transforming large volumes of structured and unstructured data. Optimize data pipelines for performance, reliability, and scalability. Database Management And Optimization Oversee the management and maintenance of databases, data warehouses, and data lakes to ensure high performance, data integrity, and security. Implement and manage ETL processes for efficient data loading and retrieval. Data Quality And Governance Establish and enforce data quality standards, validation rules, and data governance practices to ensure data accuracy, consistency, and compliance with regulations. Drive initiatives to improve data quality and documentation of data assets. Mentorship And Leadership Provide technical leadership and mentorship to junior team members, assisting in their skill development and growth. Lead and participate in code reviews, ensuring best practices and high-quality code. 
Collaboration And Stakeholder Management Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand their data needs and deliver solutions that meet those needs. Communicate effectively with non-technical stakeholders to translate technical concepts into actionable insights and business value. Performance Monitoring And Optimization Implement monitoring systems and practices to track data pipeline performance, identify bottlenecks, and optimize for improved efficiency and scalability. Common Requirements You have a strong understanding of a business or system domain with sufficient knowledge & expertise around the appropriate metrics and trends. You collaborate closely with product managers, designers, and fellow engineers to understand business needs and translate them into effective solutions. You provide technical leadership and expertise, guiding the team in making sound architectural decisions and solving challenging technical problems. Your solutions anticipate scale, reliability, monitoring, integration, and extensibility. You conduct code reviews and provide constructive feedback to ensure code quality, performance, and maintainability. You mentor and coach junior engineers, fostering a culture of continuous learning, growth, and technical excellence within the team. You play a significant role in the ongoing evolution and refinement of current tools and applications used by the team, and drive adoption of new practices within your team. You take ownership of (customer) issues, including initial troubleshooting, identification of root cause and issue escalation or resolution, while maintaining the overall reliability and performance of our systems. You set the benchmark for responsiveness and ownership and overall accountability of engineering systems. You independently drive and lead multiple features, contribute to (a) large project(s) and lead smaller projects. You can orchestrate work that spans multiples engineers within your team and keep all relevant stakeholders informed. You support your lead/EM about your work and that of the team, that they need to share with the stakeholders, including escalation of issues Qualifications Bachelor's or Master's degree in Computer Science, Data Science, or a related field. 5+ years of experience in data engineering, with a focus on data architecture, ETL, and database management. Proficiency in programming languages like Python/PySpark and Java or Scala Expertise in big data technologies such as Hadoop, Spark, Kafka, etc. In-depth knowledge of SQL and experience with various database technologies (e.g., PostgreSQL, MariaDB, NoSQL databases). Experience and expertise in building complex end-to-end data pipelines. Experience with orchestration and designing job schedules using the CICD tools like Jenkins, Airflow or Databricks Ability to work in an Agile environment (Scrum, Lean, Kanban, etc) Ability to mentor junior team members. Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., AWS Redshift, S3, Azure SQL Data Warehouse). Strong leadership, problem-solving, and decision-making skills. Excellent communication and collaboration abilities. Familiarity or certification in Databricks is a plus. 
We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/. We will never request personally identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI’s Internet Crime Complaint Center (IC3).

Posted 4 days ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Description Amazon Selection and Catalog Systems (ASCS) builds the systems that host and run the comprehensive e-commerce product catalog. We power the online shopping experience for customers worldwide, enabling them to find, discover, and purchase anything they desire. Our scaled, distributed systems process hundreds of millions of updates across billions of products, including physical, digital, and service offerings. You will be part of the Catalog Support Programs (CSP) team under Catalog Support Operations (CSO) in the ASCS org. CSP provides program management, technical support, and strategic initiatives to enhance the customer experience, owning the implementation of business logic and configurations for ASCS. We are establishing a new centralized Business Intelligence team to build self-service analytical products for ASCS that provide relevant insights and data deep dives across the business. By leveraging advanced analytics and AI/ML, we will transform catalog data into predictive insights, helping prevent customer issues before they arise. Real-time intelligence will support proactive decision-making, enabling faster, data-driven decisions across the organization and driving long-term growth and an enhanced customer experience. We are looking for a creative and goal-oriented BI Engineer to join our team to harness the full potential of data-driven insights to make informed decisions, identify business opportunities, and drive business growth. This role requires an individual with excellent analytical abilities, knowledge of business intelligence solutions, business acumen, and the ability to work with various tech/product teams across ASCS. This BI Engineer will support the ASCS org by owning complex reporting and automating reporting solutions, ultimately providing insights and drivers for decision making. You must be a self-starter, able to learn on the go. You should have excellent written and verbal communication skills to work with business owners to develop and define key business questions, and to build data sets that answer those questions. As a Business Intelligence Engineer in the CSP team, you will be responsible for analyzing petabytes of data to identify business trends and points of customer friction, and developing scalable solutions to enhance customer experience and safety. You will work closely with internal stakeholders to define key performance indicators (KPIs), implement them into dashboards and reports, and present insights in a concise and effective manner. This role will involve collaborating with business and tech leaders within ASCS and cross-functional teams to solve problems, create operational efficiencies, and deliver against high organizational standards. You should be able to apply a breadth of tools, data sources, and analytical techniques to answer a wide range of high-impact business questions and proactively uncover new insights that drive decision-making by senior leadership. As a key member of the CSP team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. Expect a steep learning curve that will add a fair amount of business skills to your toolkit. 
Key job responsibilities Work closely with BIEs, Data Engineers, and Scientists in the team to collaborate effectively with product managers and create scalable solutions for business problems Create program goals and related metrics, track progress, and manage through obstacles to help the team achieve objectives Identify opportunities for improvement or automation in existing data processes and lead the changes using business acumen and data handling skills Ensure best practices on data integrity, design, testing, implementation, documentation, and knowledge sharing Contribute to supplier operations strategy development based on data analysis Lead strategic projects to formalize and scale organizational processes Build and manage weekly, monthly, and quarterly business review metrics Build data reports and dashboards using SQL, Excel, and other tools to improve business efficiency across programs Understand loosely defined or structured problems and provide BI solutions for difficult problems, delivering large-scale BI solutions Provide solutions that drive the team's business decisions and highlight new opportunities Improve code quality and optimize BI processes Demonstrate proficiency in a scripting language, data modeling, data pipeline design, and applying basic statistical methods (e.g., regression) to difficult business problems A day in the life A day in the life of a BIE-II will include: Working closely with cross-functional teams including Product/Program Managers, Software Development Managers, Applied/Research/Data Scientists, and Software Developers Building dashboards, performing root cause analysis, and sharing actionable insights with stakeholders to enable data-informed decision making Leading reporting and analytics initiatives to drive data-informed decision making Designing, developing, and maintaining ETL processes and data visualization dashboards using Amazon QuickSight Transforming complex business requirements into actionable analytics solutions. About The Team This central BIE team within ASCS will be responsible for building a structured analytical data layer, bringing in BI discipline by defining metrics in a standardized way and establishing a single definition of metrics across the catalog ecosystem. They will also identify clear sources of truth for critical data. The team will build and maintain the data pipelines for critical projects tailored to the needs of ASCS teams, leveraging catalog data to provide a unified view of product information. This will support real-time decision-making and empower teams to make data-driven decisions quickly, driving innovation. This team will leverage advanced analytics that can shift us to a proactive, data-driven approach, enabling informed decisions that drive growth and enhance the customer experience. This team will adopt best practices, standardize metrics, and continuously iterate on queries and data sets as they evolve. Automated quality controls and real-time monitoring will ensure consistent data quality across the organization. Basic Qualifications 4+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. 
Experience with data visualization using Tableau, Quicksight, or similar tools Experience with data modeling, warehousing, and building ETL pipelines Experience with statistical analysis packages such as R, SAS, and MATLAB Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling Experience developing and presenting recommendations of new metrics allowing better understanding of the performance of the business Experience writing complex SQL queries Bachelor's degree in BI, finance, engineering, statistics, computer science, mathematics, or an equivalent quantitative field Experience with scripting languages (e.g., Python, Java, R) and big data technologies/languages (e.g., Spark, Hive, Hadoop, PyTorch, PySpark) to build and maintain data pipelines and ETL processes Demonstrated proficiency in SQL, data analysis, and data visualization tools like Amazon QuickSight to drive data-driven decision making. Experience applying basic statistical methods (e.g., regression, t-test, Chi-squared) as well as exploratory, deterministic, and probabilistic analysis techniques to solve complex business problems (a minimal sketch follows this listing). Experience gathering business requirements, using industry standard business intelligence tool(s) to extract data, formulate metrics, and build reports. Track record of generating key business insights and collaborating with stakeholders. Strong verbal and written communication skills, with the ability to effectively present data insights to both technical and non-technical audiences, including senior management Preferred Qualifications Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets Master's degree in BI, finance, engineering, statistics, computer science, mathematics, or an equivalent quantitative field Proven track record of conducting large-scale, complex data analysis to support business decision-making in a data warehouse environment Demonstrated ability to translate business needs into data-driven solutions and vice versa Relentless curiosity and drive to explore emerging trends and technologies in the field Knowledge of data modeling and data pipeline design Experience with statistical analysis and correlation analysis, as well as exploratory, deterministic, and probabilistic analysis techniques Experience in designing and implementing custom reporting systems using automation tools Knowledge of how to improve code quality and optimize BI processes (e.g., speed, cost, reliability) Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Karnataka Job ID: A2990532
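The "basic statistical methods (e.g., regression, t-test)" that this and similar BIE listings keep referencing are straightforward to demonstrate in Python. A minimal sketch, assuming scipy is available and using invented data (nothing here comes from any Amazon system):

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Hypothetical weekly data: ad spend vs. orders.
ad_spend = rng.uniform(10, 100, size=52)
orders = 4.0 * ad_spend + rng.normal(0, 25, size=52)

# Simple linear regression (ordinary least squares).
slope, intercept, r_value, p_value, std_err = stats.linregress(ad_spend, orders)
print(f"orders ~ {slope:.2f} * ad_spend + {intercept:.2f} (R^2 = {r_value ** 2:.3f})")

# Two-sample t-test: did a hypothetical experiment move order volume?
control = rng.normal(200, 30, size=40)
treatment = rng.normal(215, 30, size=40)
t_stat, t_p = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {t_p:.4f}")

In an interview setting the point is less the code than the interpretation: the slope quantifies the relationship, and the p-value tells you whether the treatment effect is distinguishable from noise.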

Posted 4 days ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Description Want to join the Earth’s most customer-centric company? Do you like to dive deep to understand problems? Are you someone who likes to challenge the status quo? Do you strive to excel at the goals assigned to you? If yes, we have opportunities for you. Global Operations – Artificial Intelligence (GO-AI) at Amazon is looking to hire candidates who can excel in a fast-paced, dynamic environment. Are you somebody who likes to use and analyze big data to drive business decisions? Do you enjoy converting data into insights that business leaders worldwide will use to enhance customer decisions? Do you want to be part of the data team which measures the pulse of innovative machine-vision-based projects? If your answer is yes, join our team. GO-AI is looking for a motivated individual with strong skills and experience in resource utilization planning, process optimization, and execution of scalable and robust operational mechanisms to join the GO-AI Ops DnA team. In this position you will be responsible for supporting our sites to build solutions for the rapidly expanding GO-AI team. The role requires the ability to work with a variety of key stakeholders across job functions with multiple sites. We are looking for an entrepreneurial and analytical program manager, who is passionate about their work, understands how to manage service levels across multiple skills/programs, and who is willing to move fast and experiment often. Key job responsibilities include: Ability to maintain and refine straightforward ETL and write secure, stable, testable, maintainable code with minimal defects, and automate manual processes. Proficiency in one or more industry analytics visualization tools (e.g., Excel, Tableau/Quicksight/PowerBI) and, as needed, statistical methods (e.g., t-test, Chi-squared) to deliver actionable insights to stakeholders. Building and owning small to mid-size BI solutions with high accuracy and on-time delivery using data sets, queries, reports, dashboards, analyses, or components of larger solutions to answer straightforward business questions with data, incorporating business intelligence best practices, data management fundamentals, and analysis principles. Good understanding of the relevant data lineage: including sources of data; how metrics are aggregated; and how the resulting business intelligence is consumed, interpreted, and acted upon by the business, where the end product enables effective, data-driven business decisions. Taking high responsibility for the code, queries, reports, and analyses that are inherited or produced, and having analyses and code reviewed periodically. Effective partnering with peer BIEs and others in your team to troubleshoot, research root causes, and propose solutions, either taking ownership of their resolution or ensuring a clear hand-off to the right owner. About The Team The Global Operations – Artificial Intelligence (GO-AI) team is an initiative which remotely handles exceptions in Amazon Robotic Fulfillment Centers globally. GO-AI seeks to complement automated vision-based decision-making technologies by providing remote human support for the subset of tasks which require higher cognitive ability and cannot be processed through automated decision making with high confidence. This team provides end-to-end solutions through inbuilt competencies of Operations and strong central specialized teams to deliver programs at Amazon scale. 
The team operates multiple programs, including Nike IDS, Proteus, Sparrow, and other new initiatives, in partnership with global technology and operations teams. Basic Qualifications 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries) Experience with one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, PowerBI) and statistical methods (e.g., t-test, Chi-squared) Experience with a scripting language (e.g., Python, Java, or R) Experience applying basic statistical methods (e.g., regression) to difficult business problems Preferred Qualifications Master's degree or advanced technical degree Experience with statistical analysis and correlation analysis Knowledge of how to improve code quality and optimize BI processes (e.g., speed, cost, reliability) Excellence in technical communication with peers, partners, and non-technical cohorts Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - Amazon Development Centre (India) Private Limited Job ID: A2972707

Posted 4 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description Are you passionate about data? Does the prospect of dealing with massive volumes of data excite you? Do you want to build data engineering solutions that process billions of records a day in a scalable fashion using AWS technologies? Do you want to create the next-generation tools for intuitive data access? If so, Amazon Finance Technology (FinTech) is for you! FinTech is seeking a Data Engineer to join the team that is shaping the future of the finance data platform. The team is committed to building the next-generation big data platform that will be one of the world's largest finance data warehouses, supporting Amazon's rapidly growing and dynamic businesses, and using it to deliver BI applications that have an immediate influence on day-to-day decision making. Amazon has a culture of data-driven decision-making, and demands data that is timely, accurate, and actionable. Our platform serves Amazon's finance, tax, and accounting functions across the globe. As a Data Engineer, you should be an expert in data warehousing technical components (e.g., data modeling, ETL, and reporting), infrastructure (e.g., hardware and software), and their integration. You should have a deep understanding of the architecture for enterprise-level data warehouse solutions using multiple platforms (RDBMS, columnar, cloud). You should be an expert in the design, creation, management, and business use of large data sets. You should have excellent business and communication skills to work with business owners to develop and define key business questions, and to build data sets that answer those questions. The candidate is expected to build efficient, flexible, extensible, and scalable ETL and reporting solutions (a minimal sketch follows this listing). You should be enthusiastic about learning new technologies and able to implement solutions using them to provide new functionality to users or to scale the existing platform. Excellent written and verbal communication skills are required, as this person will work very closely with diverse teams. Strong analytical skills are a plus. Above all, you should be passionate about working with huge data sets and someone who loves to bring data sets together to answer business questions and drive change. Our ideal candidate thrives in a fast-paced environment, relishes working with large transactional volumes and big data, enjoys the challenge of highly complex business contexts (that are typically being defined in real time), and, above all, is passionate about data and analytics. In this role you will be part of a team of engineers creating the world's largest financial data warehouses and BI tools for Amazon's expanding global footprint. Key job responsibilities Design, implement, and support a platform providing secured access to large datasets. Interface with tax, finance, and accounting customers, gathering requirements and delivering complete BI solutions. Model data and metadata to support ad-hoc and pre-built reporting. Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc., to drive key business decisions. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Tune application and query performance using profiling tools and SQL. Analyze and solve problems at their root, stepping back to understand the broader context. Learn and understand a broad range of Amazon’s data resources and know when, how, and which to use and which not to use. 
Keep up to date with advances in big data technologies and run pilots to design the data architecture to scale with the increased data volume using AWS. Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for datasets. Triage many possible courses of action in a high-ambiguity environment, making use of both quantitative analysis and business judgment. Basic Qualifications - 3+ years of data engineering experience - Experience with data modeling, warehousing, and building ETL pipelines - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) - Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions - Experience with data visualization software (e.g., AWS QuickSight or Tableau) or an open-source project - Bachelor's or Master's degree Preferred Qualifications 5+ years of data engineering experience Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases) Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - Amazon Dev Center India - Hyderabad Job ID: A2953275
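To make the ETL responsibilities above concrete, here is a minimal PySpark sketch of the kind of S3-to-S3 transform step such pipelines are built from. The bucket names, paths, and columns are hypothetical placeholders, not anything from Amazon's systems:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("finance-etl-sketch").getOrCreate()

# Hypothetical input: daily transaction extracts landed as CSV in S3.
txns = spark.read.csv("s3://example-bucket/raw/transactions/", header=True, inferSchema=True)

# Basic cleansing and enrichment.
cleaned = (
    txns.dropDuplicates(["transaction_id"])
    .filter(F.col("amount").isNotNull())
    .withColumn("txn_date", F.to_date("txn_timestamp"))
)

# Write back as partitioned Parquet, a layout Redshift can load via COPY
# or query in place via Redshift Spectrum.
cleaned.write.mode("overwrite").partitionBy("txn_date").parquet(
    "s3://example-bucket/curated/transactions/"
)

The same job could run on EMR or be wrapped in an AWS Glue job; the partitioned Parquet layout is what keeps downstream queries cheap.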

Posted 4 days ago

Apply

5.0 years

5 - 8 Lacs

Bengaluru

On-site


- 5+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. - Experience with data visualization using Tableau, Quicksight, or similar tools - Experience with data modeling, warehousing, and building ETL pipelines - Experience with statistical analysis packages such as R, SAS, and MATLAB - Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling The role of the Sub Same Day business is to provide ultrafast speeds (2-hour and same-day scheduled delivery) and reliable delivery for the selection that customers want, fast. Customers find their daily essentials and a curated selection of Amazon’s top-selling items with sub-same-day promises. The program is highly cross-functional in nature, operations intensive, and requires a number of India-first solutions to be created, which then need to be scaled worldwide. In this context, SSD is looking for a talented, driven, and experienced Business Analyst. It is a pivotal role that will contribute to the evolution and success of one of the fastest growing businesses in the company. Joining the Amazon team means partnering with a dynamic and creative group who set a high bar for innovation and success in a fast-paced and changing environment. The Business Analyst is responsible for influencing critical business decisions using data and providing insights that category teams can act on. The successful candidate needs to have: - A passion for numbers, data, and challenges. - High attention to detail and proven ability to manage multiple, competing priorities simultaneously. - Excellent verbal and written communication skills. - An ability to work in a fast-paced, complex environment where continuous innovation is desired. - Bias for action and ownership. - A history of teamwork and willingness to roll up one’s sleeves to get the job done. - Ability to work with diverse teams and people across levels in an organization. - Proven analytical and quantitative skills (including the ability to effectively use tools such as Excel and SQL) and an ability to use hard data and metrics to back up assumptions and justify business decisions. Key job responsibilities - Influence business decisions with data. - Use data resources to accomplish assigned analytical tasks relating to critical business metrics. - Monitor key metrics and escalate anomalies as needed. - Provide input on suggested business actions based on analytical findings. Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 4 days ago

Apply

2.0 years

1 - 1 Lacs

India

On-site


Dear candidates, we are hiring for a Data Analyst. Qualification: Any graduation. Experience: 2+ years. Location: Bangalore. What We’re Looking For: ● 2–4 years of experience as a data analyst or in a similar role, preferably within a product-based company ● Strong analytical and problem-solving skills with a knack for pattern recognition and anomaly detection (see the sketch after this listing) ● Proficiency in SQL, Python (Pandas, NumPy, Matplotlib/Seaborn), and data visualization tools (e.g., Metabase, Looker Studio) ● Comfortable working with analytical databases (e.g., BigQuery, Redshift, Snowflake) to handle large-scale data operations efficiently ● Ability to communicate complex insights to both technical and non-technical stakeholders ● A proactive mindset and a passion for using data to drive meaningful change ● Good to have: hands-on experience with time-series or telemetry data, especially from IoT or connected vehicle systems, and exposure to basic ML techniques Note: Candidates from manufacturing companies will be preferred. Job Type: Full-time Pay: ₹120,000.00 - ₹180,000.00 per year Work Location: In person
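Since the ad leads with pattern recognition and anomaly detection in Pandas, here is a minimal sketch of a common first-pass approach, a rolling z-score over a telemetry series. The column names, window size, and threshold are hypothetical:

import numpy as np
import pandas as pd

# Hypothetical hourly telemetry readings.
df = pd.DataFrame({
    "ts": pd.date_range("2024-01-01", periods=500, freq="h"),
    "reading": np.random.default_rng(7).normal(50, 5, size=500),
})

# Rolling mean and standard deviation over a 24-hour window.
window = df["reading"].rolling(24)
df["zscore"] = (df["reading"] - window.mean()) / window.std()

# Flag points more than 3 standard deviations from the local mean.
anomalies = df[df["zscore"].abs() > 3]
print(anomalies[["ts", "reading", "zscore"]])

A follow-up step would be inspecting the flagged windows by hand and tuning the window size and threshold to the behaviour of the actual sensor.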

Posted 4 days ago

Apply

13.0 years

3 - 7 Lacs

Chennai

On-site


Company Description Organizations everywhere struggle under the crushing costs and complexities of “solutions” that promise to simplify their lives. To create a better experience for their customers and employees. To help them grow. Software is a choice that can make or break a business. Create better or worse experiences. Propel or throttle growth. Business software has become a blocker instead of a way to get work done. There’s another option. Freshworks. With a fresh vision for how the world works. At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use, and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks’ customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us. Job Description We are looking for a BI Architect with 13+ years of experience to lead the design and implementation of scalable BI and data architecture solutions. The role involves driving data modeling, cloud-based pipelines, migration projects, and data lake initiatives using technologies like AWS, Kafka, Spark, SQL, and Python. Experience with EDW modeling and architecture is a strong plus. Key Responsibilities Design and develop scalable BI and data models to support enterprise analytics. Lead data platform migration from legacy BI systems to modern cloud architectures. Architect and manage data lakes, batch and streaming pipelines, and real-time integrations via Kafka and APIs (see the sketch after this listing). Support data governance, quality, and access control initiatives. Partner with data engineers, analysts, and business stakeholders to deliver reliable, high-performing data solutions. Contribute to architecture decisions and platform scalability planning. Qualifications Should have 13-19 years of relevant experience. 10+ years in BI, data engineering, or data architecture roles. Proficiency in SQL, Python, Apache Spark, and Kafka. Strong hands-on experience with AWS data services (e.g., S3, Redshift, Glue, EMR). Track record of leading data migration and modernization projects. Solid understanding of data governance, security, and scalable pipeline design. Excellent collaboration and communication skills. Good to Have Experience with enterprise data warehouse (EDW) modeling and architecture. Familiarity with BI tools like Power BI, Tableau, Looker, or Quicksight. Knowledge of lakehouse, data mesh, or modern data stack concepts. Additional Information At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion, irrespective of their background, gender, race, sexual orientation, religion, and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities, and the business.
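For the batch-and-streaming pipeline work this role centers on, a minimal Spark Structured Streaming sketch shows the basic shape: read events from Kafka, decode them, and append them to a data lake. The broker address, topic, and S3 paths are hypothetical placeholders:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-lake-sketch").getOrCreate()

# Hypothetical Kafka source of JSON events.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "product-events")
    .load()
)

# Kafka delivers raw bytes; cast the value to string for downstream parsing.
decoded = events.select(F.col("value").cast("string").alias("payload"))

# Append to the lake as Parquet; the checkpoint makes the stream restartable.
query = (
    decoded.writeStream.format("parquet")
    .option("path", "s3://example-bucket/lake/product-events/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/product-events/")
    .start()
)
query.awaitTermination()

Running this requires the spark-sql-kafka package on the classpath; the same pattern extends to parsing the JSON payload into columns and writing to a lakehouse table format.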

Posted 4 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Position Summary Manager – Sr. DevSecOps, Product & Engineering (PxE) As a Sr. DevSecOps Engineer, you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive DevSecOps engineering craftsmanship and advanced proficiency across multiple programming languages, DevSecOps tools, and modern frameworks, consistently demonstrating your exemplary track record in delivering high-quality, outcome-focused CI/CD and automation solutions. The ideal candidate will be a role model and an engineering mentor, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions. Key Responsibilities: Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop DevSecOps engineering solutions that solve complex automation problems with valuable outcomes, ensuring high-quality, lean, resilient, and secure pipelines with low operating costs, meeting platform/technology KPIs. Technical Leadership and Advocacy: Serve as the technical advocate for modern DevSecOps practices, ensuring integrity, feasibility, and alignment with business and customer goals, NFRs, and applicable automation/integration/security practices—being responsible for designing and maintaining code repos, CI/CD pipelines, integrations (code quality, QE automation, security, etc.) and environments (sandboxes, dev, test, stage, production) through IaC, both for custom and package solutions, including identifying, assessing, and remediating vulnerabilities. Engineering Craftsmanship: Possess passion and experience as an individual contributor, responsible for the integrity and design of DevSecOps pipelines, environments, and the technical resilience of implementations and design, while driving deployment techniques like Blue-Green and Canary to minimize downtime and enable A/B testing approaches. Be always hands-on and actively engage with engineers to ensure DevSecOps practices are understood and can be implemented, working with them closely during sprints, helping resolve any technical issues through to production operations (e.g., leading triage and troubleshooting of production issues). Be self-driven to learn new technologies, experiment with engineers, and inspire the team to learn and drive application of those new technologies. Customer-Centric Engineering: Develop lean, and yet scalable and flexible, DevSecOps automations through rapid, inexpensive experimentation to solve customer needs, enabling version control, security, logging, feedback loops, continuous delivery, etc. Engage with customers and product teams to deliver the right solution for the product in the right way at the right time. Incremental and Iterative Delivery: Exhibit a mindset that favors action and evidence over extensive planning. Utilize a leaning-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions. Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, engineering, delivery, infrastructure, and security. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. 
Foster a collaborative environment that enhances team synergy and innovation. Advanced Technical Proficiency: Possess deep expertise in modern software engineering practices and principles, including Agile methodologies, DevSecOps, and Continuous Integration/Continuous Deployment. Act as a role model, leveraging these techniques to optimize solutioning and product delivery, ensuring high-quality outcomes with minimal waste. Demonstrate proficiency in the full lifecycle of product development, from conceptualization and design to implementation and scaling, with a focus on continuous improvement and learning. Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs into technical requirements and automations. Navigate various enterprise functions such as business and enabling areas as well as product, experience, engineering, delivery, infrastructure, and security to drive product value and feasibility as well as alignment with organizational goals. Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating complex technical concepts clearly and compellingly. Inspire and influence stakeholders at all levels through well-structured arguments and trade-offs supported by evidence, evaluations, and research. Create coherent narratives that align technical solutions with business objectives. Engagement and Collaborative Co-Creation: Engage and collaborate with stakeholders at all organizational levels, from team members to senior executives. Build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Align diverse perspectives and drive consensus to create feasible solutions. Qualification Required: Education and Experience A bachelor’s degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required; experience is the most relevant factor. Excellent software engineering foundation with deep understanding of OOP/OOD, functional programming, data structures and algorithms, software design patterns, code instrumentation, etc. 8+ years of proven experience with Python, Bash, PowerShell, JavaScript, C#, and Golang (preferred). 8+ years of proven experience with CI/CD tools (Azure DevOps and GitHub Enterprise) and Git (version control, branching, merging, handling pull requests) to automate build, test, and deployment processes. 5+ years of hands-on experience in security tools automation SAST/DAST (SonarQube, Fortify, Mend), monitoring/logging (Prometheus, Grafana, Dynatrace), and other cloud-native tools on AWS, Azure, and GCP. 5+ years of hands-on experience in using Infrastructure as Code (IaC) technologies like Terraform, Puppet, Kubernetes (K8s), Azure Resource Manager (ARM), AWS CloudFormation, and Google Cloud Deployment Manager. 3+ years of hands-on experience with cloud-native services like Data Lakes, CDN, API Gateways, Managed PaaS, Security, etc., on multiple cloud providers like AWS, Azure, and GCP is preferred. 3+ years of experience with AI/ML and GenAI is preferred. Deep understanding of methodologies like XP, Lean, and SAFe to deliver high-quality products rapidly. General understanding of cloud providers' security practices and database technologies and maintenance (e.g., RDS, DynamoDB, Redshift, Aurora, Azure SQL, Google Cloud SQL). 
General knowledge of networking, firewalls, and load balancers. Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care. Location: Hyderabad. Shift timing: 11 AM to 8 PM. How You Will Grow At Deloitte, we have invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people’s growth and development. Explore DU: The Leadership Center in India. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Deloitte’s culture Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte. Corporate citizenship Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people, and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world. Recruiting tips From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. 
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their careers. Requisition code: 302607

Posted 4 days ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site


Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In This Role, Your Responsibilities May Include Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results Preferred Education Master's Degree Required Technical And Professional Expertise Experience in big data technologies like Hadoop, Apache Spark, and Hive. Practical experience in Core Java (1.8 preferred)/Python/Scala. Experience with AWS cloud services including S3, Redshift, EMR, etc. Strong expertise in RDBMS and SQL. Good experience in Linux and shell scripting. Experience in building data pipelines using Apache Airflow Preferred Technical And Professional Experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences

Posted 4 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Job Title: Senior Data Engineer – Data Quality, Ingestion & API Development Mandatory skill set: Python, PySpark, AWS, Glue, Lambda, CI/CD Total experience: 8+ years Relevant experience: 8+ years Work location: Trivandrum/Kochi Candidates from Kerala and Tamil Nadu who are ready to relocate to the above work locations are preferred. Candidates must have experience in a lead Data Engineer role. Job Overview We are seeking an experienced Senior Data Engineer to lead the development of a scalable data ingestion framework while ensuring high data quality and validation. The successful candidate will also be responsible for designing and implementing robust APIs for seamless data integration. This role is ideal for someone with deep expertise in building and managing big data pipelines using modern AWS-based technologies, and who is passionate about driving quality and efficiency in data processing systems. Key Responsibilities • Data Ingestion Framework: o Design & Development: Architect, develop, and maintain an end-to-end data ingestion framework that efficiently extracts, transforms, and loads data from diverse sources. o Framework Optimization: Use AWS services such as AWS Glue, Lambda, EMR, ECS, EC2, and Step Functions to build highly scalable, resilient, and automated data pipelines. • Data Quality & Validation: o Validation Processes: Develop and implement automated data quality checks, validation routines, and error-handling mechanisms to ensure the accuracy and integrity of incoming data (see the sketch after this listing). o Monitoring & Reporting: Establish comprehensive monitoring, logging, and alerting systems to proactively identify and resolve data quality issues. • API Development: o Design & Implementation: Architect and develop secure, high-performance APIs to enable seamless integration of data services with external applications and internal systems. o Documentation & Best Practices: Create thorough API documentation and establish standards for API security, versioning, and performance optimization. • Collaboration & Agile Practices: o Cross-Functional Communication: Work closely with business stakeholders, data scientists, and operations teams to understand requirements and translate them into technical solutions. o Agile Development: Participate in sprint planning, code reviews, and agile ceremonies, while contributing to continuous improvement initiatives and CI/CD pipeline development (using tools like GitLab). Required Qualifications • Experience & Technical Skills: o Professional Background: At least 5 years of relevant experience in data engineering with a strong emphasis on analytical platform development. o Programming Skills: Proficiency in Python and/or PySpark, and SQL, for developing ETL processes and handling large-scale data manipulation. o AWS Expertise: Extensive experience using AWS services including AWS Glue, Lambda, Step Functions, and S3 to build and manage data ingestion frameworks. o Data Platforms: Familiarity with big data systems (e.g., AWS EMR, Apache Spark, Apache Iceberg) and databases like DynamoDB, Aurora, Postgres, or Redshift. o API Development: Proven experience in designing and implementing RESTful APIs and integrating them with external and internal systems. o CI/CD & Agile: Hands-on experience with CI/CD pipelines (preferably with GitLab) and Agile development methodologies. • Soft Skills: o Strong problem-solving abilities and attention to detail. o Excellent communication and interpersonal skills with the ability to work independently and collaboratively. 
o Capacity to quickly learn and adapt to new technologies and evolving business requirements. Preferred Qualifications • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. • Experience with additional AWS services such as Kinesis, Firehose, and SQS. • Familiarity with data lakehouse architectures and modern data quality frameworks. • Prior experience in a role that required proactive data quality management and API-driven integrations in complex, multi-cluster environments. Interested candidates, please send your resume to: gigin.raj@greenbayit.com. Mobile: 8943011666
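The automated data quality checks this listing calls for usually reduce to a few assertions run inside the pipeline before data is published. A minimal PySpark sketch, with a hypothetical table, columns, and rules (adjust to the real schema):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks-sketch").getOrCreate()

# Hypothetical curated dataset awaiting validation.
df = spark.read.parquet("s3://example-bucket/curated/orders/")

failures = []

# Rule 1: required columns must not contain nulls.
for col in ["order_id", "customer_id", "amount"]:
    null_count = df.filter(F.col(col).isNull()).count()
    if null_count > 0:
        failures.append(f"{col}: {null_count} null values")

# Rule 2: the primary key must be unique.
duplicate_count = df.count() - df.dropDuplicates(["order_id"]).count()
if duplicate_count > 0:
    failures.append(f"order_id: {duplicate_count} duplicate rows")

# Rule 3: a simple domain rule on values.
negative_count = df.filter(F.col("amount") < 0).count()
if negative_count > 0:
    failures.append(f"amount: {negative_count} negative values")

# Fail loudly so the orchestrator (Step Functions, Airflow, etc.) can alert.
if failures:
    raise ValueError("Data quality checks failed: " + "; ".join(failures))

In production these rules would typically live in a config-driven framework (the listing mentions "modern data quality frameworks"), but the shape of the checks is the same.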

Posted 4 days ago

Apply

9.0 years

0 Lacs

Pune, Maharashtra, India

On-site


📈 Experience: 9+ Years 📍 Location: Pune 📢 Candidates with an immediate to 15-day notice period are highly encouraged to apply! 🔧 Primary Skills: Data Engineer, Lead, Architect, Python, SQL, Apache Airflow, Apache Spark, AWS (S3, Lambda, Glue) Job Overview We are seeking a highly skilled Data Architect / Data Engineering Lead with over 9 years of experience to drive the architecture and execution of large-scale, cloud-native data solutions. This role demands deep expertise in Python, SQL, Apache Spark, and Apache Airflow, and extensive hands-on experience with AWS services. You will lead a team of engineers, design robust data platforms, and ensure scalable, secure, and high-performance data pipelines in a cloud-first environment. Key Responsibilities Data Architecture & Strategy Architect end-to-end data platforms on AWS using services such as S3, Redshift, Glue, EMR, Athena, Lambda, and Step Functions. Design scalable, secure, and reliable data pipelines and storage solutions. Establish data modeling standards, metadata practices, and data governance frameworks. Leadership & Collaboration Lead, mentor, and grow a team of data engineers, ensuring delivery of high-quality, well-documented code. Collaborate with stakeholders across engineering, analytics, and product to align data initiatives with business objectives. Champion best practices in data engineering, including reusability, scalability, and observability. Pipeline & Platform Development Develop and maintain scalable ETL/ELT pipelines using Apache Airflow, Apache Spark, and AWS Glue (see the Airflow sketch after this listing). Write high-performance data processing code using Python and SQL. Manage data workflows and orchestrate complex dependencies using Airflow and AWS Step Functions. Monitoring, Security & Optimization Ensure data reliability, accuracy, and security across all platforms. Implement monitoring, logging, and alerting for data pipelines using AWS-native and third-party tools. Optimize cost, performance, and scalability of data solutions on AWS. Required Qualifications 9+ years of experience in data engineering or related fields, with at least 2 years in a lead or architect role. Proven experience with: Python and SQL for large-scale data processing. Apache Spark for batch and streaming data. Apache Airflow for workflow orchestration. AWS cloud services, including but not limited to S3, Redshift, EMR, Glue, Athena, Lambda, IAM, and CloudWatch. Strong understanding of data modeling, distributed systems, and modern data architecture patterns. Excellent leadership, communication, and stakeholder management skills. Preferred Qualifications Experience implementing data platforms using AWS Lakehouse architecture. Familiarity with Docker, Kubernetes, or similar container/orchestration systems. Knowledge of CI/CD and DevOps practices for data engineering. Understanding of data privacy and compliance standards (GDPR, HIPAA, etc.).
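Workflow orchestration with Apache Airflow, which this role leads, means expressing each pipeline as a DAG of dependent tasks. A minimal sketch, assuming a recent Airflow 2.x install and purely illustrative task bodies:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw files from S3")  # placeholder for real extract logic


def transform():
    print("run the Spark transform")  # placeholder


def load():
    print("COPY curated data into Redshift")  # placeholder


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Declare the dependency chain; Airflow schedules and retries the rest.
    extract_task >> transform_task >> load_task

In practice the PythonOperator bodies would be replaced by provider operators (e.g., Glue or EMR operators), with the same DAG structure.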

Posted 4 days ago

Apply

5.0 - 7.0 years

0 Lacs

India

On-site


About the role: As a Data Engineer, you will be instrumental in managing our extensive soil carbon dataset and creating robust data systems. You are expected to be involved in the full project lifecycle, from planning and design, through development, and on to maintenance, including pipelines and dashboards. You’ll interact with Product Managers, Project Managers, Business Development, and Operations teams to understand business demands and translate them into technical solutions. Your goal is to provide an organisation-wide source of truth for various downstream activities while also working towards improving and modernising our current platform. Key responsibilities: Design, develop, and maintain scalable data pipelines to process soil carbon and agricultural data Create and optimise database schemas and queries Implement data quality controls and validation processes Adapt existing data flows and schemas to new products and services under development Required qualifications: BS/B.Tech in Computer Science or equivalent practical experience, with 5-7 years as a Data Engineer or in a similar role. Strong SQL skills and experience optimising complex queries Proficiency with relational databases, preferably MySQL Experience building data pipelines, transformations, and dashboards Ability to troubleshoot and fix performance and data issues across the database Experience with AWS services (especially Glue, S3, RDS) Exposure to the big data ecosystem – Snowflake/Redshift/Tableau/Looker Python programming skills Excellent written and verbal communication skills in English An ideal candidate would also have: A high degree of attention to detail to uncover data discrepancies and fix them Familiarity with geospatial data Experience with scientific or environmental datasets Some understanding of the agritech or environmental sustainability sectors

Posted 4 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it is a digital engineering and IT services company helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia. Job Title: AWS Data Engineer Location: Pan India Experience: 6-8 Years Job Type: Contract to Hire Notice Period: Immediate Joiners Mandatory Skills: AWS services (S3, Lambda, Redshift, Glue), Python, PySpark, SQL Job description: At Storable, we're on a mission to power the future of storage. Our innovative platform helps businesses manage, track, and grow their self-storage operations, and we're looking for a Data Manager to join our data-driven team. Storable is committed to leveraging cutting-edge technologies to improve the efficiency, accessibility, and insights derived from data, empowering our team to make smarter decisions and foster impactful growth. As a Data Manager, you will play a pivotal role in overseeing and shaping our data operations, ensuring that our data is organized, accessible, and effectively managed across the organization. You will lead a talented team, work closely with cross-functional teams, and drive the development of strategies to enhance data quality, availability, and security. Key Responsibilities: Lead Data Management Strategy: Define and execute the data management vision, strategy, and best practices, ensuring alignment with Storable's business goals and objectives. Oversee Data Pipelines: Design, implement, and maintain scalable data pipelines using industry-standard tools to efficiently process and manage large-scale datasets. Ensure Data Quality & Governance: Implement data governance policies and frameworks to ensure data accuracy, consistency, and compliance across the organization. Manage Cross-Functional Collaboration: Partner with engineering, product, and business teams to make data accessible and actionable, and ensure it drives informed decision-making. Optimize Data Infrastructure: Leverage modern data tools and platforms (AWS, Apache Airflow, Apache Iceberg) to create an efficient, reliable, and scalable data infrastructure, and monitor and improve its performance. Mentorship & Leadership: Lead and develop a team of data engineers and analysts, fostering a collaborative environment where innovation and continuous improvement are valued. Qualifications Proven Expertise in Data Management: Significant experience in managing data infrastructure, data governance, and optimizing data pipelines at scale. 
Technical Proficiency: Strong hands-on experience with data tools and platforms such as Apache Airflow and Apache Iceberg, and AWS services (S3, Lambda, Redshift, Glue). Data Pipeline Mastery: Familiarity with designing, implementing, and optimizing data pipelines and workflows in Python or other languages for data processing. Experience with Data Governance: Solid understanding of data privacy, quality control, and governance best practices. Leadership Skills: Ability to lead and mentor teams, influence stakeholders, and drive data initiatives across the organization. Analytical Mindset: Strong problem-solving abilities and a data-driven approach to improving business operations. Excellent Communication: Ability to communicate complex data concepts to both technical and non-technical stakeholders effectively. Bonus Points: Experience with visualization tools (Looker, Tableau) and reporting frameworks to provide actionable insights.

Posted 4 days ago

Apply

Exploring Redshift Jobs in India

The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a plethora of opportunities in various industries across the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

In the field of Redshift, a typical career path may include roles such as: - Junior Developer - Data Engineer - Senior Data Engineer - Tech Lead - Data Architect

Related Skills

Apart from expertise in Redshift, proficiency in the following skills can be beneficial: - SQL - ETL Tools - Data Modeling - Cloud Computing (AWS) - Python/R Programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift. (medium; illustrated in the sketch after this list)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
  • What are the different types of Redshift nodes and their use cases? (basic)
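Several of these questions (DISTKEY vs. SORTKEY, the COPY command, vacuuming) are easiest to answer with a concrete snippet. Below is a minimal sketch in Python using psycopg2, which works against Redshift's PostgreSQL-compatible endpoint; the cluster endpoint, credentials, table, S3 path, and IAM role are all hypothetical placeholders:

import psycopg2

# Hypothetical connection details for a Redshift cluster.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="analyst",
    password="example-password",
)
cur = conn.cursor()

# DISTKEY decides which node each row lives on (collocating joins on that key);
# SORTKEY sets the on-disk sort order so scans can skip irrelevant blocks.
cur.execute("""
    CREATE TABLE IF NOT EXISTS sales (
        sale_id     BIGINT,
        customer_id BIGINT,
        sale_date   DATE,
        amount      DECIMAL(12, 2)
    )
    DISTKEY (customer_id)
    SORTKEY (sale_date);
""")

# COPY is the bulk-load path: massively parallel ingestion straight from S3.
cur.execute("""
    COPY sales
    FROM 's3://example-bucket/exports/sales/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
    FORMAT AS CSV;
""")
conn.commit()

# VACUUM reclaims space from deleted rows and restores sort order.
# It cannot run inside a transaction block, hence autocommit.
conn.autocommit = True
cur.execute("VACUUM sales;")

A good interview answer pairs the snippet with the reasoning: choose the DISTKEY that collocates your largest joins, and a SORTKEY matching your most common range filters.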

Conclusion

As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!
