
1965 Redshift Jobs - Page 41

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

10.0 - 15.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Title: Lead Data Engineer
Location: All EXL Locations
Experience: 10 to 15 Years

Job Summary
The Lead Data Engineer will provide technical expertise in the analysis, design, development, rollout, and maintenance of data integration initiatives. This role contributes to implementation methodologies and best practices, and works on project teams to analyze, design, develop, and deploy business intelligence / data integration solutions that support a variety of customer needs. The position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings, and initiatives through mentoring and coaching. It provides technical expertise in needs identification, data modelling, data movement and transformation mapping (source to target), automation, and testing strategies, translating business needs into technical solutions that adhere to established data guidelines and approaches, while leveraging best-fit technologies (e.g., cloud, Hadoop, NoSQL) to address business and environmental challenges. The role works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports, and business intelligence best practices, and is responsible for repeatable, lean, and maintainable enterprise BI design across organizations. The Lead Data Engineer partners effectively with the client team. We expect leadership not only in the conventional sense but also within the team: candidates should exhibit qualities such as innovation, critical thinking, optimism, communication, time management, collaboration, problem-solving, acting independently, knowledge sharing, and approachability.

Responsibilities:
- Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, and Sqoop.
- Create functional and technical documentation, e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, and data testing plans.
- Take a consultative approach with business users, asking questions to understand the business need, and derive the data flow and the conceptual, logical, and physical data models based on those needs.
- Perform data analysis to validate data models and confirm the ability to meet business needs.
- May serve as project or DI lead, overseeing multiple consultants from various competencies.
- Stay current with emerging and changing technologies to recommend and implement the most beneficial technologies and approaches for data integration.
- Ensure proper execution and creation of methodology, training, templates, resource plans, and engagement review processes.
- Coach team members to ensure understanding of projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities where appropriate.
- Coordinate and consult with the project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else data-related at the project or business unit level.
- Architect, design, develop, and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations, and best-practice standards. Toolsets include, but are not limited to: SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, and Qlik.
- Work with the report team to identify, design, and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices.

Must have:
- Writing code in a programming language, with working experience in Python, PySpark, Databricks, Scala, or similar.
- Data pipeline development and management: design, develop, and maintain ETL (Extract, Transform, Load) pipelines using AWS services such as AWS Glue, AWS Data Pipeline, Lambda, and Step Functions.
- Implement incremental data processing using tools like Apache Spark (EMR), Kinesis, and Kafka.
- Work with AWS data storage solutions such as Amazon S3, Redshift, RDS, DynamoDB, and Aurora.
- Optimize data partitioning, compression, and indexing for efficient querying and cost optimization.
- Implement data lake architecture using AWS Lake Formation and the Glue Catalog.
- Implement CI/CD pipelines for data workflows using CodePipeline, CodeBuild, and GitHub Actions.

Good to have:
- Enterprise data modelling and semantic modelling, with working experience in ERwin, ER/Studio, PowerDesigner, or similar.
- Logical/physical modelling on big data sets or a modern data warehouse, with working experience in ERwin, ER/Studio, PowerDesigner, or similar.
- Agile process (Scrum cadences, roles, deliverables) and a basic understanding of Azure DevOps, JIRA, or similar.

Key skills: Python, PySpark, AWS, Databricks, SQL.
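As an illustration of the incremental pipeline work this posting describes, here is a minimal, hypothetical PySpark sketch that reads one day's raw drop from S3 and writes partitioned, compressed Parquet back out. The bucket paths, column names, and run date are placeholder assumptions, not taken from the posting.

```python
# Hypothetical incremental ETL sketch (PySpark); paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental-orders-etl").getOrCreate()

# Process only one day's raw drop (incremental, rather than full reloads).
run_date = "2024-01-15"  # in practice injected by the scheduler
raw = spark.read.json(f"s3://example-raw-bucket/orders/dt={run_date}/")

# Light transformation: type casting and de-duplication on the business key.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("dt", F.lit(run_date))
       .dropDuplicates(["order_id"])
)

# Partitioned, compressed Parquet keeps downstream scans cheap (cost optimization).
(clean.write
      .mode("overwrite")
      .partitionBy("dt")
      .option("compression", "snappy")
      .parquet("s3://example-curated-bucket/orders/"))
```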

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Summary
We are seeking a highly experienced and strategic Lead Data Architect with 8+ years of hands-on experience designing and leading data architecture initiatives. This individual will play a critical role in building scalable, secure, and high-performance data solutions that support enterprise-wide analytics, reporting, and operational systems. The ideal candidate will be both technically proficient and business-savvy, capable of translating complex data needs into innovative architecture designs.

Key Responsibilities
- Design and implement enterprise-wide data architecture to support business intelligence, advanced analytics, and operational data needs.
- Define and enforce standards for data modeling, integration, quality, and governance.
- Lead the adoption and integration of modern data platforms (data lakes, data warehouses, streaming, etc.).
- Develop architecture blueprints, frameworks, and roadmaps aligned with business objectives.
- Ensure data security, privacy, and regulatory compliance (e.g., GDPR, HIPAA).
- Collaborate with business, engineering, and analytics teams to deliver high-impact data solutions.
- Provide mentorship and technical leadership to data engineers and junior architects.
- Evaluate emerging technologies and provide recommendations for future-state architectures.

Required Qualifications
- 8+ years of experience in data architecture, data engineering, or a similar senior technical role.
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Expertise in designing and managing large-scale data systems using cloud platforms (AWS, Azure, or GCP).
- Strong proficiency in data modeling (relational, dimensional, NoSQL) and modern database systems (e.g., Snowflake, BigQuery, Redshift).
- Hands-on experience with data integration tools (e.g., Apache NiFi, Talend, Informatica) and orchestration tools (e.g., Airflow).
- In-depth knowledge of data governance, metadata management, and data cataloging solutions.
- Experience with real-time and batch data processing frameworks, including streaming technologies like Kafka.
- Excellent leadership, communication, and cross-functional collaboration skills.
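The orchestration experience this posting asks for (e.g., Airflow) typically looks something like the following minimal, hypothetical Airflow 2.x DAG. The task names, schedule, and print-statement bodies are illustrative assumptions standing in for real extract/transform/load logic.

```python
# Minimal, hypothetical Airflow 2.x DAG illustrating batch orchestration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull source data")  # placeholder for a real extract step


def transform():
    print("clean and model data")  # placeholder for a real transform step


def load():
    print("load into the warehouse")  # placeholder for a real load step


with DAG(
    dag_id="example_daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Linear dependency chain: extract -> transform -> load.
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```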

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Join us as a Data Engineering Lead

This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences. You'll be simplifying the bank by developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers' and the bank's data safe and secure. Participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank. We're recruiting for multiple roles across a range of levels, up to and including experienced managers.

What you'll do
We'll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing, and data transformation. You'll work closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering and leading a team of data engineers.

We'll also expect you to be:
- Working with Data Scientists and Analytics Labs to translate analytical model code into well-tested, production-ready code
- Helping to define common coding standards and model performance monitoring best practices
- Owning and delivering the automation of data engineering pipelines through the removal of manual stages
- Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development
- Educating and embedding new data techniques into the business through role modelling, training, and experiment design oversight
- Leading and delivering data engineering strategies to build a scalable data architecture and a feature-rich customer dataset for data scientists
- Leading and developing solutions for streaming data ingestion and transformation in line with the streaming strategy

The skills you'll need
To be successful in this role, you'll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You'll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data. We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j, and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

You'll also demonstrate:
- Knowledge of core computer science concepts such as common data structures and algorithms, profiling, and optimisation
- An understanding of machine learning, information retrieval, or recommendation systems
- Good working knowledge of CI/CD tools
- Knowledge of programming languages used in data engineering, such as Python or PySpark, SQL, Java, and Scala
- An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets, and Apache Airflow
- Knowledge of messaging, event, or streaming technology such as Apache Kafka
- Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling, and data wrangling
- Extensive experience using RDBMS, ETL pipelines, Python, Hadoop, and SQL
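For the streaming-ingestion work mentioned above, a minimal, hypothetical Spark Structured Streaming job reading from Kafka might look like the following sketch. The topic, broker address, schema, and output paths are all placeholder assumptions.

```python
# Hypothetical streaming ingestion sketch: Kafka -> parsed events -> Parquet sink.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("stream-ingest-demo").getOrCreate()

# Placeholder event schema; a real job would use the agreed data contract.
schema = StructType([
    StructField("account_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

raw = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder
            .option("subscribe", "transactions")                # placeholder topic
            .load())

# Kafka delivers bytes; parse the JSON payload into typed columns.
events = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

# Checkpointing makes the stream restartable with exactly-once file output.
query = (events.writeStream
               .format("parquet")
               .option("path", "/data/landing/transactions")          # placeholder
               .option("checkpointLocation", "/data/checkpoints/tx")  # placeholder
               .start())
query.awaitTermination()
```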

Posted 2 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Company Overview:
CashKaro is India's #1 cashback platform, trusted by over 25 million users. We drive more sales for Amazon, Flipkart, Myntra, and Ajio than any other paid channel, including Google and Meta. Backed by legendary investor Ratan Tata and a recent $16 million boost from Affle, we're on a rocket-ship journey, having already surpassed ₹300 crore in revenue and racing towards ₹500 crore. EarnKaro, our influencer referral platform, is trusted by over 500,000 influencers and sends more traffic to leading online retailers than any other platform. Whether it's micro-influencers or top-tier creators, they choose EarnKaro to monetize their networks. BankKaro, our latest venture, is rapidly becoming India's go-to FinTech aggregator. Join our dynamic team and help shape the future of online shopping, influencer marketing, and financial technology in India!

Role Overview:
As a Product Analyst, you will play a pivotal role in enabling data-driven product decisions. You will be responsible for deep-diving into product usage data, building dashboards and reports, optimizing complex queries, and driving feature-level insights that directly influence user engagement, retention, and experience.

Key Responsibilities:
- Feature Usage & Adoption Analysis: analyze event data to understand feature usage, retention trends, and product interaction patterns across web and app.
- User Journey & Funnel Analysis: build funnel views and dashboards to identify drop-offs, friction points, and opportunities for UX or product improvements.
- Product Usage & Retention Analytics: analyze user behavior, cohort trends, and retention using Redshift and BigQuery datasets; partner with Product Managers to design and track core product KPIs.
- SQL Development & Optimization: write and optimize complex SQL queries across Redshift and BigQuery; build and maintain views, stored procedures, and data models for scalable analytics.
- Dashboarding & BI Reporting: create and maintain high-quality Power BI dashboards to track DAU/WAU/MAU, feature adoption, engagement %, and drop-off trends.
- Light Data Engineering: use Python (Pandas/NumPy) for data cleaning, transformation, and quick exploratory analysis.
- Business Insight Generation: translate business questions into structured analyses and insights that inform product and business strategy.

Must-Have Skills:
- Expert-level SQL across Redshift and BigQuery, including performance tuning, window functions, and procedure creation.
- Strong skills in Power BI (or Tableau), with the ability to build actionable, intuitive dashboards.
- Working knowledge of Python (Pandas) for quick data manipulation and ad-hoc analytics.
- Deep understanding of product metrics: DAU, retention, feature usage, funnel performance.
- Strong business acumen, with the ability to connect data with user behavior and product outcomes.
- Clear communication and storytelling skills to present data insights to cross-functional teams.

Good to Have:
- Experience with mobile product analytics (Android & iOS).
- Understanding of funnel, cohort, engagement, and retention metrics.
- Familiarity with A/B testing tools and frameworks.
- Experience working with Redshift, BigQuery, or cloud-based data pipelines.
- Certifications in Google Analytics, Firebase, or other analytics platforms.

Why Join Us?
- High Ownership: drive key metrics for products used by millions.
- Collaborative Culture: work closely with founders, product, and tech teams.
- Competitive Package: best-in-class compensation, ESOPs, and perks.
- Great Environment: hybrid work, medical insurance, lunches, and learning budgets.

Ensuring a diverse and inclusive workplace where we learn from each other is core to CashKaro's values. CashKaro.com and EarnKaro.com are Equal Employment Opportunity and Affirmative Action employers. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status. CashKaro.com and EarnKaro.com will not pay any third-party agency or company that does not have a signed agreement with them; the same applies to Pouring Pounds India Pvt. Ltd. Visit our career page at https://cashkaro.com/page/careers
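To make the SQL-plus-Python expectation concrete, here is a small, hypothetical sketch of a funnel query (using a window function) pulled into pandas. The connection string, table, and event names are illustrative assumptions; Redshift speaks the Postgres wire protocol, which is why a Postgres-style SQLAlchemy URL is commonly used.

```python
# Hypothetical funnel analysis sketch; connection and table names are placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@example-host:5439/analytics")

query = """
WITH ordered_events AS (
    SELECT
        user_id,
        event_name,
        event_ts,
        ROW_NUMBER() OVER (
            PARTITION BY user_id, event_name ORDER BY event_ts
        ) AS rn
    FROM app_events
    WHERE event_ts >= DATEADD(day, -30, GETDATE())
)
SELECT event_name, COUNT(DISTINCT user_id) AS users
FROM ordered_events
WHERE rn = 1
  AND event_name IN ('open_app', 'view_offer', 'click_out', 'purchase')
GROUP BY event_name;
"""

funnel = pd.read_sql(query, engine)

# Order the funnel steps and compute step-over-step conversion.
steps = ["open_app", "view_offer", "click_out", "purchase"]
funnel = funnel.set_index("event_name").reindex(steps)
funnel["conversion_vs_prev"] = funnel["users"] / funnel["users"].shift(1)
print(funnel)
```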

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote


Job Title: Data Analyst Trainee
Location: Remote
Job Type: Internship (Full-Time)
Duration: 1–3 Months
Stipend: ₹25,000/month
Department: Data & Analytics

Job Summary:
We are seeking a motivated and analytical Data Analyst Trainee to join our remote analytics team. This internship is perfect for individuals eager to apply their data skills to real-world projects, generate insights, and support business decision-making through analysis, reporting, and visualization.

Key Responsibilities:
- Collect, clean, and analyze large datasets from various sources
- Perform exploratory data analysis (EDA) and generate actionable insights
- Build interactive dashboards and reports using Excel, Power BI, or Tableau
- Write and optimize SQL queries for data extraction and manipulation
- Collaborate with cross-functional teams to understand data needs
- Document analytical methodologies, insights, and recommendations

Qualifications:
- Bachelor's degree (or final-year student) in Data Science, Statistics, Computer Science, Mathematics, or a related field
- Proficiency in Excel and SQL
- Working knowledge of Python (Pandas, NumPy, Matplotlib) or R
- Understanding of basic statistics and analytical methods
- Strong attention to detail and problem-solving ability
- Ability to work independently and communicate effectively in a remote setting

Preferred Skills (Nice to Have):
- Experience with BI tools like Power BI, Tableau, or Google Data Studio
- Familiarity with cloud data platforms (e.g., BigQuery, AWS Redshift)
- Knowledge of data storytelling and KPI measurement
- Previous academic or personal projects in analytics

What We Offer:
- Monthly stipend of ₹25,000
- Fully remote internship
- Mentorship from experienced data analysts and domain experts
- Hands-on experience with real business data and live projects
- Certificate of Completion
- Opportunity for a full-time role based on performance
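As a flavour of the EDA work listed above, a minimal pandas/Matplotlib sketch might look like the following; the file name and column names are placeholder assumptions.

```python
# Minimal exploratory data analysis sketch; file and column names are placeholders.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Basic cleaning: drop exact duplicates and rows missing the key fields.
df = df.drop_duplicates().dropna(subset=["order_id", "amount"])

# Quick profile of the dataset.
print(df.describe(include="all"))
print(df.isna().mean().sort_values(ascending=False).head())

# Simple trend view: daily order volume.
daily = df.set_index("order_date").resample("D")["order_id"].count()
daily.plot(title="Orders per day")
plt.tight_layout()
plt.savefig("orders_per_day.png")
```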

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Job Title: Database Engineer (8 positions)
Location: Hyderabad, India
Salary: Market Rate/Negotiable

About us
Creditsafe is the most used business data provider in the world, reducing risk and maximizing opportunities for our 110,000 business customers. Our journey began in Oslo, Norway in 1997, where we had a dream of using the then-revolutionary internet to deliver instant access to company credit reports for small and medium-sized businesses. Creditsafe realized this dream and changed the market for the better for businesses of all sizes. From there, we opened 15 more offices throughout Europe, the USA, and Asia. We provide data on more than 300 million companies and provide customer notifications for billions of changes annually. We are a high-growth company offering the freedom and flexibility of a start-up type culture, thanks to continuous innovation and new product development, coupled with the stability of being a profitable and growing company. With such a large customer base and breadth of data and analytics technology, you will have real opportunities to help companies survive and thrive in challenging times by reducing business risk and choosing trustworthy customers and suppliers.

Summary:
This is your opportunity to develop your career with an exciting, fast-paced, and rapidly expanding business, one of the leading providers of business intelligence worldwide. As a Database Engineer with excellent database development skills, you will be responsible for developing and maintaining the databases and scripts that power the company's products and websites, handling large data sets and more than 20 million hits per day. You will work with your team to deliver work on time, in line with the business requirements, and to a high level of quality.

Primary Responsibilities:
- 5+ years' solid commercial experience of Oracle development in a 10g or 11g environment.
- Advanced PL/SQL knowledge required.
- ETL skills; Pentaho would be beneficial.
- Any wider DB experience would be desirable, e.g. Redshift, Aurora, DynamoDB, MariaDB, MongoDB, etc.
- Cloud/AWS experience and an interest in learning new technologies.
- Experience in tuning Oracle queries in large databases.
- Good experience in loading and extracting large data sets.
- Experience of working with an Oracle database under a bespoke web development environment.
- Analytical and critical thinking skills; agile problem-solving abilities.
- Detail-oriented and self-motivated; able to work independently with little or no supervision; committed to the highest standards of quality for the entire release process.
- Excellent written and verbal communication skills, and attention to detail.
- Ability to work in a fast-paced, changing environment and to thrive in a deadline-driven, high-pressure project environment.
- 3+ years of software development experience.

Qualifications and Experience
- Degree in Computer Science or similar.
- Experience with loading data through SSIS.
- Experience working on financial and business intelligence projects or in big data environments.
- A desire to learn new skills and branch into development using a wide range of alternative technologies.

Skills, Knowledge and Abilities
- Write code for new development requirements, as well as provide bug fixing, support, and maintenance of existing code.
- Test your code to ensure it functions as per the business requirements, considering the impact of your code on other areas of the solution.
- Provide expert advice on performance tuning within Oracle.
- Perform large-scale imports and extracts of data.
- Assist the business in the collection and documentation of user requirements where needed; provide estimates and work plans.
- Create and maintain technical documentation.
- Follow all company procedures, standards, and processes.
- Contribute to architectural design and development, making technically sound development recommendations.
- Provide support to other staff in the department and act as a mentor to less experienced staff, including through code reviews.
- Work as a team player in an agile environment.
- Build release scripts and plans to facilitate the deployment of your code to testing and production environments.
- Take ownership of any issues that occur within your area to ensure an appropriate solution is found.
- Assess opportunities for application and process improvement and share them with team members and/or affected parties.

Company Benefits:
- Competitive salary
- Work from home
- Pension
- Medical insurance
- Cab facility for women
- Dedicated gaming area
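For the large-data-load responsibility above, a minimal, hypothetical Python sketch using the python-oracledb driver might look like this. The credentials, DSN, table, and rows are placeholders; the point is the batched executemany pattern, which is far faster than row-by-row inserts for large data sets.

```python
# Hypothetical bulk-load sketch with python-oracledb; all names are placeholders.
import oracledb

conn = oracledb.connect(user="app_user", password="secret",
                        dsn="db-host.example.com/ORCLPDB1")

rows = [
    (1, "ACME LTD", 72),
    (2, "GLOBEX PLC", 55),
    # ...typically tens of thousands of rows per batch in practice
]

with conn.cursor() as cur:
    # executemany sends the whole batch in one round trip.
    cur.executemany(
        "INSERT INTO company_scores (company_id, name, score) VALUES (:1, :2, :3)",
        rows,
    )
conn.commit()
conn.close()
```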

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote


Do you want to make a global impact on patient health? Join Pfizer Digital's Artificial Intelligence, Data, and Advanced Analytics organization (AIDA) to leverage cutting-edge technology for critical business decisions and enhance customer experiences for colleagues, patients, and physicians. Our team is at the forefront of Pfizer's transformation into a digitally driven organization, using data science and AI to change patients' lives. The Data Science Industrialization team leads engineering efforts to advance AI and data science applications from POCs and prototypes to full production.

As a Senior Manager, AI and Analytics Data Engineer, you will be part of a global team responsible for designing, developing, and implementing robust data layers that support data scientists and key advanced analytics/AI/ML business solutions. You will partner with cross-functional data scientists and Digital leaders to ensure efficient and reliable data flow across the organization, and you will lead development of data solutions that support our data science community and drive data-centric decision-making. Join our diverse team in making an impact on patient health through the application of cutting-edge technology and collaboration.

Role Responsibilities
- Lead development of data engineering processes to support data scientists and analytics/AI solutions, ensuring data quality, reliability, and efficiency
- As a data engineering tech lead, enforce best practices, standards, and documentation to ensure consistency and scalability, and facilitate related trainings
- Provide strategic and technical input on the AI ecosystem, including platform evolution, vendor scans, and new capability development
- Act as a subject matter expert for data engineering on cross-functional teams in bespoke organizational initiatives, providing thought leadership and execution support for data engineering needs
- Train and guide junior developers on concepts such as data modeling, database architecture, data pipeline management, DataOps and automation, tools, and best practices
- Stay updated with the latest advancements in data engineering technologies and tools, and evaluate their applicability for improving our data engineering capabilities
- Direct data engineering research to advance design and development capabilities
- Collaborate with stakeholders to understand data requirements and address them with data solutions
- Partner with the AIDA Data and Platforms teams to enforce best practices for data engineering and data solutions
- Demonstrate a proactive approach to identifying and resolving potential system issues
- Communicate the value of reusable data components to end-user functions (e.g., Commercial, Research and Development, and Global Supply) and promote innovative, scalable data engineering approaches to accelerate data science and AI work

Basic Qualifications
- Bachelor's degree in computer science, information technology, software engineering, or a related field (Data Science, Computer Engineering, Computer Science, Information Systems, Engineering, or a related discipline)
- 7+ years of hands-on experience working with SQL, Python, and object-oriented languages (e.g. Java, C++) to build data pipelines and processes
- Proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views
- Recognized by peers as an expert in data engineering, with deep expertise in data modeling, data governance, and data pipeline management principles
- In-depth knowledge of modern data engineering frameworks and tools such as Snowflake, Redshift, Spark, Airflow, Hadoop, Kafka, and related technologies
- Experience working in a cloud-based analytics ecosystem (AWS, Snowflake, etc.)
- Familiarity with machine learning and AI technologies and their integration with data engineering pipelines
- Demonstrated experience interfacing with internal and external teams to develop innovative data solutions
- Strong understanding of the Software Development Life Cycle (SDLC) and the data science development lifecycle (CRISP-DM)
- Highly self-motivated to deliver both independently and with strong team collaboration
- Ability to creatively take on new challenges and work outside your comfort zone
- Strong English communication skills (written and verbal)

Preferred Qualifications
- Advanced degree in Data Science, Computer Engineering, Computer Science, Information Systems, or a related discipline (preferred, but not required)
- Experience in software/product engineering
- Experience with data-science-enabling technology, such as Dataiku Data Science Studio, AWS SageMaker, or other data science platforms
- Familiarity with containerization technologies like Docker and orchestration platforms like Kubernetes
- Experience working effectively in a distributed remote team environment
- Hands-on experience working in Agile teams, processes, and practices
- Expertise in cloud platforms such as AWS, Azure, or GCP
- Proficiency in using version control systems like Git
- Pharma & Life Science commercial functional knowledge and commercial data literacy
- Ability to work non-traditional hours, interacting with global teams spanning different regions (e.g. North America, Europe, Asia)

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Information & Business Tech
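To illustrate the SQL-centric data-layer work described above, here is a small, hypothetical sketch using the Snowflake Python connector to create and query a reusable view. The account details, warehouse, database, and table names are placeholder assumptions, not Pfizer's actual environment.

```python
# Hypothetical data-layer sketch with the Snowflake Python connector.
import snowflake.connector

conn = snowflake.connector.connect(
    user="svc_analytics",      # placeholder credentials
    password="secret",
    account="example-account",
    warehouse="ANALYTICS_WH",
    database="AIDA_DB",
    schema="CURATED",
)

cur = conn.cursor()

# A reusable, documented view is one way to expose a clean "data layer"
# to data scientists instead of raw source tables.
cur.execute("""
    CREATE OR REPLACE VIEW engagement_daily AS
    SELECT visit_date,
           channel,
           COUNT(DISTINCT hcp_id) AS engaged_hcps
    FROM raw_engagements
    GROUP BY visit_date, channel
""")

cur.execute("SELECT * FROM engagement_daily ORDER BY visit_date DESC LIMIT 10")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```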

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site


About Calfus
Calfus is a Silicon Valley-headquartered software engineering and platforms company. The name Calfus finds its roots and ethos in the Olympic motto "Citius, Altius, Fortius – Communiter". Calfus seeks to inspire our team to rise faster, higher, stronger, and work together to build software at speed and scale. Our core focus lies in creating engineered digital solutions that bring about a tangible and positive impact on business outcomes. We stand for #Equity and #Diversity in our ecosystem and society at large. Connect with us at #Calfus and be a part of our extraordinary journey!

Position Overview:
As a Data Engineer – BI Analytics & DWH, you will play a pivotal role in designing and implementing comprehensive business intelligence solutions that empower our organization to make data-driven decisions. You will leverage your expertise in Power BI, Tableau, and ETL processes to create scalable architectures and interactive visualizations. This position requires a strategic thinker with strong technical skills and the ability to collaborate effectively with stakeholders at all levels.

Key Responsibilities:
- BI Architecture & DWH Solution Design: develop and design scalable BI analytics and DWH solutions that meet business requirements, leveraging tools such as Power BI and Tableau.
- Data Integration: oversee ETL processes using SSIS to ensure efficient data extraction, transformation, and loading into data warehouses.
- Data Modelling: create and maintain data models that support analytical reporting and data visualization initiatives.
- Database Management: utilize SQL to write complex queries and stored procedures, and manage data transformations using joins and cursors.
- Visualization Development: lead the design of interactive dashboards and reports in Power BI and Tableau, adhering to best practices in data visualization.
- Collaboration: work closely with stakeholders to gather requirements and translate them into technical specifications and architecture designs.
- Performance Optimization: analyse and optimize BI solutions for performance, scalability, and reliability.
- Data Governance: implement best practices for data quality and governance to ensure accurate reporting and compliance.
- Team Leadership: mentor and guide junior BI developers and analysts, fostering a culture of continuous learning and improvement.
- Azure Databricks: leverage Azure Databricks for data processing and analytics, ensuring seamless integration with existing BI solutions.

Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field.
- 6-15+ years of experience in BI architecture and development, with a strong focus on Power BI and Tableau.
- Proven experience with ETL processes and tools, especially SSIS.
- Strong proficiency in SQL Server, including advanced query writing and database management.
- Exploratory data analysis with Python.
- Familiarity with the CRISP-DM model.
- Ability to work with different data models.
- Familiarity with databases like Snowflake, Postgres, Redshift, and MongoDB.
- Experience with visualization tools such as Power BI, QuickSight, and Plotly and/or Dash.
- Strong programming foundation in Python, with the versatility to handle: data manipulation and analysis using Pandas, NumPy, and PySpark; data serialization and formats such as JSON, CSV, Parquet, and Pickle; database interaction to query cloud-based data warehouses; data pipeline and ETL tools like Airflow for orchestrating workflows and managing ETL pipelines; scripting and automation; and cloud services and tools such as S3 and AWS Lambda to manage cloud infrastructure (Azure SDK is a plus).
- Code quality and management using version control, and collaboration in data engineering projects.
- Ability to interact with REST APIs and perform web scraping tasks is a plus.

Calfus Inc. is an Equal Opportunity Employer. That means we do not discriminate against any applicant for employment, or any employee, because of age, colour, sex, disability, national origin, race, religion, or veteran status. All employment is decided based on qualifications, merit, and business need.
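As a small illustration of the Python data-handling skills listed above (serialization plus cloud storage), here is a hypothetical sketch converting CSV to Parquet and uploading it to S3. The file names and bucket are placeholder assumptions; the Parquet write relies on pyarrow (or fastparquet) being installed.

```python
# Hypothetical CSV -> Parquet -> S3 sketch; file and bucket names are placeholders.
import boto3
import pandas as pd

# Read raw CSV and write columnar Parquet (smaller, faster for analytics).
df = pd.read_csv("sales_export.csv", parse_dates=["order_date"])
df.to_parquet("sales_export.parquet", index=False)

# Push the converted file to S3 for downstream warehouse loads.
s3 = boto3.client("s3")
s3.upload_file("sales_export.parquet", "example-analytics-bucket",
               "landing/sales/sales_export.parquet")
print("uploaded", len(df), "rows")
```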

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Job Summary:
Analyzes the data needs of the enterprise to build, optimize, and maintain conceptual ML/analytics models. The data scientist provides expertise in modeling and statistical approaches ranging from regression methods, decision trees, deep learning, NLP techniques, and uplift modeling to statistical modeling such as multivariate techniques.

Roles & Responsibilities:
- Design the ML and Ops stack, considering the various trade-offs.
- Statistical analysis and fundamentals.
- MLOps framework design and implementation.
- Model evaluation best practices; train and retrain systems when necessary.
- Extend existing ML libraries and frameworks; keep abreast of developments in the field.
- Act as an SME and tech lead for any data engineering question, manage data scientists, and influence DS development across the company.
- Promote services, contribute to the identification of innovative initiatives within the Group, and share information on new technologies in dedicated internal communities.
- Ensure compliance with policies related to data management and data protection.

Preferred Experience:
- Strong experience (3+ years) building statistical models and applying machine learning techniques.
- Experience (3+ years) with big data technologies such as Hadoop, Spark, and Airflow/Databricks.
- Proven experience (3+ years) solving complex problems with multi-layered data sets, as well as optimizing existing machine learning libraries and frameworks.
- Proven experience (3+ years) taking innovations from exploration to production; this may include containerization (e.g. Docker/Kubernetes), big data (Hadoop, Spark), and MLOps platforms.
- Deep understanding of end-to-end software development in a team, and a track record of shipping software on time.
- Ability to ensure high-quality data and understand how data generated from experimental design can produce actionable, trustworthy conclusions.
- Proficiency with SQL and NoSQL databases, data warehousing concepts, and administration of cloud-based analytics databases (e.g. Snowflake, Databricks, or Redshift).
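For the statistical-modeling experience described above, a minimal, hypothetical scikit-learn sketch (logistic regression with a held-out test set) could look like this; the synthetic data is a stand-in for real customer features.

```python
# Minimal, hypothetical classification sketch with scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real features and a binary outcome.
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# AUC on held-out data is a common first evaluation metric.
probs = model.predict_proba(X_test)[:, 1]
print("test AUC:", round(roc_auc_score(y_test, probs), 3))
```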

Posted 2 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Join us as a Data Engineering Lead

This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences. You'll be simplifying the bank by developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers' and the bank's data safe and secure. Participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank. We're recruiting for multiple roles across a range of levels, up to and including experienced managers.

What you'll do
We'll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing, and data transformation. You'll work closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering and leading a team of data engineers.

We'll also expect you to be:
- Working with Data Scientists and Analytics Labs to translate analytical model code into well-tested, production-ready code
- Helping to define common coding standards and model performance monitoring best practices
- Owning and delivering the automation of data engineering pipelines through the removal of manual stages
- Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development
- Educating and embedding new data techniques into the business through role modelling, training, and experiment design oversight
- Leading and delivering data engineering strategies to build a scalable data architecture and a feature-rich customer dataset for data scientists
- Leading and developing solutions for streaming data ingestion and transformation in line with the streaming strategy

The skills you'll need
To be successful in this role, you'll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You'll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data. We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j, and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

You'll also demonstrate:
- Knowledge of core computer science concepts such as common data structures and algorithms, profiling, and optimisation
- An understanding of machine learning, information retrieval, or recommendation systems
- Good working knowledge of CI/CD tools
- Knowledge of programming languages used in data engineering, such as Python or PySpark, SQL, Java, and Scala
- An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets, and Apache Airflow
- Knowledge of messaging, event, or streaming technology such as Apache Kafka
- Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling, and data wrangling
- Extensive experience using RDBMS, ETL pipelines, Python, Hadoop, and SQL
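Since this listing also calls out automated data quality testing, here is a separate minimal, hypothetical pytest-style sketch of pipeline output checks (distinct from the streaming example under the Chennai listing above). The file path, columns, and thresholds are placeholder assumptions.

```python
# Hypothetical automated data-quality checks (pytest style); names are placeholders.
import pandas as pd
import pytest


@pytest.fixture
def output_df():
    # In a real pipeline this would load the table the job just wrote.
    return pd.read_parquet("/data/curated/accounts.parquet")


def test_no_duplicate_keys(output_df):
    assert output_df["account_id"].is_unique


def test_required_columns_present(output_df):
    expected = {"account_id", "opened_date", "balance"}
    assert expected.issubset(output_df.columns)


def test_balances_are_finite(output_df):
    assert output_df["balance"].notna().all()
    assert (output_df["balance"].abs() < 1e12).all()  # sanity bound
```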

Posted 2 weeks ago

Apply

2.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


We're Hiring: Senior Associate – Field Force Operations at Chryselys
Location: Hyderabad
Job Type: Full-time

About Us:
Chryselys is a pharma analytics and business consulting company that delivers data-driven insights, leveraging AI-powered, cloud-native platforms to achieve high-impact transformations. We specialize in digital technologies and advanced data science techniques that provide strategic and operational insights.

Who we are:
- People: our team of industry veterans, advisors, and senior strategists have diverse backgrounds and have worked at top-tier companies.
- Quality: our goal is to deliver the value of a big-five consulting company without the big-five cost.
- Technology: our solutions are business-centric and built on cloud-native technologies.

Role Overview:
As a Field Force Operations Senior Associate at Chryselys, you will leverage your expertise in commercial model design, sales force sizing, territory alignment, and deployment to optimize field force operations and processes. You will work closely with cross-functional teams, including client stakeholders and analytics experts, to define execution KPIs, maximize sales impact, and deliver actionable insights through advanced reporting and dashboards. Your role will also involve segmentation and targeting, incentive compensation processes, and planning for call activities and non-personal promotions. With hands-on experience in tools like Qlik, Power BI, and Tableau, along with technologies such as SQL, you will ensure impactful storytelling and effective stakeholder management while supporting clients across the U.S. and Europe.

Key Responsibilities:
- Capabilities and experience in field force operations and processes related to commercial model design and structure, sales force sizing and optimization, and territory alignment and deployment
- Good understanding of commercial operations and analytics as a domain
- Expertise with SF/FF datasets for creating dashboards and reports for multiple user personas
- Ability to define FF execution and measurement KPIs to maximize sales impact
- Understanding and expertise in call activity planning and non-personal promotions
- Good knowledge of segmentation & targeting and incentive compensation processes
- Hands-on experience with tools like Qlik/Power BI/Tableau and technologies like Python/SQL
- Stakeholder management abilities and storytelling skills
- Experience working with pharma clients across the US and Europe

What You Bring:
- Education: bachelor's or master's degree in data science, statistics, computer science, engineering, or a related field, with a strong academic record.
- Experience: 2-5 years of experience in field force operations, particularly in the pharmaceutical or healthcare industry, working with key datasets.
- Skills:
  - Strong experience with SQL and cloud-based data processing environments such as AWS (Redshift, Athena, S3)
  - Demonstrated ability to build data visualizations and communicate insights through tools like Power BI, Tableau, Qlik, QuickSight, Javelin, or similar
  - Strong analytical skills, with experience in analogue analysis
  - Ability to manage multiple projects, prioritize tasks, and meet deadlines in a fast-paced environment
  - Excellent communication and presentation skills, with the ability to explain complex data science concepts to non-technical stakeholders
  - A strong problem-solving mindset, with the ability to adapt and innovate in a dynamic consulting environment
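To ground the AWS (Athena/S3) skills mentioned above, here is a small, hypothetical boto3 sketch that submits an Athena query and polls for completion. The region, database, table, and result bucket are placeholder assumptions.

```python
# Hypothetical Athena query sketch with boto3; names and buckets are placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString="""
        SELECT territory_id, COUNT(*) AS calls
        FROM field_force.call_activity
        WHERE call_date >= DATE '2024-01-01'
        GROUP BY territory_id
        ORDER BY calls DESC
    """,
    QueryExecutionContext={"Database": "field_force"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
qid = resp["QueryExecutionId"]

# Poll until the query finishes; production code would add a timeout/backoff.
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print("query", qid, "finished with state", state)
```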

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

Remote


🚨 We're Hiring | Senior Data Analyst 🚨
📍 Location: Trivandrum / Kochi / Remote
🕒 Notice Period: Immediate Joiners Only
💰 Budget: Up to ₹19 LPA
📊 Experience: 5+ Years
🌐 Preference: Keralites only

Are you a data-driven problem solver with a passion for analytics and business intelligence? We're on the lookout for a Senior Data Analyst to join our growing Data & Analytics team!

✅ Must-Have Skills
🔹 SQL, Power BI, and Python
🔹 Experience with Amazon Athena and relational databases like SQL Server, Redshift, or Snowflake
🔹 Knowledge of data modeling, ETL, and data architecture
🔹 Strong data storytelling and visualization capabilities
🔹 Excellent communication and stakeholder management skills

🧠 Key Responsibilities
- Analyze large and complex datasets to drive actionable insights
- Build compelling dashboards and reports in Power BI
- Collaborate with business and technical teams
- Maintain data quality, consistency, and accuracy
- Mentor and guide a team of data engineers

🔎 We are giving preference to candidates from Kerala who are ready to join immediately and thrive in a fast-paced, collaborative environment.

#Hiring #SeniorDataAnalyst #PowerBI #SQL #Python #KeralitesPreferred #KeralaJobs #TrivandrumJobs #KochiJobs #ImmediateJoiners #RemoteJobs #DataAnalytics #BusinessIntelligence

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


This role is for one of Weekday's clients.
Min Experience: 8 years
Location: Bangalore, Mumbai
Job Type: Full-time

We are seeking a highly experienced and motivated Lead Data Engineer to join our data engineering team. This role is perfect for someone with 8-10 years of hands-on experience in designing and building scalable data infrastructure, data pipelines, and high-performance data platforms. You will lead a team of engineers, set data engineering standards, and work cross-functionally with data scientists, analysts, and software engineers to enable a data-driven culture within the organization.

Requirements

Key Responsibilities:
- Technical Leadership: lead the design and development of robust, scalable, and high-performance data architectures, including batch and real-time data pipelines using modern technologies.
- Data Pipeline Development: architect, implement, and maintain complex ETL/ELT workflows using tools like Apache Airflow, Spark, Kafka, or similar.
- Data Warehouse Management: design and maintain cloud-based data warehouses and data lakes (e.g., Snowflake, Redshift, BigQuery, Delta Lake), ensuring optimized storage and query performance.
- Data Quality and Governance: implement data validation, monitoring, and governance processes to ensure data accuracy, completeness, and security across all platforms.
- Collaboration: work closely with stakeholders, including business analysts, data scientists, and application developers, to understand data needs and deliver effective solutions.
- Mentorship and Team Management: guide and mentor junior and mid-level data engineers; foster best practices in code, architecture, and agile delivery.
- Automation and CI/CD: develop and manage data pipeline deployment processes using DevOps and CI/CD principles.

Required Skills & Qualifications:
- 8-10 years of proven experience in data engineering or a related field.
- Strong programming skills in Python, Scala, or Java.
- Expertise in building scalable and fault-tolerant ETL/ELT processes using frameworks such as Apache Spark, Kafka, Airflow, or similar.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and tools like S3, Redshift, Snowflake, BigQuery, Glue, EMR, or Databricks.
- In-depth understanding of relational and NoSQL databases (PostgreSQL, MongoDB, Cassandra, etc.).
- Strong SQL skills, with the ability to write complex and optimized queries.
- Familiarity with data modeling, data warehousing concepts, and OLAP/OLTP systems.
- Experience deploying data services using containerization (Docker, Kubernetes) and CI/CD tools like Jenkins, GitHub Actions, or similar.
- Excellent communication skills with a collaborative and proactive attitude.

Preferred Qualifications:
- Experience working in fast-paced, agile environments or startups.
- Exposure to machine learning pipelines, MLOps, or real-time analytics.
- Familiarity with data governance frameworks and data privacy regulations (GDPR, CCPA).
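For the real-time pipeline skills listed above, a minimal, hypothetical Kafka producer sketch using the kafka-python client might look like this; the broker address, topic, and event payload are placeholder assumptions.

```python
# Hypothetical event producer sketch with kafka-python; broker/topic are placeholders.
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker1:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Emit a few sample events; a real service would publish as events occur.
for i in range(5):
    event = {"order_id": i, "status": "created", "ts": time.time()}
    producer.send("orders", value=event)

producer.flush()  # ensure buffered messages reach the broker before exit
producer.close()
```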

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


We are seeking a highly experienced Senior Analyst to help guide us in our quest with our global, regional, and functional commercial policy implementation, reporting & governance projects. The successful candidate will contribute by building metrics and analyzing processes, workflows, and systems with the objective of identifying opportunities for either improvement or automation. Our ideal candidate is comfortable working with all levels of management to gain an in-depth understanding of our strategy and improve customer experience. This role requires close collaboration with product, segment partners, product marketing, customer to cash, sales, marketing, technology, and finance areas. This position resides in the Commercial Excellence organization and reports to the Manager of Commercial Policy Reporting & Governance.

About The Role
In this role as a Senior Analyst, Commercial Policy Reporting & Governance, you will:
- Improve, execute, and effectively communicate significant analyses that identify meaningful trends and opportunities across the business.
- Participate in regular meetings with stakeholders and management, assessing and addressing issues to identify and implement improvements toward efficient operations.
- Provide strong and timely business analytic support to business partners and various organizational stakeholders.
- Develop actionable road maps for improving workflows and processes.
- Work effectively with partners across the business to develop processes for capturing project activity, creating metrics-driven dashboards for specific use cases and behaviors, and evaluating the data for process improvement recommendations.
- Collaborate with project leads, managers, and business partners to determine schedules and project timelines, ensuring alignment across all areas of the business.
- Drive commercial strategy and policy alignment with fast-changing attributes, while managing reporting, tracking, and governance best practices.
- Identify, assess, manage, and communicate risks while laying out mitigation plans and course corrections where appropriate.
- Provide insightful diagnostics and actionable insights to the leadership team in a proactive manner by spotting trends, questioning data, and asking questions to understand underlying drivers.
- Proactively identify trends for future governance and reporting needs, presenting ideas to CE leadership for new areas of opportunity to drive value.
- Prepare, analyze, and summarize various weekly, monthly, and periodic operational results for use by key stakeholders, creating reports, specifications, instructions, and flowcharts.
- Conduct the full lifecycle of analytics projects, from project requirements documentation to design and execution, including pulling, manipulating, and exporting data.

About You
You're a fit for the role of Senior Analyst, Commercial Policy Reporting & Governance if your background includes:
- Bachelor's degree, preferably in Computer Science, Mathematics, Business Management, or Economics.
- 4 to 6+ years of professional experience in a similar role.
- Ability to work from 2 pm to 11 pm IST, in hybrid mode (work from office twice a week).
- Proven project management skills in planning and overseeing projects from initial ideation through to completion.
- Proven ability to take complex and disparate data sets and create streamlined and efficient data lakes with a connected and routinized cadence.
- Advanced-level skills in the following systems: Power BI, Snowflake, Redshift, Salesforce.com, EDW, Excel, MS PowerPoint, and Alteryx or similar middleware data transformation tools.
- Familiarity with contract lifecycle management tools like Conga CLM, HighQ CLM, etc.
- Ability to quickly draw insights into trends in data and make recommendations to drive productivity and efficiency.
- Exceptional verbal, written, and visual communication skills.
- Experience managing multiple projects simultaneously within a matrix organization, adhering to deadlines in a fast-paced environment.
- Ability to deploy influencing techniques to drive cross-functional alignment and change across a broad audience.
- Ability to be flexible with working hours to support the ever-changing demands of the business.

What's in it For You?
- Hybrid Work Model: we've adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles, while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: by fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensure you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Industry-Competitive Benefits: we offer comprehensive benefit plans including flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Culture: a globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
- Social Impact: make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
- Making a Real-World Impact: we are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency.

Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
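As a concrete flavour of the recurring operational reporting described above, a small, hypothetical pandas sketch that rolls daily results up to weekly summaries might look like this; the input file, columns, and spike threshold are placeholder assumptions.

```python
# Hypothetical weekly operational rollup; file and column names are placeholders.
import pandas as pd

daily = pd.read_csv("policy_exceptions_daily.csv", parse_dates=["report_date"])

# Roll daily counts up to weeks for the recurring governance report.
weekly = (daily.set_index("report_date")
               .groupby("region")["exception_count"]
               .resample("W")
               .sum()
               .reset_index())

# Flag week-over-week spikes worth a narrative comment in the report.
weekly["wow_change"] = weekly.groupby("region")["exception_count"].pct_change()
spikes = weekly[weekly["wow_change"] > 0.25]

weekly.to_excel("weekly_policy_summary.xlsx", index=False)
print(f"{len(spikes)} region-weeks with >25% week-over-week increase")
```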

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site

Linkedin logo

Skill: Data Engineer
Role: T3, T2

Key Responsibilities
The Data Engineer must have 5+ years of experience in the skills below.

Must Have: big data concepts, Python (core Python, able to write code), SQL, shell scripting, AWS S3
Good to Have: event-driven/AWS SQS, microservices, API development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora
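For the event-driven (SQS) pattern this listing mentions, a minimal, hypothetical boto3 consumer loop might look like the following; the region, queue URL, and message handling are placeholder assumptions.

```python
# Hypothetical event-driven consumer sketch with boto3/SQS; queue URL is a placeholder.
import json

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-events"

while True:
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling reduces empty receives
    )
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])
        print("processing event:", body.get("eventName"))  # placeholder handling
        # Delete only after successful processing (at-least-once semantics).
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```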

Posted 2 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

Hyderabad, Telangana

On-site


Location: Hyderabad, Telangana, India | Category: Accounting / Finance Careers | Job Id: JREQ188357 | Job Type: Full time, Hybrid

We are seeking a highly experienced Senior Analyst to help guide our global, regional, and functional commercial policy implementation, reporting, and governance projects. The successful candidate will contribute by building metrics and analyzing processes, workflows, and systems with the objective of identifying opportunities for improvement or automation. Our ideal candidate is comfortable working with all levels of management to gain an in-depth understanding of our strategy and to improve customer experience. This role requires close collaboration with product, segment partners, product marketing, customer-to-cash, sales, marketing, technology, and finance areas. The position resides in the Commercial Excellence organization and reports to the Manager of Commercial Policy Reporting & Governance.

About the Role

In this role as a Senior Analyst, Commercial Policy Reporting & Governance, you will:

Improve, execute, and effectively communicate significant analyses that identify meaningful trends and opportunities across the business.
Participate in regular meetings with stakeholders and management, assessing and addressing issues to identify and implement improvements toward efficient operations.
Provide strong and timely business analytic support to business partners and various organizational stakeholders.
Develop actionable road maps for improving workflows and processes.
Work effectively with partners across the business to develop processes for capturing project activity, create metrics-driven dashboards for specific use cases and behaviors, and evaluate the data for process-improvement recommendations.
Collaborate with project leads, managers, and business partners to determine schedules and project timelines, ensuring alignment across all areas of the business.
Drive commercial strategy and policy alignment amid fast-changing attributes while managing reporting, tracking, and governance best practices.
Identify, assess, manage, and communicate risks, laying out mitigation plans and course corrections where appropriate.
Provide insightful diagnostics and actionable insights to the leadership team proactively by spotting trends, questioning data, and asking questions to understand underlying drivers.
Proactively identify trends for future governance and reporting needs, presenting ideas to CE leadership on new areas of opportunity to drive value.
Prepare, analyze, and summarize weekly, monthly, and periodic operational results for key stakeholders, creating reports, specifications, instructions, and flowcharts.
Conduct the full lifecycle of analytics projects, from requirements documentation to design and execution, including pulling, manipulating, and exporting data.

About You

You're a fit for the role of Senior Analyst, Commercial Policy Reporting & Governance, if your background includes:

Bachelor's degree, preferably in Computer Science, Mathematics, Business Management, or Economics.
4 to 6+ years of professional experience in a similar role.
Availability to work from 2 pm to 11 pm IST, in hybrid mode (work from office twice a week).
Proven project management skills related to planning and overseeing projects from initial ideation through to completion.
Proven ability to take complex and disparate data sets and create streamlined, efficient data lakes with a connected, routinized cadence.
Advanced-level skills in the following systems: Power BI, Snowflake, Redshift, Salesforce.com, EDW, Excel, MS PowerPoint, and Alteryx or similar middleware data-transformation tools.
Familiarity with contract lifecycle management tools such as Conga CLM and HighQ CLM.
Ability to quickly draw insights into trends in data and make recommendations to drive productivity and efficiency.
Exceptional verbal, written, and visual communication skills.
Experience managing multiple projects simultaneously within a matrix organization, adhering to deadlines in a fast-paced environment.
Ability to deploy influencing techniques to drive cross-functional alignment and change across a broad audience.
Flexibility with working hours to support the ever-changing demands of the business.

What's in it For You?

Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.

Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.

Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.

Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.

Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.

Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.

Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us

Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news.

We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law.

Posted 2 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

Delhi, Delhi

On-site

Indeed logo

Full time | Work From Office | This position is currently open
Department / Category: DEVELOPER | Listed on Jun 03, 2025 | Work Location: NEW DELHI

Job Description: Databricks Developer

7+ years of relevant experience, including more than 3 years in data integration, pipeline development, and data warehousing, with a strong focus on AWS Databricks.

Job Responsibilities:

Administer, manage, and optimize the Databricks environment to ensure efficient data processing and pipeline development
Perform advanced troubleshooting, query optimization, and performance tuning in a Databricks environment (a small PySpark sketch follows this listing)
Collaborate with development teams to guide, optimize, and refine data solutions within the Databricks ecosystem
Ensure high performance in data handling and processing, including the optimization of Databricks jobs and clusters
Engage with and support business teams to deliver data and analytics projects effectively
Manage source control systems and utilize Jenkins for continuous integration
Actively participate in the entire software development lifecycle, focusing on data integrity and efficiency within Databricks

Technical Skills:

Proficiency in Databricks platform management and optimization
Strong experience in AWS Cloud, particularly in data engineering and administration, with expertise in Apache Spark, S3, Athena, Glue, Kafka, Lambda, Redshift, and RDS
Proven experience in data engineering performance tuning and analytical understanding in business and program contexts
Solid experience in Python development, specifically in PySpark within the AWS Cloud environment, including experience with Terraform
Knowledge of databases (Oracle, SQL Server, PostgreSQL, Redshift, MySQL, or similar) and advanced database querying
Experience with source control systems (Git, Bitbucket) and Jenkins for build and continuous integration
Understanding of continuous deployment (CI/CD) processes
Experience with Airflow and additional Apache Spark knowledge is advantageous
Exposure to ETL tools, including Informatica

Required Skills for the Databricks Developer Job: AWS Databricks, databases, CI/CD, control systems

Our Hiring Process: Screening (HR Round), Technical Round 1, Technical Round 2, Final HR Round
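
As referenced above, here is a rough sketch of the kind of PySpark tuning work this listing describes (not the employer's actual pipeline): the job prunes columns early, filters before shuffling, and controls output file counts, all common Databricks optimizations. Paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession already exists as `spark`;
# getOrCreate() keeps the sketch runnable elsewhere too.
spark = SparkSession.builder.appName("orders-daily-compact").getOrCreate()

# Hypothetical S3 locations for illustration only.
raw_path = "s3://example-bucket/raw/orders/"
curated_path = "s3://example-bucket/curated/orders_daily/"

orders = spark.read.parquet(raw_path)

# Typical tuning steps: select only needed columns, filter before any
# shuffle, and derive the partition column once.
daily = (
    orders
    .select("order_id", "customer_id", "order_ts", "amount")
    .filter(F.col("order_ts") >= F.date_sub(F.current_date(), 1))
    .withColumn("order_date", F.to_date("order_ts"))
)

(daily
 .repartition("order_date")          # fewer, larger files per partition
 .write.mode("overwrite")
 .partitionBy("order_date")
 .parquet(curated_path))
```

Writing fewer, larger Parquet files per partition avoids the small-files problem that often degrades downstream query performance.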

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Gurugram, Haryana

On-site

Indeed logo

Location: Gurugram, Haryana, India | Job Id: GGN00002056 | Category: Information Technology | Job Type: Full-Time | Posted Date: 06/04/2025

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description

United's Digital Technology team is comprised of many talented individuals all working together with cutting-edge technology to build the best airline in the history of aviation. Our team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

Job overview and responsibilities

United Airlines is seeking talented people to join the Data Engineering Operations team. Key responsibilities include configuring and managing infrastructure, implementing continuous integration/continuous deployment (CI/CD) pipelines, and optimizing system performance. You will work to improve efficiency, enhance scalability, and ensure the reliability of systems through monitoring and proactive measures. Collaboration, scripting, and proficiency in tools for version control and automation are critical skills for success in this role. We are seeking creative, driven, detail-oriented individuals who enjoy tackling tough problems with data and insights. Individuals who have a natural curiosity and a desire to solve problems are encouraged to apply.

Translate product strategy and requirements into suitable, maintainable, and scalable solution designs according to existing architecture guardrails
Collaborate with development and operations teams to understand project requirements and design effective DevOps solutions
Implement and maintain CI/CD pipelines for automated software builds, testing, and deployment
Manage and optimize cloud-based infrastructure to ensure scalability, security, and performance
Implement and maintain monitoring and alerting systems for proactive issue resolution
Work closely with cross-functional teams to troubleshoot and resolve infrastructure-related issues
Automate repetitive tasks and processes to improve efficiency and reduce manual intervention

Key Responsibilities:

Design, deploy, and maintain cloud infrastructure on AWS
Set up and manage Kubernetes clusters for container orchestration
Design, implement, and manage scalable, secure, and highly available AWS infrastructure using Terraform
Develop and manage Infrastructure as Code (IaC) modules and reusable components
Collaborate with developers, architects, and other DevOps engineers to design cloud-native applications and deployment strategies
Manage and optimize CI/CD pipelines using tools like GitHub Actions, GitLab CI, Jenkins, or similar
Manage and optimize the Databricks platform
Monitor infrastructure health and performance using AWS CloudWatch, Prometheus, Grafana, etc. (a boto3 sketch of one such alarm follows this listing)
Ensure cloud security best practices, including IAM policies, VPC configurations, data encryption, and secrets management
Create and manage networking infrastructure such as VPCs, subnets, security groups, route tables, NAT gateways, etc.
Handle deployment and configuration of services such as EC2, RDS, Glue, S3, ECS/EKS, Lambda, API Gateway, Kinesis, MWAA, DynamoDB, CloudFront, Route 53, SQS, SNS, Athena, and ELB/ALB
Maintain logging, alerting, and monitoring systems to ensure reliability and performance

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd, a wholly owned subsidiary of United Airlines Inc.

Qualifications

What's needed to succeed (Minimum Qualifications):

Bachelor's degree in Computer Science, Engineering, or a related field
5+ years of IT experience, including experience as a DevOps Engineer or in a similar role
Experience with AWS infrastructure design, implementation, and support
Proficiency in scripting languages (e.g., Bash, Python) and configuration management tools
Experience with database systems such as PostgreSQL, Redshift, and MySQL
Must be legally authorized to work in India for any employer without sponsorship
Must be fluent in English (written and spoken)
Successful completion of an interview is required to meet job qualifications
Reliable, punctual attendance is an essential function of the position

What will help you propel from the pack (Preferred Qualifications):

Master's degree in Computer Science or a related STEM field
Strong experience with continuous integration and delivery using Agile methodologies
DevOps experience in the transportation/airline industry
Knowledge of security best practices in a DevOps environment
Experience with logging and monitoring tools (e.g., Dynatrace, Datadog)
Strong problem-solving and communication skills
Experience with Harness tools
Experience with microservices architecture and serverless applications
Knowledge of database technologies (PostgreSQL, Redshift, MySQL)
AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified Developer)
Databricks Platform certifications
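
The monitoring responsibilities above often reduce to small automations like the following hedged sketch: creating a CloudWatch CPU alarm for a Redshift cluster with boto3. This is an illustrative stand-in for the Terraform-managed setup the listing implies; the cluster name, region, and SNS topic are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

# Placeholder identifiers; substitute real resources.
CLUSTER_ID = "analytics-prod"
ALERT_TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:data-eng-alerts"

# Alarm when average CPU stays above 85% for three 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName=f"{CLUSTER_ID}-high-cpu",
    Namespace="AWS/Redshift",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "ClusterIdentifier", "Value": CLUSTER_ID}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=85.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[ALERT_TOPIC_ARN],
    AlarmDescription="Sustained high CPU on the Redshift cluster",
)
```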

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

Basic Qualifications:

- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines

Amazon's Consumer Payments organization is seeking a highly quantitative, experienced Data Engineer to drive growth through analytics, automation of data pipelines, and enhancement of self-serve experiences. You will succeed in this role if you are an organized self-starter who can learn new technologies quickly and excel in a fast-paced environment. In this position, you will be a key contributor and sparring partner, developing analytics and insights that global executive management teams and business leaders will use to define global strategies and deep dive businesses. You will be part of the team that is focused on acquiring new merchants from around the world for payments around the world. The position is based in India but will interact with global leaders and teams in Europe, Japan, the US, and other regions. You should be highly analytical, resourceful, customer focused, team oriented, and able to work independently under time constraints to meet deadlines. You will be comfortable thinking big and diving deep. A proven track record of taking on end-to-end ownership and successfully delivering results in a fast-paced, dynamic business environment is strongly preferred.

Responsibilities include but are not limited to:

- Design, develop, implement, test, and operate large-scale, high-volume, high-performance data structures for analytics and reporting.
- Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, AWS Redshift, and OLAP technologies; model data and metadata for ad hoc and pre-built reporting.
- Work with product tech teams to build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark (a minimal sketch of one load step appears after this listing).
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
- Interface with business customers, gathering requirements and delivering complete reporting solutions.
- Collaborate with Analysts, Business Intelligence Engineers, and Product Managers to implement algorithms that exploit rich data sets for statistical analysis and machine learning.
- Participate in strategic and tactical planning discussions, including annual budget processes.
- Communicate effectively with product, business, and tech teams, and with other data teams.

Preferred Qualifications:

- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
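
As referenced in the listing above, here is a minimal, hypothetical sketch of a Redshift load step: staging data from S3 into a table with the COPY command via psycopg2 (Redshift speaks the PostgreSQL wire protocol). All connection details, the IAM role, and table/bucket names are placeholders.

```python
import psycopg2

# Placeholder connection details for an example cluster.
conn = psycopg2.connect(
    host="example-cluster.abc123xy.ap-south-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="…",  # prefer IAM auth / Secrets Manager in real pipelines
)

# COPY is the idiomatic bulk-load path into Redshift: the cluster pulls
# files from S3 in parallel rather than accepting row-by-row INSERTs.
copy_sql = """
    COPY staging.orders
    FROM 's3://example-bucket/curated/orders_daily/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

with conn, conn.cursor() as cur:  # `with conn` commits on success
    cur.execute(copy_sql)

conn.close()
```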

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description

Amazon's Last Mile Analytics & Quality (LMAQ) Maps team is building data-driven solutions to power the Last Mile delivery network that will serve hundreds of millions of customers worldwide. The Analytics team develops systems that model and optimize delivery operations through complex navigation and mapping datasets. The team specializes in processing and analyzing large-scale map and routing data across global markets. We work cross-functionally to seamlessly analyze and enhance last mile delivery network efficiency and service quality through sophisticated data processing pipelines.

Our team is seeking a passionate and data-driven Business Analyst with experience in handling large-scale datasets to lead our efforts in enhancing driver experience and operational efficiency through advanced business analytics. This role is inherently cross-functional: you will work closely with engineering, operations, product teams, and other stakeholders on last mile delivery challenges. Through close collaboration and by conducting analysis using statistical techniques and data visualizations, you will drive these challenges to resolution.

The ideal candidate has a background in business analytics, experience with large-scale data processing, an understanding of logistics, project management skills, and a strong customer-centric approach to drive improvements in last-mile delivery. This job requires strong communication skills and the ability to work independently in an evolving environment. Passion and drive for customer service is a must.

Key job responsibilities

Analyze complex business problems and develop data-driven solutions using SQL, Python, or R (a small pandas sketch follows this listing)
Handle and analyze large-scale navigation datasets, map datasets, and map attributes
Run and automate ETL jobs for processing and integrating large-scale datasets
Implement quality control measures for navigation and mapping data
Develop dashboards and reports using tools like Tableau/Power BI to track key performance metrics
Perform statistical analysis and create predictive models
Design and implement data quality checks and validation processes
Collaborate with stakeholders to identify business needs and opportunities
Lead process improvement initiatives
Translate business requirements into technical specifications
Present findings and recommendations to leadership

Basic Qualifications

Bachelor's degree or equivalent
Experience defining requirements and using data and metrics to draw business insights
Experience with SQL or ETL
1+ years of Excel or Tableau (data manipulation, macros, charts, and pivot tables) experience

Preferred Qualifications

Experience in Amazon Redshift and other AWS technologies
Experience using databases with large-scale data sets
Experience with reporting and data visualization tools such as QuickSight, Tableau, Power BI, or other BI packages
Experience writing business requirements documents, functional specifications, and use cases

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ - H84
Job ID: A2998722
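
As a flavor of the SQL/Python analysis described above, here is a minimal pandas sketch (toy data, hypothetical columns) that aggregates delivery records into a per-region defect-rate metric that a Tableau or Power BI dashboard could consume.

```python
import pandas as pd

# Hypothetical delivery records; in practice these would come from an
# ETL job or a Redshift/SQL extract.
deliveries = pd.DataFrame({
    "region":    ["HYD", "HYD", "BLR", "BLR", "BLR"],
    "delivered": [True, False, True, True, False],
})

# Defect rate per region: the share of deliveries that failed.
defect_rate = (
    deliveries
    .assign(failed=lambda df: ~df["delivered"])
    .groupby("region")["failed"]
    .mean()
    .rename("defect_rate")
    .reset_index()
)
print(defect_rate)
#   region  defect_rate
# 0    BLR     0.333333
# 1    HYD     0.500000
```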

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description

"When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that's what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product" – Jeff Bezos

Amazon.com's success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day, like clockwork? To make that happen, behind those millions of packages, billions of decisions get made by machines and humans. What is the accuracy of the customer-provided address? Do we know the exact location of the address on a map? Is there a safe place? Can we make an unattended delivery? Would a signature be required? Is the address a commercial property? Do we know the open business hours of the address? What if the customer is not home? Is there an alternate delivery address? Does the customer have any special preference? What other addresses also have packages to be delivered on the same day? Are we optimizing the delivery associate's route? Does the delivery associate know the locality well enough? Is there an access code to get inside the building? And the list simply goes on. At the core of all of it lies the quality of the underlying data that can help make those decisions in time.

The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both stamina and mental quickness to be able to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and executive presence to get in front of VPs and SVPs across Amazon will be imperative.

Key Strategic Objectives:

Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer on the Amazon last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work extremely effectively in a matrix organization, and able to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large-scale, high-visibility, high-impact projects. You will lead by example and be just as passionate about operational performance and predictability as about all other aspects of the customer experience.

The Successful Candidate Will Be Able To:

Effectively manage customer expectations and resolve conflicts that balance client and company needs
Develop processes to effectively maintain and disseminate project information to stakeholders
Be successful in a delivery-focused environment, determining the right processes to make the team successful; this opportunity requires excellent technical, problem-solving, and communication skills, and the candidate is not just a policy maker/spokesperson but drives to get things done
Possess superior analytical abilities and judgment; use quantitative and qualitative data to prioritize and influence, show creativity, experimentation, and innovation, and drive projects with urgency in this fast-paced environment
Partner with key stakeholders to develop the vision and strategy for customer experience on our platforms, and influence product roadmaps based on this strategy along with your teams
Support the scalable growth of the company by developing and enabling the success of the Operations leadership team
Serve as a role model for Amazon Leadership Principles inside and outside the organization
Actively seek to implement and distribute best practices across the operation

Basic Qualifications

3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience with data modeling, warehousing, and building ETL pipelines
Experience writing complex SQL queries
Experience with statistical analysis packages such as R, SAS, and Matlab
Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling

Preferred Qualifications

Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - Amazon Dev Center India - Hyderabad
Job ID: A2998446

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

Remote

Linkedin logo

Red is looking for a hands-on Data Analyst who thrives in messy datasets and can transform chaos into clarity: dissect the data, manipulate it, debug it, and make it work for marketing outcomes. You'll play a key part in managing, modeling, and optimizing our data infrastructure on AWS to drive decisions across product, marketing, and analytics.

What You’ll Do

Analyze complex data sets in AWS (S3, Redshift, Athena, Glue, etc.) and SQL environments to extract actionable insights (an Athena sketch follows this listing)
Design and implement SQL-based data models to support marketing analytics and reporting needs
Debug and troubleshoot ETL pipelines and data issues, working cross-functionally to resolve inconsistencies and anomalies
Perform advanced data wrangling and transformation tasks using SQL, Python (preferred), or similar tools
Collaborate with data engineers, product managers, and marketing teams to define data requirements and deliver clean, trusted datasets
Build automated dashboards and reports to support data-driven marketing campaigns and customer segmentation strategies
Own data quality: proactively identify and fix data integrity issues
Participate in sprint planning, retrospectives, and roadmap discussions as a data SME

What You’ll Need

3–5+ years of experience as a Data Analyst, Data Engineer, or similar role with hands-on experience in AWS cloud data tools
Strong proficiency in SQL: you can write complex queries, optimize performance, and model data like a pro
Experience with AWS data services: Redshift, S3, Athena, Glue, Lambda, etc.
A deep understanding of data modeling (dimensional, relational, and denormalized models)
Comfort with debugging messy datasets, joining disparate sources, and building clean datasets from scratch

If this matches your skill set, please attach your CV in Word format to get started!
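
As a taste of the day-to-day work described above, here is a minimal, hypothetical sketch that runs a deduplication-style SQL check over an S3-backed dataset with Athena via boto3. The database, table, and results bucket are placeholders.

```python
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Placeholder names for illustration: find customer_ids with duplicate rows.
QUERY = """
    SELECT customer_id, COUNT(*) AS n
    FROM marketing.raw_leads
    GROUP BY customer_id
    HAVING COUNT(*) > 1
    ORDER BY n DESC
    LIMIT 100;
"""

# Athena queries run asynchronously: submit, then poll for completion.
execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "marketing"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # the first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```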

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Linkedin logo

Overview

As a Data Modeler & Functional Data Senior Analyst, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse while satisfying project requirements. The role advocates Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams. As a member of the Data Modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like Master Data, Finance, Revenue Management, Supply Chain, Manufacturing, and Logistics.

The primary responsibility of this role is to work with Data Product Owners, Data Management Owners, and Data Engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities

Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies
Govern data design/modeling: documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned
Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to existing or new applications/reporting
Lead and support assigned project contractors (both onshore and offshore), orienting new contractors to standards, best practices, and tools
Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of enhancements or new development
Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework
Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management of business and financial policies, security, local-market regulatory rules, and consumer privacy by design principles (PII management), all linked across fundamental identity foundations
Drive collaborative reviews of design, code, data, and security feature implementations performed by data engineers to drive data product development
Analyze/profile source data and identify issues that impact accuracy, completeness, consistency, integrity, timeliness, and validity
Create source-to-target mapping documents, including identifying and documenting data transformations (a minimal sketch follows this listing)
Assume accountability and responsibility for assigned product delivery; be flexible and able to work with ambiguity, changing priorities, tight timelines, and critical situations/issues
Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders
Support data lineage and mapping of source-system data to canonical data stores for research, analysis, and productization

Qualifications

BA or BS degree required in Data Science/Management/Engineering, Business Analytics, Information Systems, Software Engineering, or a related technology discipline
8+ years of overall technology experience, including at least 4+ years of data modeling and systems architecture/integration
3+ years of experience with data lake infrastructure, data warehousing, and data analytics tools
4+ years of experience developing enterprise data models
3+ years of functional experience with SAP Master Data Governance (MDG), including the use of T-Codes to create/update records and query tables; extensive knowledge of all core Master Data tables, reference tables, and IDoc structures
3+ years of experience with Customer & Supplier Master Data
Strong SQL skills, with the ability to understand and write complex queries
Strong understanding of data lifecycle, integration, and Master Data Management principles
Excellent verbal and written communication and collaboration skills
Strong Excel skills for data analysis and manipulation
Strong analytical and problem-solving skills
Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models)
Experience with integration of multi-cloud services (Azure) with on-premises technologies
Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations
Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets
Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake
Experience with version control systems like GitHub and with deployment and CI tools
Working knowledge of agile development, including DevOps and DataOps concepts
Experience mapping disparate data sources into a common canonical model

Differentiating Competencies

Experience with metadata management, data lineage, and data glossaries
Experience with Azure Data Factory, Databricks, and Azure Machine Learning
Familiarity with business intelligence tools (such as Power BI)
CPG industry experience
Experience with Material, Location, Finance, Supply Chain, Logistics, Manufacturing & Revenue Management data
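
To make the source-to-target mapping responsibility above concrete, here is a minimal, generic sketch (not PepsiCo's actual tooling) in which a mapping document drives a per-record transform. All source and target field names are hypothetical.

```python
# Each entry: target column -> (source column, transform function).
S2T_MAPPING = {
    "customer_id":   ("cust_no", str.strip),
    "customer_name": ("cust_nm", str.title),
    "country_code":  ("ctry",    str.upper),
}

def apply_mapping(source_row: dict) -> dict:
    """Produce a target-model row from a source row using the mapping doc."""
    target = {}
    for target_col, (source_col, transform) in S2T_MAPPING.items():
        value = source_row.get(source_col)
        target[target_col] = transform(value) if value is not None else None
    return target

if __name__ == "__main__":
    raw = {"cust_no": " 1001 ", "cust_nm": "acme traders", "ctry": "in"}
    print(apply_mapping(raw))
    # -> {'customer_id': '1001', 'customer_name': 'Acme Traders', 'country_code': 'IN'}
```

Keeping the mapping in data rather than code means the same document can feed both the transform and the human-readable mapping specification.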

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Description

Role Title: Analyst, Analytics - Data Quality Developer (L08)

Company Overview: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise, and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet, and more. We have recently been ranked #2 among India's Best Companies to Work for by Great Place to Work. We were among the Top 50 India's Best Workplaces in Building a Culture of Innovation by GPTW and in the Top 25 Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.

Organizational Overview: Our Analytics organization comprises data analysts who focus on enabling strategies to enhance customer and partner experience and optimize business performance through data management and the development of full-stack descriptive-to-prescriptive analytics solutions using cutting-edge technologies, thereby enabling business growth.

Role Summary/Purpose: The Analyst, Analytics - Data Quality Developer (Individual Contributor) role is located in the India Analytics Hub (IAH) as part of Synchrony's enterprise Data Office. This role is responsible for the proactive design, implementation, execution, and monitoring of Data Quality process capabilities within Synchrony's public and private cloud and on-prem environments within the Chief Data Office. The Data Quality Developer - Analyst will work within the IT organization to support and participate in build and run activities and environments (e.g., DevOps) for Data Quality.

Key Responsibilities

Monitor and maintain Data Quality and Data Issue Management operating-level agreements in support of data quality rule execution and reporting
Assist in performing root cause analysis for data quality issues and data usage challenges, particularly for the workload migration to the public cloud
Recommend, design, implement, and refine/remediate data quality specifications within Synchrony's approved Data Quality platforms (a generic sketch of one such rule follows this listing)
Participate in the solution design of data quality and data issue management technical and procedural solutions, including metric reporting
Work closely with Technology teams and key stakeholders to ensure data quality issues are prioritized, analyzed, and addressed
Regularly communicate the status of data quality issues and progress to key stakeholders
Participate in the planning and execution of agile release cycles and iterations

Qualifications/Requirements

Minimum of 1 year's experience in data quality management, including implementing data quality rules, data profiling, and root cause analysis for data issues, with exposure to cloud environments (AWS, Azure, or Google Cloud) and on-premise infrastructure
Minimum of 1 year's experience with data quality or data integration tools such as Ab Initio, Informatica, Collibra, Stonebranch, or Tableau, gained through hands-on experience or projects
Good communication and collaboration skills, strong analytical thinking and problem-solving abilities, the ability to work independently and manage multiple tasks, and attention to detail

Desired Characteristics

Broad understanding of banking, credit cards, payment solutions, collections, marketing, risk, and regulatory & compliance
Experience using data governance and data quality tools such as Collibra, Ab Initio Express>IT, and Ab Initio MetaHub
Proficiency in writing and understanding SQL
Experience querying/analyzing data in cloud-based environments (e.g., AWS, Redshift)
AWS certifications such as AWS Cloud Practitioner or AWS Certified Data Analytics - Specialty
Intermediate to advanced MS Office Suite skills, including PowerPoint, Excel, Access, and Visio
Strong relationship management and influencing skills to build enduring and productive alliances across matrix organizations
Demonstrated success in managing multiple deliverables concurrently, often within aggressive timeframes; ability to cope under time pressure
Experience partnering with a diverse team composed of staff and consultants located in multiple locations and time zones

Eligibility Criteria: Bachelor's degree, preferably in Engineering or Computer Science, with more than 1 year of hands-on data management experience, or, in lieu of a degree, more than 3 years of such experience.

Work Timings: This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs; please discuss this with the hiring manager for more details.

For Internal Applicants

Understand the criteria and mandatory skills required for the role before applying
Inform your manager and HRM before applying for any role on Workday
Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and upload your updated resume (Word or PDF format)
Must not be on any corrective action plan (Formal/Final Formal) or PIP
L4 to L7 employees who have completed 12 months in the organization and 12 months in their current role and level are eligible
L8+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible

Grade/Level: 08
Job Family Group: Information Technology
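
As a small illustration of the data-quality-rule work this role describes, here is a minimal, tool-agnostic sketch (not Synchrony's actual platform) that evaluates a null-rate rule against a batch of records and reports a pass/fail result. Field names and the threshold are hypothetical.

```python
from typing import Any

def null_rate(records: list[dict[str, Any]], field: str) -> float:
    """Fraction of records where `field` is missing or None."""
    if not records:
        return 0.0
    missing = sum(1 for r in records if r.get(field) is None)
    return missing / len(records)

def check_rule(records: list[dict[str, Any]], field: str, max_null_rate: float) -> dict:
    """Evaluate one data quality rule and return a small result report."""
    rate = null_rate(records, field)
    return {
        "rule": f"null_rate({field}) <= {max_null_rate}",
        "observed": round(rate, 4),
        "passed": rate <= max_null_rate,
    }

if __name__ == "__main__":
    # Hypothetical sample batch, e.g. pulled from a staging table.
    batch = [
        {"account_id": "A1", "email": "x@example.com"},
        {"account_id": "A2", "email": None},
        {"account_id": "A3", "email": "y@example.com"},
    ]
    print(check_rule(batch, "email", max_null_rate=0.10))
    # -> {'rule': 'null_rate(email) <= 0.1', 'observed': 0.3333, 'passed': False}
```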

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Data Engineer

Location: Bangalore
Experience: 5-8 yrs

Expertise in data analysis methodologies and processes and their linkages to other processes
Technical expertise with data models, data mining, and segmentation techniques
Advanced SQL skills and experience with relational databases and database design
Strong knowledge of Python, PySpark, SQL, and JavaScript
Experience building and deploying machine learning models
Experience with integration efforts (packaged and customized applications) from a data analysis perspective
Strong business analyst skills; able to work with many different stakeholders to elicit and document requirements
Solid customer service and interpersonal skills
Critical thinking skills and attention to detail
Good judgement, initiative, commitment, and resourcefulness
Proficiency with the Skywise platform and tools, e.g., Contour, Code Workbook, Code, Slate, and Ontology definitions
Knowledge of airline and MRO operations (preferred)
AWS Cloud services skills, such as EC2, RDS, and Redshift (good to have)

Responsibilities:

Interact and work collaboratively with product teams
Analyze and organize raw data
Build data systems and pipelines
Evaluate business needs and objectives
Interpret trends and patterns
Conduct complex data analysis and report on results
Combine raw information from different sources
Explore ways to enhance data quality and reliability
Independently develop solutions (workflows, small apps, and dashboards) on the Skywise platform
Collaborate with peers from other teams to deliver best-in-class solutions (workflows, small apps, and dashboards) to end users in the airlines
Apply strong airline domain knowledge to engage with airline end users and articulate their pain points and business requirements
Create a repository of solutions developed

Posted 2 weeks ago

Apply

Exploring Redshift Jobs in India

The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Amazon Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a plethora of opportunities in various industries across the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

In the field of Redshift, a typical career path may include roles such as:

  1. Junior Developer
  2. Data Engineer
  3. Senior Data Engineer
  4. Tech Lead
  5. Data Architect

Related Skills

Apart from expertise in Redshift, proficiency in the following skills can be beneficial:

  • SQL
  • ETL tools
  • Data modeling
  • Cloud computing (AWS)
  • Python/R programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift. (medium; illustrated in the sketch after this list)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
  • What are the different types of Redshift nodes and their use cases? (basic)
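
For interview preparation, a concrete example helps anchor the DISTKEY/SORTKEY question above. The sketch below creates a fact table whose distribution key co-locates rows that join on user_id and whose sort key speeds range scans on event_ts; all names and connection details are hypothetical. It uses psycopg2, which works with Redshift because the cluster speaks the PostgreSQL wire protocol.

```python
import psycopg2

# Placeholder connection details for an example cluster.
conn = psycopg2.connect(
    host="example-cluster.abc123xy.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="admin",
    password="…",  # placeholder; use IAM auth or a secrets store in practice
)

# DISTKEY controls which node slice a row lands on (joins on user_id
# avoid cross-node shuffles); SORTKEY orders rows on disk so range
# filters on event_ts can skip whole blocks.
ddl = """
    CREATE TABLE IF NOT EXISTS analytics.page_events (
        event_id   BIGINT,
        user_id    BIGINT,
        event_ts   TIMESTAMP,
        page_url   VARCHAR(1024)
    )
    DISTSTYLE KEY
    DISTKEY (user_id)
    SORTKEY (event_ts);
"""

with conn, conn.cursor() as cur:
    cur.execute(ddl)

conn.close()
```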

Conclusion

As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!
