5.0 years
0 Lacs
India
Remote
Red is looking for a hands-on Data Analyst who thrives in messy datasets and can transform chaos into clarity. You'll dissect, manipulate, and debug data to make it work for marketing outcomes, and you'll play a key part in managing, modeling, and optimizing our data infrastructure on AWS to drive decisions across product, marketing, and analytics.

What You'll Do
Analyze complex data sets in AWS (S3, Redshift, Athena, Glue, etc.) and SQL environments to extract actionable insights.
Design and implement SQL-based data models to support marketing analytics and reporting needs.
Debug and troubleshoot ETL pipelines and data issues, working cross-functionally to resolve inconsistencies and anomalies.
Perform advanced data wrangling and transformation tasks using SQL, Python (preferred), or similar tools (see the sketch after this posting).
Collaborate with data engineers, product managers, and marketing teams to define data requirements and deliver clean, trusted datasets.
Build automated dashboards and reports to support data-driven marketing campaigns and customer segmentation strategies.
Own data quality: proactively identify and fix data integrity issues.
Participate in sprint planning, retrospectives, and roadmap discussions as a data SME.

What You'll Need
3–5+ years of experience as a Data Analyst, Data Engineer, or similar role with hands-on experience in AWS cloud data tools.
Strong proficiency in SQL: you can write complex queries, optimize performance, and model data like a pro.
Experience with AWS data services: Redshift, S3, Athena, Glue, Lambda, etc.
A deep understanding of data modeling (dimensional, relational, and denormalized models).
Comfort with debugging messy datasets, joining disparate sources, and building clean datasets from scratch.

If this matches your skillset, please attach your CV in Word format to get started!
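To ground the Athena/SQL work described above, here is a minimal, hypothetical sketch using the AWS SDK for pandas (awswrangler). The database, table, and column names are invented for illustration, and the snippet assumes AWS credentials are already configured.

```python
import awswrangler as wr  # AWS SDK for pandas

# Hypothetical campaign-events table registered in the Glue catalog.
SQL = """
SELECT campaign_id,
       COUNT(*)                                           AS events,
       COUNT(DISTINCT user_id)                            AS reach,
       SUM(CASE WHEN event = 'convert' THEN 1 ELSE 0 END) AS conversions
FROM campaign_events
WHERE event_date >= DATE '2024-01-01'
GROUP BY campaign_id
ORDER BY conversions DESC
"""

# Athena runs the query against data in S3 and returns a pandas DataFrame.
df = wr.athena.read_sql_query(sql=SQL, database="marketing")
df["conversion_rate"] = df["conversions"] / df["reach"]
print(df.head())
```

The same pattern extends to Redshift or other Glue-cataloged data; only the connector call changes.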
Posted 2 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Overview
As a Data Modeler & Functional Data Senior Analyst, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analysing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse that satisfy project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams.

As a member of the Data Modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like Master Data, Finance, Revenue Management, Supply Chain, Manufacturing, and Logistics. The primary responsibility of this role is to work with Data Product Owners, Data Management Owners, and Data Engineering teams to create physical and logical data models with an extensible philosophy that supports future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. You will establish data design patterns that drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities
Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies.
Govern data design/modeling: documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to existing, or new, applications/reporting.
Lead and support assigned project contractors (both onshore and offshore), orienting new contractors to standards, best practices, and tools.
Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of enhancements or new development.
Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management of business and financial policies, security, local-market regulatory rules, and consumer privacy by design principles (PII management), all linked across fundamental identity foundations.
Drive collaborative reviews of design, code, data, and security feature implementations performed by data engineers to drive data product development.
Analyze/profile source data and identify issues that impact accuracy, completeness, consistency, integrity, timeliness, and validity.
Create source-to-target mapping documents, including identifying and documenting data transformations.
Assume accountability and responsibility for assigned product delivery; be flexible and able to work with ambiguity, changing priorities, tight timelines, and critical situations/issues.
Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
Support data lineage and mapping of source system data to canonical data stores for research, analysis, and productization.

Qualifications
BA or BS degree required in Data Science/Management/Engineering, Business Analytics, Information Systems, Software Engineering, or a related technology discipline.
8+ years of overall technology experience, including at least 4+ years of Data Modeling and Systems Architecture/Integration.
3+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools.
4+ years of experience developing Enterprise Data Models (a toy star-schema sketch follows this posting).
3+ years of functional experience with SAP Master Data Governance (MDG), including use of T-Codes to create/update records and query tables. Extensive knowledge of all core Master Data tables, Reference Tables, and IDoc structures.
3+ years of experience with Customer & Supplier Master Data.
Strong SQL skills with the ability to understand and write complex queries.
Strong understanding of data lifecycle, integration, and Master Data Management principles.
Excellent verbal and written communication and collaboration skills.
Strong Excel skills for data analysis and manipulation.
Strong analytical and problem-solving skills.
Expertise in Data Modeling tools (ER/Studio, Erwin, IDM/ARDM models).
Experience with integration of multi-cloud services (Azure) with on-premises technologies.
Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake.
Experience with version control systems like GitHub and deployment & CI tools.
Working knowledge of agile development, including DevOps and DataOps concepts.
Experience mapping disparate data sources into a common canonical model.

Differentiating Competencies
Experience with metadata management, data lineage, and data glossaries.
Experience with Azure Data Factory, Databricks, and Azure Machine Learning.
Familiarity with business intelligence tools (such as Power BI).
CPG industry experience.
Experience with Material, Location, Finance, Supply Chain, Logistics, Manufacturing & Revenue Management data.
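As an illustration of the dimensional modeling this role centers on, here is a toy star-schema sketch (one dimension with SCD2 effective dating, one fact). All table and column names are invented, and SQLite stands in for Redshift/Synapse/Snowflake so the snippet runs anywhere.

```python
import sqlite3

DDL = """
CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,  -- surrogate key
    product_code TEXT NOT NULL,        -- natural/business key from the source
    category     TEXT,
    valid_from   TEXT,                 -- SCD2 effective-dating columns
    valid_to     TEXT
);

CREATE TABLE fact_sales (
    date_key    INTEGER NOT NULL,
    product_key INTEGER NOT NULL REFERENCES dim_product (product_key),
    units       INTEGER NOT NULL,
    net_revenue REAL    NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)  # ['dim_product', 'fact_sales']
```

An extensible physical model like this absorbs new attributes and sources with minimal rework, which is the design philosophy the posting emphasizes.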
Posted 2 weeks ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
JOB_POSTING-3-70891

Job Description
Role Title: Analyst, Analytics - Data Quality Developer (L08)

Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise, and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet, and more. We have recently been ranked #2 among India's Best Companies to Work For by Great Place to Work (GPTW). We were among the Top 50 of India's Best Workplaces in Building a Culture of Innovation and the Top 25 Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer flexibility and choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on advancing diverse talent into leadership roles.

Organizational Overview
Our Analytics organization comprises data analysts who focus on enabling strategies to enhance customer and partner experience and optimize business performance through data management and the development of full-stack descriptive-to-prescriptive analytics solutions using cutting-edge technologies, thereby enabling business growth.

Role Summary/Purpose
The Analyst, Analytics - Data Quality Developer (Individual Contributor) role is located in the India Analytics Hub (IAH) as part of Synchrony's enterprise Data Office. This role is responsible for the proactive design, implementation, execution, and monitoring of data quality process capabilities within Synchrony's public and private cloud and on-prem environments within the Chief Data Office. The Data Quality Developer - Analyst will work within the IT organization to support and participate in build and run activities and environments (e.g., DevOps) for Data Quality.

Key Responsibilities
Monitor and maintain Data Quality and Data Issue Management operating-level agreements in support of data quality rule execution and reporting.
Assist in performing root cause analysis for data quality issues and data usage challenges, particularly for the workload migration to the public cloud.
Recommend, design, implement, and refine/remediate data quality specifications within Synchrony's approved Data Quality platforms (a generic sketch of such rules follows this posting).
Participate in the solution design of data quality and data issue management technical and procedural solutions, including metric reporting.
Work closely with Technology teams and key stakeholders to ensure data quality issues are prioritized, analyzed, and addressed.
Regularly communicate the state of data quality issues and progress to key stakeholders.
Participate in the planning and execution of agile release cycles and iterations.

Qualifications/Requirements
Minimum of 1 year's experience in data quality management, including implementing data quality rules, data profiling, and root cause analysis for data issues, with exposure to cloud environments (AWS, Azure, or Google Cloud) and on-premise infrastructure.
Minimum of 1 year's experience with data quality or data integration tools such as Ab Initio, Informatica, Collibra, Stonebranch, or Tableau, gained through hands-on experience or projects.
Good communication and collaboration skills, strong analytical thinking and problem-solving abilities, the ability to work independently and manage multiple tasks, and attention to detail.

Desired Characteristics
Broad understanding of banking, credit card, payment solutions, collections, marketing, risk, and regulatory & compliance.
Experience using data governance and data quality tools such as Collibra, Ab Initio Express>IT, and Ab Initio MetaHub.
Proficient in writing/understanding SQL.
Experience querying/analyzing data in cloud-based environments (e.g., AWS, Redshift).
AWS certifications such as AWS Cloud Practitioner or AWS Certified Data Analytics - Specialty.
Intermediate to advanced MS Office Suite skills, including PowerPoint, Excel, Access, and Visio.
Strong relationship management and influencing skills to build enduring and productive alliances across matrix organizations.
Demonstrated success in managing multiple deliverables concurrently, often within aggressive timeframes; ability to cope under time pressure.
Experience partnering with a diverse team composed of staff and consultants located in multiple locations and time zones.

Eligibility Criteria
Bachelor's degree, preferably in Engineering or Computer Science, with more than 1 year of hands-on data management experience, or, in lieu of a degree, more than 3 years' experience.

Work Timings
This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.

For Internal Applicants
Understand the criteria and mandatory skills required for the role before applying.
Inform your manager and HRM before applying for any role on Workday.
Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format).
You must not be on any corrective action plan (Formal/Final Formal) or PIP.
L4 to L7 employees who have completed 12 months in the organization and 12 months in their current role and level are eligible.
L8+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible.

Grade/Level: 08
Job Family Group: Information Technology
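The posting's data quality platforms (Ab Initio, Informatica, Collibra) are proprietary, so the sketch below illustrates the rule types it describes (completeness, uniqueness, validity) generically with pandas. The data, column names, and thresholds are invented.

```python
import pandas as pd

df = pd.DataFrame({
    "account_id":   [101, 102, 102, 104],
    "credit_limit": [5000, None, 7500, -20],
})

results = {
    # Completeness: no missing credit limits
    "credit_limit_complete":     bool(df["credit_limit"].notna().all()),
    # Uniqueness: account_id must not repeat
    "account_id_unique":         df["account_id"].is_unique,
    # Validity: credit limits must be non-negative
    "credit_limit_non_negative": bool((df["credit_limit"].dropna() >= 0).all()),
}

for rule, passed in results.items():
    print(f"{rule}: {'PASS' if passed else 'FAIL'}")
```

In a production platform, each failed rule would feed the issue-management workflow and the operating-level-agreement reporting this role owns.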
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Engineer
Location: Bangalore
Experience: 5-8 yrs

● Expertise in data analysis methodologies and processes and their linkages to other processes
● Technical expertise with data models, data mining, and segmentation techniques
● Advanced SQL skills and experience with relational databases and database design
● Strong knowledge of Python, PySpark, SQL, and JavaScript
● Experience building and deploying machine learning models
● Experience with integration efforts (packaged and customized applications) from a data analysis perspective
● Strong business analyst skills; able to work with many different stakeholders to elicit and document requirements
● Solid customer service and interpersonal skills
● Critical thinking skills and attention to detail
● Good judgement, initiative, commitment, and resourcefulness
● Proficiency with the Skywise platform and tools, e.g. Contour, Code Workbook, Code, Slate, and Ontology definitions
● Knowledge of airline and MRO operations (preferred)
● AWS cloud services skills, on services such as EC2, RDS, and Redshift (good to have)

Responsibilities:
● Interact and work collaboratively with product teams
● Analyze and organize raw data
● Build data systems and pipelines
● Evaluate business needs and objectives
● Interpret trends and patterns
● Conduct complex data analysis and report on results
● Combine raw information from different sources
● Explore ways to enhance data quality and reliability
● Independently develop solutions (workflows, small apps, and dashboards) on the Skywise platform
● Collaborate with peers from other teams to deliver best-in-class solutions (workflows, small apps, and dashboards) to end users in the airlines
● Use strong airline domain knowledge to engage with airline end users and articulate their pain points and business requirements
● Create a repository of solutions developed
Posted 2 weeks ago
3.0 - 5.0 years
4 - 8 Lacs
Hyderabad
Work from Office
THE TEAM
This opportunity is to join a team of highly skilled engineers who are redesigning the Creditsafe data platform with high throughput and scalability as the primary goals. The data delivery platform is built on AWS Redshift and S3 cloud storage. The platform is expected to manage over a billion objects, along with a daily increment of more than 10 million objects, while handling addition, deletion, and correction of our data and indexes in an auditable manner. Our data processing application is entirely based on Python and designed to efficiently transform incoming raw data volumes into an API-consumable schema. The team is also building highly available, low-latency APIs to enable our clients with faster data delivery.

JOB PROFILE
Join us on the project of redesigning the Creditsafe platform for the cloud. You will be expected to work with technologies such as Python, Airflow, Redshift, DynamoDB, AWS Glue, and S3.

KEY DUTIES AND RESPONSIBILITIES
You will actively contribute to the codebase and participate in peer reviews.
Design and build a metadata-driven, event-based distributed data processing platform using technologies such as Python, Airflow, Redshift, DynamoDB, AWS Glue, and S3 (a minimal sketch of this pattern follows this posting).
As an experienced engineer, you will play a critical role in the design, development, and deployment of our business-critical system.
You will build and scale Creditsafe APIs to securely support over 1,000 transactions per second using serverless technologies.
Execute practices such as continuous integration and test-driven development to enable the rapid delivery of working code.
Understand company domain data to make recommendations for improving existing products.
The responsibilities detailed above are not exhaustive, and you may be requested to take on additional responsibilities deemed reasonable by your direct line manager.

SKILLS AND QUALIFICATIONS
Demonstrated ability to write clean, efficient code and knit it together with the cloud environment for best performance.
Proven experience of development within a commercial environment, creating production-grade APIs and data pipelines in Python.
You are looking to grow your skills through daily technical challenges and enjoy problem solving and whiteboarding in collaboration with a team.
You have excellent communication skills and the ability to explain your views clearly to the team, and are open to understanding theirs.
A proven track record of drawing on deep and broad technical expertise to mentor engineers, complete hands-on technical work, and provide leadership on complex technology issues.
Share your ideas collaboratively via wikis, discussion boards, etc., and share any decisions made, for the benefit of others.
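Here is a minimal, hypothetical sketch (Airflow 2.x) of the metadata-driven pattern named above: the task list is generated from a config mapping rather than hard-coded, so new datasets are onboarded by adding metadata, not code. Dataset names, S3 prefixes, and table names are invented.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Illustrative metadata; in practice this could live in DynamoDB or S3.
DATASETS = {
    "companies": {"s3_prefix": "raw/companies/", "target_table": "stg_companies"},
    "filings":   {"s3_prefix": "raw/filings/",   "target_table": "stg_filings"},
}

def transform(dataset: str, s3_prefix: str, target_table: str) -> None:
    # Placeholder for the real work: read raw data from S3, reshape it into
    # the API-consumable schema, and load it into Redshift.
    print(f"{dataset}: {s3_prefix} -> {target_table}")

with DAG(
    dag_id="metadata_driven_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    for name, meta in DATASETS.items():
        PythonOperator(
            task_id=f"transform_{name}",
            python_callable=transform,
            op_kwargs={"dataset": name, **meta},
        )
```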
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Overview
As an Analyst, Data Modeler, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analysing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse that satisfy project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams.

As a member of the Data Modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibility of this role is to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy that supports future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. You will establish data design patterns that drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities
Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies.
Govern data design/modeling: documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to existing, or new, applications/reporting.
Support assigned project contractors (both onshore and offshore), orienting new contractors to standards, best practices, and tools.
Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of changes or new development.
Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management of business and financial policies, security, local-market regulatory rules, and consumer privacy by design principles (PII management), all linked across fundamental identity foundations.
Drive collaborative reviews of design, code, data, and security feature implementations performed by data engineers to drive data product development.
Assist with data planning, sourcing, collection, profiling, and transformation.
Create source-to-target mappings for ETL and BI developers.
Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data streaming (consumption/production); data in transit.
Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing.
Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
Support data lineage and mapping of source system data to canonical data stores for research, analysis, and productization.

Qualifications
Bachelor's degree required in Computer Science, Data Management/Analytics/Science, Information Systems, Software Engineering, or a related technology discipline.
5+ years of overall technology experience, including at least 2+ years of data modeling and systems architecture.
2+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools.
2+ years of experience developing enterprise data models.
Experience building solutions in the retail or supply chain space.
Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models).
Experience with integration of multi-cloud services (Azure) with on-premises technologies.
Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake.
Experience with version control systems like GitHub and deployment & CI tools.
Experience with Azure Data Factory, Databricks, and Azure Machine Learning is a plus.
Experience with metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Familiarity with business intelligence tools (such as Power BI).
Excellent verbal and written communication and collaboration skills.
Posted 2 weeks ago
5.0 years
0 Lacs
Kochi, Kerala, India
On-site
Role: Data Analyst
Location: Kochi & Trivandrum
Experience: 5+ years
Mandatory Skills: Data Analysis, Power BI, Python, SQL, Amazon Athena
Notice Period: 0-20 days

Job Purpose
We are seeking an experienced and analytical Senior Data Analyst to join our Data & Analytics team. The ideal candidate will have a strong background in data analysis, visualization, and stakeholder communication. You will be responsible for turning data into actionable insights that help shape strategic and operational decisions across the organization.

Job Description / Duties & Responsibilities
• Collaborate with business stakeholders to understand data needs and translate them into analytical requirements.
• Analyze large datasets to uncover trends, patterns, and actionable insights.
• Design and build dashboards and reports using Power BI.
• Perform ad-hoc analysis and develop data-driven narratives to support decision-making.
• Ensure data accuracy, consistency, and integrity through data validation and quality checks.
• Build and maintain SQL queries, views, and data models for reporting purposes.
• Communicate findings clearly through presentations, visualizations, and written summaries.
• Partner with data engineers and architects to improve data pipelines and architecture.
• Contribute to the definition of KPIs, metrics, and data governance standards.

Job Specification / Skills and Competencies
• Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, Economics, or a related field.
• 5+ years of experience in a data analyst or business intelligence role.
• Advanced proficiency in SQL and experience working with relational databases (e.g., SQL Server, Redshift, Snowflake).
• Hands-on experience in Power BI.
• Proficiency in Python, Excel, and data storytelling.
• Understanding of data modelling, ETL concepts, and basic data architecture.
• Strong analytical thinking and problem-solving skills.
• Excellent communication and stakeholder management skills.
• Adherence to Information Security Management policies and procedures.

Soft Skills Required
▪ Must be a good team player with good communication skills
▪ Must have good presentation skills
▪ Must be a proactive problem solver and a self-driven leader
▪ Manage and nurture a team of data engineers
Posted 2 weeks ago
5.0 years
0 Lacs
Ernakulam, Kerala, India
On-site
Job Purpose
We are seeking an experienced and analytical Senior Data Analyst to join our Data & Analytics team. The ideal candidate will have a strong background in data analysis, visualization, and stakeholder communication. You will be responsible for turning data into actionable insights that help shape strategic and operational decisions across the organization.

Job Description / Duties & Responsibilities
Collaborate with business stakeholders to understand data needs and translate them into analytical requirements.
Analyze large datasets to uncover trends, patterns, and actionable insights.
Design and build dashboards and reports using Power BI.
Perform ad-hoc analysis and develop data-driven narratives to support decision-making.
Ensure data accuracy, consistency, and integrity through data validation and quality checks.
Build and maintain SQL queries, views, and data models for reporting purposes.
Communicate findings clearly through presentations, visualizations, and written summaries.
Partner with data engineers and architects to improve data pipelines and architecture.
Contribute to the definition of KPIs, metrics, and data governance standards.

Job Specification / Skills and Competencies
Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, Economics, or a related field.
5+ years of experience in a data analyst or business intelligence role.
Advanced proficiency in SQL and experience working with relational databases (e.g., SQL Server, Redshift, Snowflake).
Hands-on experience in Power BI.
Proficiency in Python, Excel, and data storytelling.
Understanding of data modelling, ETL concepts, and basic data architecture.
Strong analytical thinking and problem-solving skills.
Excellent communication and stakeholder management skills.
Adherence to Information Security Management policies and procedures.

Soft Skills Required
▪ Must be a good team player with good communication skills
▪ Must have good presentation skills
▪ Must be a proactive problem solver and a self-driven leader

Skills: AWS, data validation, Power BI, data analysis, data quality checks, Python, SQL, data modeling, stakeholder communication, Excel, data storytelling, data visualization, team collaboration, problem-solving, analytical thinking, presentation skills, ETL concepts, Athena
Posted 2 weeks ago
5.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
Job Purpose
We are seeking an experienced and analytical Senior Data Analyst to join our Data & Analytics team. The ideal candidate will have a strong background in data analysis, visualization, and stakeholder communication. You will be responsible for turning data into actionable insights that help shape strategic and operational decisions across the organization.

Job Description / Duties & Responsibilities
Collaborate with business stakeholders to understand data needs and translate them into analytical requirements.
Analyze large datasets to uncover trends, patterns, and actionable insights.
Design and build dashboards and reports using Power BI.
Perform ad-hoc analysis and develop data-driven narratives to support decision-making.
Ensure data accuracy, consistency, and integrity through data validation and quality checks.
Build and maintain SQL queries, views, and data models for reporting purposes.
Communicate findings clearly through presentations, visualizations, and written summaries.
Partner with data engineers and architects to improve data pipelines and architecture.
Contribute to the definition of KPIs, metrics, and data governance standards.

Job Specification / Skills and Competencies
Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, Economics, or a related field.
5+ years of experience in a data analyst or business intelligence role.
Advanced proficiency in SQL and experience working with relational databases (e.g., SQL Server, Redshift, Snowflake).
Hands-on experience in Power BI.
Proficiency in Python, Excel, and data storytelling.
Understanding of data modelling, ETL concepts, and basic data architecture.
Strong analytical thinking and problem-solving skills.
Excellent communication and stakeholder management skills.
Adherence to Information Security Management policies and procedures.

Soft Skills Required
▪ Must be a good team player with good communication skills
▪ Must have good presentation skills
▪ Must be a proactive problem solver and a self-driven leader

Skills: AWS, data validation, Power BI, data analysis, data quality checks, Python, SQL, data modeling, stakeholder communication, Excel, data storytelling, data visualization, team collaboration, problem-solving, analytical thinking, presentation skills, ETL concepts, Athena
Posted 2 weeks ago
0 years
0 Lacs
Udupi, Karnataka, India
On-site
Cloud Leader (Jr. Data Architect)
7+ yrs of IT experience
Should have worked on at least two relational databases (SQL Server/Oracle/Postgres) and one NoSQL database
Should be able to work with the presales team, proposing the best solution/architecture
Should have design experience on BigQuery/Redshift/Synapse
Manage the end-to-end product life cycle, from proposal to delivery, and regularly check with delivery on architecture improvement
Should be aware of security protocols for in-transit data and encryption/decryption of PII data
Good understanding of analytics tools for effective analysis of data
Should have been part of a production deployment team and a production support team
Experience with big data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
Experience with object-oriented/object-functional scripting languages: Python, Java, C++, Scala, etc.
Experience in ETL and data warehousing
Experience and firm understanding of relational and non-relational databases like MySQL, MS SQL Server, Postgres, MongoDB, Cassandra, etc.
Experience with cloud platforms like AWS, GCP, and Azure
Experience with workflow management using tools like Apache Airflow

Preferred
Should be aware of design best practices for OLTP and OLAP systems
Should be part of the team designing the DB and pipeline
Should be able to propose the right architecture and data warehouse/data mesh approaches
Should be aware of data sharing and multi-cloud implementation
Should have exposure to load testing methodologies, debugging pipelines, and delta load handling
Worked on heterogeneous migration projects
Experience on multiple cloud platforms
Posted 2 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
Cloud Data Engineer (AWS/Azure/Databricks/GCP)
Experience: 2-4 years in Data Engineering

Job Description
We are seeking skilled and dynamic Cloud Data Engineers specializing in AWS, Azure, Databricks, and GCP. The ideal candidate will have a strong background in data engineering, with a focus on data ingestion, transformation, and warehousing. They should also possess excellent knowledge of PySpark or Spark, and a proven ability to optimize performance in Spark job executions.

Design, build, and maintain scalable data pipelines for a variety of cloud platforms including AWS, Azure, Databricks, and GCP.
Implement data ingestion and transformation processes to facilitate efficient data warehousing.
Utilize cloud services to enhance data processing capabilities:
AWS: Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, SNS.
Azure: Data Factory, Synapse Analytics, Functions, Cosmos DB, Event Grid, Logic Apps, Service Bus.
GCP: Dataflow, BigQuery, DataProc, Cloud Functions, Bigtable, Pub/Sub, Data Fusion.
Optimize Spark job performance to ensure high efficiency and reliability (a minimal PySpark sketch follows this posting).
Stay proactive in learning and implementing new technologies to improve data processing frameworks.
Collaborate with cross-functional teams to deliver robust data solutions.
Work on Spark Streaming for real-time data processing as necessary.

Qualifications
2-4 years of experience in data engineering with a strong focus on cloud environments.
Proficiency in PySpark or Spark is mandatory.
Proven experience with data ingestion, transformation, and data warehousing.
In-depth knowledge and hands-on experience with cloud services (AWS/Azure/GCP).
Demonstrated ability in performance optimization of Spark jobs.
Strong problem-solving skills and the ability to work independently as well as in a team.
Cloud certification (AWS, Azure, or GCP) is a plus.
Familiarity with Spark Streaming is a bonus.

Mandatory Skill Sets
Python, PySpark, SQL with (AWS or Azure or GCP)

Preferred Skill Sets
Python, PySpark, SQL with (AWS or Azure or GCP)

Years of Experience Required
2-4 years

Education Qualification
BE/BTech, ME/MTech, MBA, MCA

Degrees/Field of Study Required: Bachelor of Engineering, Master of Business Administration, Master of Engineering, Bachelor of Technology

Required Skills
PySpark, Python (Programming Language), Structured Query Language (SQL)

Optional Skills
Accepting Feedback, Active Listening, Artificial Intelligence, Big Data, C++ Programming Language, Communication, Complex Data Analysis, Data-Driven Decision Making (DIDM), Data Engineering, Data Lake, Data Mining, Data Modeling, Data Pipeline, Data Quality, Data Science, Data Science Algorithms, Data Science Troubleshooting, Data Science Workflows, Deep Learning, Emotional Regulation, Empathy, Inclusion, Intellectual Curiosity, Machine Learning {+ 12 more}

Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
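As a concrete flavor of the Spark work described above, here is a minimal PySpark sketch of the ingest-transform-write pattern with one common optimization, a broadcast join that keeps the large side of the join from shuffling. Bucket paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("ingest_orders").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/raw/orders/")        # large fact
countries = spark.read.parquet("s3://example-bucket/ref/countries/")  # small dim

revenue = (
    orders
    .filter(F.col("order_date") >= "2024-01-01")    # filter early to prune input
    .join(broadcast(countries), on="country_code")  # avoid shuffling the fact table
    .groupBy("country_name")
    .agg(F.sum("amount").alias("revenue"))
)

# Write the curated result back to the lake for the warehouse to pick up.
revenue.write.mode("overwrite").parquet("s3://example-bucket/curated/revenue_by_country/")
```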
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Solution Engineer
Location: Chennai (Hybrid)
Type: C2H

Why MResult?
Founded in 2004, MResult is a global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. MResult's expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value. As part of our team, you will collaborate with top minds in the industry to deliver cutting-edge solutions that solve real-world challenges.

Website: https://mresult.com/
LinkedIn: https://www.linkedin.com/company/mresult/

What We Offer
At MResult, you can leave your mark on projects at the world's most recognized brands, access opportunities to grow and upskill, and do your best work with the flexibility of hybrid work models. Great work is rewarded, and leaders are nurtured from within. Our values of Agility, Collaboration, Client Focus, Innovation, and Integrity are woven into our culture, guiding every decision.

What This Role Requires
In the role of Solution Engineer, you will be a key contributor to MResult's mission of empowering our clients with data-driven insights and innovative digital solutions. Each day brings exciting challenges and growth opportunities.

Roles and Responsibilities
Evaluate and implement solutions to meet business requirements, ensuring consistent usage and adherence to data management best practices.
Collaborate with product owners to prioritize features and manage technical requirements based on business needs, new technologies, and known issues.
Develop application design and documentation for leadership teams.
Assist in defining the vision for the shared data model, including sourcing, transformation, and loading approaches.
Manage daily operations of the team, ensuring on-time delivery of milestones.
Be accountable for end-to-end delivery of program outcomes within budget, aligning with relevant business units.
Foster collaboration with internal and external stakeholders, including software vendors and data providers.
Work independently with minimal supervision and be capable of making recommendations.
Demonstrate a solid ability to tell a story with simple, clear views of complex datasets.
Deliver data reliability, efficiency, and best-in-class data governance, ensuring security and compliance.
Be an integral part of developing a best-in-class solution for the GAV organization.
Build dashboard and reporting proofs of concept as needed; develop reporting and analysis templates and tools.
Work in close collaboration with business teams throughout MASPA (Market Access Strategy, Pricing and Analytics) to determine tool functionality/configuration and data requirements, ensuring the analytic capability supports the most current business needs.
Partner with the Digital Client Partners to align on priorities, processes, and governance, and ensure experimentation to activate innovation and pipeline value.

Must-Have Qualifications
Bachelor's degree in computer science, software engineering, or an engineering-related area.
5+ years of relevant experience emphasizing data modelling, development, or systems engineering.
1+ years with a data visualization tool (e.g., Tableau, Power BI).
2+ years of experience in any number of the following tools, languages, and databases: MySQL, SQL, Aurora DB, Redshift, Snowflake.
Demonstrated capabilities in integrating and analysing heterogeneous datasets; ability to identify trends and outliers and find patterns.
Demonstrated expertise working in matrixed, cross-functional teams and influencing without authority.
Proven experience and demonstrated skills with AWS services, Tableau, Airflow, Python, and Dataiku.
Must be experienced with DevSecOps tools such as JIRA and GitHub.
Experience in database design tools.
Deep knowledge of Agile methodologies and SDLC processes.
Excellent written, interpersonal, and oral communication skills; able to communicate and liaise broadly across functions and the global organization.
Strong analytical, critical thinking, and troubleshooting skills.
Ambition to learn and utilize emerging technologies while working in a stimulating team environment.

Nice-to-Have
Advanced degree in Computer Engineering, Computer Science, Information Systems, or a related discipline.
Knowledge of GenAI and LLM frameworks (OpenAI, AWS).
US Market Access functional knowledge and data literacy.
Statistical analysis to understand and improve possible limitations in models.
Experience in AI/ML frameworks.
Pytest and CI/CD tools.
Experience in UI/UX design.
Experience in solution architecture & product engineering.

Manage, Master, and Maximize with MResult
MResult is an equal-opportunity employer committed to building an inclusive environment free of discrimination and harassment. Take the next step in your career with MResult, where your ideas help shape the future.
Posted 2 weeks ago
8.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation, and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities

Job Description
We are seeking skilled and dynamic Data Engineers specializing in AWS, Azure, Databricks, and GCP. The ideal candidate will have a strong background in data engineering, with a focus on data ingestion, transformation, and warehousing. They should also possess excellent knowledge of PySpark or Spark, and a proven ability to optimize performance in Spark job executions.

Key Responsibilities
Design, build, and maintain scalable data pipelines for a variety of cloud platforms including AWS, Azure, Databricks, and GCP.
Implement data ingestion and transformation processes to facilitate efficient data warehousing.
Utilize cloud services to enhance data processing capabilities:
AWS: Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, SNS.
Azure: Data Factory, Synapse Analytics, Functions, Cosmos DB, Event Grid, Logic Apps, Service Bus.
GCP: Dataflow, BigQuery, DataProc, Cloud Functions, Bigtable, Pub/Sub, Data Fusion.
Optimize Spark job performance to ensure high efficiency and reliability.
Stay proactive in learning and implementing new technologies to improve data processing frameworks.
Collaborate with cross-functional teams to deliver robust data solutions.
Work on Spark Streaming for real-time data processing as necessary.

Qualifications
8-11 years of experience in data engineering with a strong focus on cloud environments.
Proficiency in PySpark or Spark is mandatory.
Proven experience with data ingestion, transformation, and data warehousing.
In-depth knowledge and hands-on experience with cloud services (AWS/Azure/GCP).
Demonstrated ability in performance optimization of Spark jobs.
Strong problem-solving skills and the ability to work independently as well as in a team.
Cloud certification (AWS, Azure, or GCP) is a plus.
Familiarity with Spark Streaming is a bonus.

Mandatory Skill Sets
Python, PySpark, SQL with (AWS or Azure or GCP)

Preferred Skill Sets
Python, PySpark, SQL with (AWS or Azure or GCP)

Years of Experience Required
8-11 years

Education Qualification
BE/BTech, ME/MTech, MBA, MCA

Degrees/Field of Study Required: Bachelor of Engineering

Required Skills
Natural Language Processing (NLP)

Optional Skills
Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Coaching and Feedback, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment {+ 21 more}

Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Posted 2 weeks ago
3.0 years
0 Lacs
Greater Hyderabad Area
On-site
Description
The AOP team within Amazon Transportation is looking for an innovative, hands-on, and customer-obsessed Business Intelligence Engineer for its Analytics team. The candidate must be detail-oriented, with superior verbal and written communication skills, strong organizational skills, and excellent technical skills, and should be able to juggle multiple tasks at once. The ideal candidate must be able to identify problems before they happen and implement solutions that detect and prevent outages. The candidate must be able to accurately prioritize projects, make sound judgments, work to improve the customer experience, and get the right things done. This job requires you to constantly hit the ground running and to learn quickly. Primary responsibilities include defining the problem and building analytical frameworks to help operations streamline processes, identifying gaps in existing processes by analyzing data and liaising with the relevant team(s) to plug them, and analyzing data and metrics and sharing updates with internal teams.

Key job responsibilities
Apply multi-domain/process expertise in day-to-day activities and own the end-to-end roadmap.
Translate complex or ambiguous business problem statements into analysis requirements and maintain a high bar throughout execution.
Define the analytical approach; review and vet it with stakeholders.
Proactively and independently work with stakeholders to construct use cases and associated standardized outputs.
Scale data processes and reports; write queries that clients can update themselves; lead work with data engineering for full-scale automation.
Have a working knowledge of the data available or needed by the wider business for more complex or comparative analysis.
Work with a variety of data sources, and pull data using efficient query development that requires less post-processing (e.g., window functions, virt usage); a small worked example follows this posting.
When needed, pull data from multiple similar sources to triangulate on data fidelity.
Actively manage the timeline and deliverables of projects, focusing on interactions in the team.
Provide program communications to stakeholders.
Communicate roadblocks to stakeholders and propose solutions.
Represent the team on medium-size analytical projects in your own organization and effectively communicate across teams.

A day in the life
Solve ambiguous analyses with less well-defined inputs and outputs; drive to the heart of the problem and identify root causes.
Have the capability to handle large data sets in analysis through the use of additional tools.
Derive recommendations from analysis that significantly impact a department, create new processes, or change existing processes.
Understand the basics of test and control comparison; may provide insights through basic statistical measures such as hypothesis testing.
Identify and implement optimal communication mechanisms based on the data set and the stakeholders involved.
Communicate complex analytical insights and business implications effectively.

About the team
The AOP (Analytics Operations and Programs) team's mission is to standardize BI and analytics capabilities and to reduce repeated analytics/reporting/BI workload for operations across the IN, AU, BR, MX, SG, AE, EG, and SA marketplaces. AOP is responsible for providing visibility into operations performance and implementing programs to improve network efficiency and defect reduction. The team has a diverse mix of strong engineers, analysts, and scientists who champion customer obsession.
We enable operations to make data-driven decisions through developing near-real-time dashboards, self-serve dive-deep capabilities, and advanced analytics capabilities. We identify and implement data-driven metric improvement programs in collaboration (co-owning) with Operations teams.

Basic Qualifications
3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools.
Experience with data modeling, warehousing, and building ETL pipelines.
Experience in statistical analysis packages such as R, SAS, and Matlab.
Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling.

Preferred Qualifications
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift.
Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company: ASSPL - Karnataka - B56
Job ID: A2877941
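As a small worked example of the "efficient query, less post-processing" point above, the window-function query below picks the latest record per key entirely in SQL, so nothing needs to be deduplicated client-side. SQLite stands in for Redshift here so the snippet runs anywhere; the table and column names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE shipments (shipment_id INT, leg INT, status TEXT, updated_at TEXT);
INSERT INTO shipments VALUES
  (1, 1, 'created',   '2024-01-01'),
  (1, 2, 'delivered', '2024-01-03'),
  (2, 1, 'created',   '2024-01-02');
""")

LATEST_STATUS = """
SELECT shipment_id, status, updated_at
FROM (
    SELECT s.*,
           ROW_NUMBER() OVER (PARTITION BY shipment_id
                              ORDER BY updated_at DESC) AS rn
    FROM shipments s
) AS t
WHERE rn = 1
"""

for row in conn.execute(LATEST_STATUS):
    print(row)  # latest status per shipment, no client-side dedup needed
```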
Posted 2 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description

Organization Overview
In Kindle Direct Publishing Risk Ops, authors are at the heart of what we do. It's that simple. We have a great team that thrives on innovation and is passionate about creating an exceptional customer experience. If you want to delight customers and solve problems through data, the Business Intelligence Engineer (BIE) role is the job for you!

Position Overview
As a Business Intelligence Engineer, you will lead the charge in developing data modeling strategies and design decisions. You will work with a wide range of data technologies (e.g., Redshift, Lambda, and Tableau) and stay up to date on emerging technologies, investigating and implementing them where appropriate. This role requires an individual with strong analytical abilities, deep knowledge of business intelligence solutions, and the ability to work with technology and business teams. The successful candidate will have a passion for data and analytics, be a self-starter comfortable with ambiguity, have strong attention to detail, be able to work in a fast-paced and entrepreneurial environment, and be driven by a desire to innovate Amazon's approach to this space.

The key ways this role makes a difference in the Risk Ops organization are:
Develop complex queries for ad hoc requests and projects, as well as ongoing reporting.
Design, develop, and maintain scalable, automated, user-friendly systems, reports, dashboards, etc. that support our analytical and business needs.
Design, implement, and support key datasets that provide structured and timely access to actionable business information addressing stakeholder needs.
Interface directly with stakeholders, gathering requirements and owning automated end-to-end reporting solutions.
Partner with analysts, data engineers, business intelligence engineers, and software development engineers across the Books org and Amazon to produce complete data solutions.
Simplify and automate reporting, audits, and other data-driven activities; build solutions with maximum scale and self-service ability for stakeholders.
Proficiency with SQL queries to retrieve and analyze data.
Learn and understand a broad range of Amazon's data resources and know how, when, and which to use.

Basic Qualifications
3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools.
Experience with data modeling, warehousing, and building ETL pipelines.
Experience writing complex SQL queries.
Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling.

Preferred Qualifications
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift.
Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company: ADCI MAA 15 SEZ - K20
Job ID: A2937850
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
The Amazon.in ReCommerce team is seeking a talented, self-driven and experienced Business Intelligence Engineer to lead analytics for its business. This pivotal role provides you the opportunity to work with exceptionally innovative and talented business, product and tech teams spread across geographies. You should be passionate about designing, building and deploying mid-size to large BI solutions for difficult and/or loosely structured problems. You will build complex data models to provide data-driven insights for high-impact decisions and to highlight new opportunities.
Key job responsibilities
Interface with business customers, gathering requirements and delivering end-to-end BI solutions, which includes defining success metrics, complex analytical deep dives, data modeling, and building scalable automation with reports and dashboards.
Partner with Tech, Product and Program teams during the conceptualization stage and provide valuable inputs to build robust data aggregation platforms.
Support leadership reviews, business reviews and 1/3/5-year planning exercises for the group; manage sprint planning for basic/advanced analysis requests across stakeholders.
Interface with multiple technology and partner teams to extract, transform, and load data from a wide variety of data sources, using a programming and/or scripting language to process data for analysis and modeling.
Evolve organization-wide self-service platforms.
The candidate should have a basic understanding of a scripting language, know how to model data and design a data pipeline, and be able to apply basic statistical methods (e.g. regression) to difficult business problems (see the sketch below).
A day in the life
The ideal candidate relishes working with large volumes of data, enjoys the challenge of highly complex business contexts and, above all else, is passionate about data and analytics. The candidate is an expert with business intelligence tools and passionately partners with multiple tech and business teams to identify strategic opportunities for data-backed insights to drive value creation. An effective communicator, the candidate crisply translates analysis results into executive-facing business terms. The candidate works aptly with internal and external teams to push projects across the finish line. The candidate is a self-starter, comfortable with ambiguity, able to think big (while paying careful attention to detail), and enjoys working in a fast-paced team.
Basic Qualifications
3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience with data modeling, warehousing and building ETL pipelines
Experience writing complex SQL queries
Experience with statistical analysis packages such as R, SAS and MATLAB
Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling
Preferred Qualifications
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ASSPL - Karnataka
Job ID: A2957845
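To ground the "basic statistical methods (e.g. regression)" expectation, here is a toy sketch fitting a linear model of weekly units sold against price and a promotion flag. The data is synthetic; nothing here reflects actual Amazon datasets.

```python
# Toy regression sketch on synthetic data: units ~ price + promo flag.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
price = rng.uniform(5, 20, size=200)          # unit price
promo = rng.integers(0, 2, size=200)          # 1 if item was on promotion
units = 400 - 12 * price + 60 * promo + rng.normal(0, 15, size=200)

X = np.column_stack([price, promo])
model = LinearRegression().fit(X, units)

# Coefficients give a first-cut read on price sensitivity and promo lift.
print("price coefficient (units per unit price):", round(model.coef_[0], 2))
print("promo lift (units):", round(model.coef_[1], 2))
```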
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Role Title - Team Lead and Lead Developer – Backend and Database (Node)
Role Type - Full time
Role Reports to - Chief Technology Officer
Category - Regular / Fixed Term
Job location - 8th floor, E Block, IITM Research Park, Taramani
Job Overview
We're seeking an experienced Senior Backend and Database Developer to lead our backend team. The ideal candidate will combine deep expertise in backend and database development with strong process optimization skills and innovative thinking to drive team efficiency and product quality.
Job Specifications
Educational Qualifications - Any UG/PG graduates
Experience - 5+ years
Key Job Responsibilities
Software architecture design
Architect and oversee development of the backend in Node
Familiarity with MVC and design patterns, and a strong grasp of data structures
Basic database theory – ACID vs eventually consistent, OLTP vs OLAP
Different types of databases - relational stores, K/V stores, text stores, graph DBs, vector DBs, time series DBs
Database design & structures
Experience with data modeling concepts including normalization, normal forms, star schema (management and evolution), and dimensional modeling
Expertise in SQL DBs (MySQL, PostgreSQL) and NoSQL DBs (MongoDB, Redis)
Data pipeline design based on operational principles: dealing with failures, restarts, reruns, pipeline changes, and various file storage formats
Backend & API frameworks & other services
Develop and maintain RESTful, JSON-RPC and other APIs for various applications
Understanding of backend JS frameworks such as Express.js and NestJS, and documentation tools like Postman and Swagger
Experience with Webhooks, callbacks and other event-driven systems, and third-party solution integrations (Firebase, Google Maps, Amplify and others)
QA and testing
Automation testing and tooling knowledge for application functionality validation and QA
Experience with testing routines and fixes with various testing tools (JMeter, Artillery or others)
Load balancers, caching and serving
Experience with event serving (Apache Kafka and others), caching and processing (Redis, Apache Spark or other frameworks) and scaling (Kubernetes and other systems)
Experience with orchestrators like Airflow for large data workloads, and scripting and automation for various purposes including scheduling and logging
Production, Deployment & Monitoring
Experience with CI/CD pipelines using tools like Jenkins/CircleCI, and Docker for containerization
Experience in deployment and monitoring of apps on cloud platforms (e.g., AWS, Azure) and bare-metal configurations
Documentation, version control and ticketing
Version control with Git, and ticketing bugs and features with tools like Jira or Confluence
Backend documentation and referencing with tools like Swagger and Postman
Experience in creating ERDs for various data types and models, and documentation of evolving models
Behavioral competencies
Attention to detail - Ability to maintain accuracy and precision in code, data models, and documentation, ensuring compliance with engineering standards and best practices.
Integrity and Ethics - Commitment to upholding ethical standards, confidentiality, and honesty in professional practices and interactions with stakeholders.
Time management - Effective prioritization of tasks, efficient allocation of resources, and timely completion of assignments to meet sprint deadlines and achieve goals.
Adaptability and Flexibility - Capacity to adapt to changing business environments, new technologies, and evolving engineering standards, while remaining flexible in response to unexpected challenges.
Communication & collaboration - Experience presenting to stakeholders and executive teams; ability to bridge technical and non-technical communication; excellence in written documentation and process guidelines for working with frontend teams.
Leadership competencies
Team leadership and team building - Lead and mentor a backend and database development team, including junior developers, and ensure good coding standards. Follow Agile methodology and conduct Scrum meetings for sync-ups.
Strategic Thinking - Ability to develop and implement long-term goals and strategies aligned with the organization's vision; ability to adopt new tech and manage tech debt to bring the team up to speed with client requirements.
Decision-Making - Capable of making informed and effective decisions, considering both short-term and long-term impacts; insight into resource allocation and sprint building for various projects.
Team Building - Ability to foster a collaborative and inclusive team environment, promoting trust and cooperation among team members.
Code reviews - Troubleshooting, weekly code reviews, feature documentation and versioning, and standards improvement.
Improving team efficiency - Research and integrate AI-powered development tools (GitHub Copilot, Amazon CodeWhisperer).
Added advantage points
AI/ML applications - Experience in AI/ML application backend workflows (e.g. MLflow) and serving the models.
Data processing & maintenance - Familiarity with at least one data processing platform (e.g., Spark, Flink, Beam/Google Dataflow, AWS Batch); experience with Elasticsearch and other client-side data processing frameworks; understanding of data management and analytics, including metadata catalogs (e.g., AWS Glue) and data warehousing (e.g., AWS Redshift).
Data governance - Quality control, policies around data duplication and definitions, and company-wide processes around security and privacy.
Interested candidates can share updated resumes to the ID below.
Contact Person - Janani Santhosh, Senior HR Executive
Email Id - careers@plenome.com
Posted 2 weeks ago
2.0 years
0 Lacs
Hyderabad
On-site
- 2+ years of experience processing data with a massively parallel technology (such as Redshift, Teradata, Netezza, Spark or a Hadoop-based big data solution)
- 2+ years of experience with relational database technology (such as Redshift, Oracle, MySQL or MS SQL)
- 2+ years of experience developing and operating large-scale data structures for business intelligence analytics (using ETL/ELT processes)
- 5+ years of data engineering experience
- Experience managing a data or BI team
- Experience communicating to senior management and customers verbally and in writing
- Experience leading and influencing the data or BI strategy of your team or organization
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
Do you pioneer? Do you enjoy solving complex problems in building and analyzing large datasets? Do you enjoy focusing first on your customer and working backwards? The Amazon transportation controllership team is looking for an experienced Data Engineering Manager with experience in architecting large/complex data systems and a strong record of achieving results, scoping and delivering large projects end-to-end. You will be the key driver in building out our vision for scalable data systems to support the ever-growing Amazon global transportation network businesses.
Key job responsibilities
As a Data Engineering Manager in Transportation Controllership, you will be at the forefront of managing large projects, providing vision to the team, and designing and planning large financial data systems that will allow our businesses to scale world-wide. You should have deep expertise in the database design, management, and business use of extremely large datasets, including using AWS technologies such as Redshift, S3, EC2, Data Pipeline and other big data technologies. Above all you should be passionate about warehousing large datasets together to answer business questions and drive change. You should have excellent business acumen and communication skills to work with multiple business teams, and be comfortable communicating with senior leadership. Due to the breadth of the areas of business, you will coordinate across many internal and external teams, and provide visibility to the senior leaders of the company with your strong written and oral communication skills. We need individuals with a demonstrated ability to learn quickly, think big, execute both strategically and tactically, and motivate and mentor their team to deliver business value to our customers on time.
A day in the life
On a daily basis you will:
• manage and help grow a team of high-performing engineers
• understand new business requirements and architect data engineering solutions for them
• plan your team's priorities, working with relevant internal/external stakeholders, including sprint planning
• resolve impediments faced by the team
• update leadership as needed
• use judgement in making the right tactical and strategic decisions for the team and organization
• monitor the health of the databases and ingestion pipelines
Preferred qualifications:
Experience with big data technologies such as Hadoop, Hive, Spark, EMR
Experience with AWS tools and technologies (Redshift, S3, EC2)
Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 2 weeks ago
4.0 years
2 - 6 Lacs
Gurgaon
On-site
Key Responsibilities:
Understand and analyze ETL requirements, data mapping documents, and business rules.
Design, develop, and execute test cases, test scripts, and test plans for ETL processes.
Perform data validation, source-to-target data mapping, and data integrity checks (see the validation sketch below).
Write complex SQL queries for data verification and backend testing.
Conduct regression, integration, and system testing for ETL pipelines and data warehouse environments.
Work with BI tools to validate reports and dashboards where applicable.
Collaborate with developers, business analysts, and data engineers to ensure testing coverage and resolve issues.
Document defects and test results, and provide detailed bug reports and testing status.
Required Skills and Experience:
4+ years of experience in ETL testing or data warehouse testing.
Strong proficiency in SQL for data validation and analysis.
Hands-on experience with ETL tools like Informatica, Talend, SSIS, or similar.
Knowledge of data warehousing concepts, star/snowflake schemas, and data modeling.
Experience with test management tools (e.g., JIRA, HP ALM, TestRail).
Understanding of automation in data testing (a plus, e.g., Python, Selenium with databases).
Familiarity with cloud platforms (e.g., AWS Redshift, Google BigQuery, Azure Data Factory) is a plus.
Job Type: Full-time
Work Location: In person
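As a sketch of the source-to-target validation described above, the snippet below compares row counts and a column checksum between a staging table and its warehouse target. Connection strings, table names, and the amount column are placeholders, not any specific employer's schema.

```python
# Source-to-target validation sketch: compare row counts and a checksum
# between staging and target. All names are placeholders.
import sqlalchemy

source = sqlalchemy.create_engine("postgresql://user:pw@source-db:5432/erp")
target = sqlalchemy.create_engine("postgresql://user:pw@dwh:5439/analytics")

CHECKS = {
    "row_count": "SELECT COUNT(*) FROM {table}",
    "amount_sum": "SELECT COALESCE(SUM(amount), 0) FROM {table}",
}

def run_checks(engine, table):
    # Run each check query and return its scalar result.
    with engine.connect() as conn:
        return {name: conn.execute(sqlalchemy.text(sql.format(table=table))).scalar()
                for name, sql in CHECKS.items()}

src = run_checks(source, "staging.sales")
tgt = run_checks(target, "mart.fact_sales")

for name in CHECKS:
    status = "PASS" if src[name] == tgt[name] else "FAIL"
    print(f"{name}: source={src[name]} target={tgt[name]} -> {status}")
```

In practice each mapping document would contribute its own checks (counts, sums, distinct keys), and failures would be logged as defects.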
Posted 2 weeks ago
10.0 years
0 Lacs
Gurgaon
On-site
Who We Are
BCG partners with clients from the private, public, and not-for-profit sectors in all regions of the globe to identify their highest-value opportunities, address their most critical challenges, and transform their enterprises. We work with the most innovative companies globally, many of which rank among the world’s 500 largest corporations. Our global presence makes us one of only a few firms that can deliver a truly unified team for our clients, no matter where they are located. Our ~22,000 employees, located in 90+ offices in 50+ countries, enable us to work in collaboration with our clients, to tailor our solutions to each organization. We value and utilize the unique talents that each of these individuals brings to BCG; the wide variety of backgrounds of our consultants, specialists, and internal staff reflects the importance we place on diversity. Our employees hold degrees across a full range of disciplines, from business administration and economics to biochemistry, engineering, computer science, psychology, medicine, and law.
What You'll Do
BCG X develops innovative and AI-driven solutions for the Fortune 500 in their highest-value use cases. The BCG X Software group productizes repeat use cases, creating both reusable components and single-tenant and multi-tenant SaaS offerings that are commercialized through the BCG consulting business. BCG X is currently looking for a Software Engineering Architect to drive impact and change for the firm's engineering and analytics engine and bring new products to BCG clients globally. This will include:
Serving as a leader within BCG X, and specifically the KEY Impact Management by BCG X Tribe (transformation and post-merger-integration related software and data products), overseeing the delivery of high-quality software: driving the technical roadmap and architectural decisions, and mentoring engineers
Influencing and serving as a key decision maker in BCG X technology selection & strategy
Taking an active, hands-on role: building intelligent analytical products to solve problems, writing elegant code, and iterating quickly
Overall responsibility for the engineering and architecture alignment of all solutions delivered within the tribe
Responsibility for the technology roadmap of existing and new components delivered
Architecting and implementing backend and frontend solutions primarily using .NET, C#, MS SQL Server, Angular, and other technologies best suited to the goals, including open source (i.e. Node, Django, Flask, Python) where needed
What You'll Bring
10+ years of technology and software engineering experience in a complex and fast-paced business environment (ideally an agile environment) with exposure to a variety of technologies and solutions, including at least 5 years' experience in an Architect role.
Experience with a wide range of application and data architectures, platforms and tools including: Service-Oriented Architecture, Clean Architecture, Software as a Service, Web Services, object-oriented languages (like C# or Java), SQL databases (like Oracle or SQL Server), relational and non-relational databases, hands-on experience with analytics and reporting tools, data science experience, etc.
Thoroughly up to date in technology:
- Modern cloud architectures including AWS, Azure, GCP, Kubernetes
- Very strong skills in .NET, C#, MS SQL Server and Angular technologies
- Open-source stacks including NodeJs, React, Angular, Flask are good to have
- CI/CD / DevSecOps / GitOps toolchains and development approaches
- Knowledge of machine learning & AI frameworks
- Big data pipelines and systems: Spark, Snowflake, Kafka, Redshift, Synapse, Airflow
At least a Bachelor's degree; Master's degree and/or MBA preferred
Team player with excellent work habits and interpersonal skills
Deep care about product quality, reliability, and scalability
Passion for the people and culture side of engineering teams
Outstanding written and oral communication skills
The ability to travel, depending on project requirements.
Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer. Click here for more information on E-Verify.
Posted 2 weeks ago
5.0 years
0 - 0 Lacs
India
On-site
Company Introduction:
A dynamic company headquartered in Australia. Multi-award winner, recognized for excellence in the telecommunications industry. Financial Times Fastest-Growing Company APAC 2023. AFR (Australian Financial Review) Fast 100 Company 2022. Great promotion opportunities that acknowledge and reward your hard work. Young, energetic and innovative team; caring and supportive work environment.
About You:
We are seeking an experienced and highly skilled Data Warehouse Engineer with an energetic 'can do' attitude to join our data and analytics team. The ideal candidate will have over 5 years of hands-on experience in designing, building, and maintaining scalable data pipelines and reporting infrastructure. You will be responsible for managing our data warehouse, automating ETL workflows, building dashboards, and enabling data-driven decision-making across the organization.
Your responsibilities will include, but are not limited to:
Design, implement, and maintain robust, scalable data pipelines using Apache NiFi, Airflow, or similar ETL tools.
Develop and manage efficient data ingestion and transformation workflows, including web data crawling using Python (see the crawling sketch below).
Create, optimize, and maintain complex SQL queries to support business reporting needs.
Build and manage interactive dashboards and visualizations using Apache Superset (preferred), Power BI, or Tableau.
Collaborate with business stakeholders and analysts to gather requirements, define KPIs, and deliver meaningful data insights.
Ensure data accuracy, completeness, and consistency through rigorous quality assurance processes.
Maintain and optimize the performance of the data warehouse, supporting high availability and fast query response times.
Document technical processes and data workflows for maintainability and scalability.
To be successful in this role you will ideally possess:
5+ years of experience in data engineering, business intelligence, or a similar role.
Strong proficiency in Python, particularly for data crawling, parsing, and automation tasks.
Expertise in SQL (including complex joins, CTEs, window functions) for reporting and analytics.
Hands-on experience with Apache Superset (preferred) or equivalent BI tools like Power BI or Tableau.
Proficiency with ETL tools such as Apache NiFi, Airflow, or similar data pipeline frameworks.
Experience working with cloud-based data warehouse platforms (e.g., Amazon Redshift, Snowflake, BigQuery, or PostgreSQL).
Strong understanding of data modeling, warehousing concepts, and performance optimization.
Ability to work independently and collaboratively in a fast-paced environment.
Preferred Qualifications:
Experience with version control (e.g., Git) and CI/CD processes for data workflows.
Familiarity with REST APIs and web scraping best practices.
Knowledge of data governance, privacy, and security best practices.
Background in the telecommunications or ISP industry is a plus.
Job Types: Full-time, Permanent
Pay: ₹40,000.00 - ₹70,000.00 per month
Benefits: Leave encashment, paid sick time, Provident Fund
Schedule: Day shift, Monday to Friday
Supplemental Pay: Overtime pay, yearly bonus
Work Location: In person
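As a hedged sketch of the Python web-crawling ingestion the posting mentions, the snippet below fetches a page, parses an HTML table, and stages rows for a downstream load. The URL and markup are hypothetical.

```python
# Crawling sketch: fetch a page, parse a table, stage rows for the warehouse.
# The URL and the table markup are hypothetical.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/plans", timeout=30)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = []
for tr in soup.select("table#plans tr")[1:]:      # skip the header row
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if len(cells) == 3:
        plan, speed, price = cells
        rows.append({"plan": plan, "speed": speed, "price": price})

# A NiFi or Airflow task would load these staged rows into the warehouse;
# here we just show the extracted records.
print(rows[:5])
```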
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
Analytics Engineer
We are seeking a talented, motivated and self-driven professional to join the HH Digital, Data & Analytics (HHDDA) organization and play an active role in the Human Health transformation journey to become the premier "Data First" commercial biopharma organization.
As an Analytics Engineer, you will be part of the HHDDA Commercial Data Solutions team, providing technical/data expertise in the development of analytical data products to enable data science & analytics use cases. In this role, you will create and maintain data assets/domains used in the commercial/marketing analytics space: developing best-in-class data pipelines and products, and working closely with data product owners to translate data product requirements and user stories into development activities throughout all phases of design, planning, execution, testing, deployment and delivery.
Your specific responsibilities will include:
Hands-on development of last-mile data products using the most up-to-date technologies and software/data/DevOps engineering practices
Enabling data science & analytics teams to drive data modeling and feature engineering activities aligned with business questions, utilizing datasets in an optimal way
Developing deep domain expertise and business acumen to ensure that all specificities and pitfalls of data sources are accounted for
Building data products based on automated data models, aligned with use case requirements, and advising data scientists, analysts and visualization developers on how to use these data models
Developing analytical data products for reusability, governance and compliance by design
Aligning with organization strategy and implementing a semantic layer for analytics data products
Supporting data stewards and other engineers in maintaining data catalogs, data quality measures and governance frameworks
Education
B.Tech / B.S., M.Tech / M.S. or PhD in Engineering, Computer Science, Pharmaceuticals, Healthcare, Data Science, Business, or a related field
Required Experience
5+ years of relevant work experience in the pharmaceutical/life sciences industry, with demonstrated hands-on experience analyzing, modeling and extracting insights from commercial/marketing analytics datasets (specifically, real-world datasets)
High proficiency in SQL, Python and AWS
Good understanding and comprehension of requirements provided by the Data Product Owner and Lead Analytics Engineer
Experience creating/adapting data models to meet requirements from Marketing, Data Science and Visualization stakeholders
Experience with feature engineering
Experience with cloud-based (AWS / GCP / Azure) data management platforms and typical storage/compute services (Databricks, Snowflake, Redshift, etc.)
Experience with modern data stack tools such as Matillion, Starburst, ThoughtSpot and low-code tools (e.g. Dataiku)
Excellent interpersonal and communication skills, with the ability to quickly establish productive working relationships with a variety of stakeholders
Experience in analytics use cases of pharmaceutical products and vaccines
Experience in market analytics and related use cases
Preferred Experience
Experience in analytics use cases focused on informing marketing strategies and commercial execution of pharmaceutical products and vaccines
Experience with Agile ways of working, leading or working as part of scrum teams
Certifications in AWS and/or modern data technologies
Knowledge of the commercial/marketing analytics data landscape and key data sources/vendors
Experience in building data models for data science and visualization/reporting products, in collaboration with data scientists, report developers and business stakeholders
Experience with data visualization technologies (e.g., Power BI)
Our Human Health Division maintains a "patient first, profits later" ideology. The organization is comprised of sales, marketing, market access, digital analytics and commercial professionals who are passionate about their role in bringing our medicines to our customers worldwide.
We are proud to be a company that embraces the value of bringing diverse, talented, and committed people together. The fastest way to breakthrough innovation is when diverse ideas come together in an inclusive environment. We encourage our colleagues to respectfully challenge one another's thinking and approach problems collectively. We are an equal opportunity employer, committed to fostering an inclusive and diverse workplace.
Current Employees apply HERE
Current Contingent Workers apply HERE
Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Data Management, Data Modeling, Data Visualization, Measurement Analysis, Stakeholder Relationship Management, Waterfall Model
Preferred Skills:
Job Posting End Date: 07/31/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R335374
Posted 2 weeks ago
5.0 years
1 - 10 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
Support the full data engineering lifecycle including research, proof of concepts, design, development, testing, deployment, and maintenance of data management solutions
Utilize knowledge of various data management technologies to drive data engineering projects
Work with Operations and Product Development staff to support applications/processes that facilitate the effective and efficient implementation/migration of new clients' healthcare data through the Optum Impact Product Suite
Lead data acquisition efforts to gather data from various structured or semi-structured source systems of record to hydrate the client data warehouse and power analytics across numerous health care domains
Leverage a combination of ETL/ELT methodologies to pull complex relational and dimensional data to support loading DataMarts and reporting aggregates
Eliminate unwarranted complexity and unneeded interdependencies
Detect data quality issues, identify root causes, implement fixes, and manage data audits to mitigate data challenges (see the audit sketch below)
Implement, modify, and maintain data integration efforts that improve data efficiency, reliability, and value
Leverage and facilitate the evolution of best practices for data acquisition, transformation, storage, and aggregation that solve current challenges and reduce the risk of future challenges
Effectively create data transformations that address business requirements and other constraints
Partner with the broader analytics organization to make recommendations for changes to data systems and the architecture of data platforms
Prepare high-level design documents and detailed technical design documents with best practices to enable efficient data ingestion, transformation and data movement
Leverage DevOps tools to enable code versioning and code deployment
Leverage data pipeline monitoring tools to detect data integrity issues before they result in user-visible outages or data quality issues
Leverage processes and diagnostic tools to troubleshoot, maintain and optimize solutions and respond to customer and production issues
Continuously support technical debt reduction, process transformation, and overall optimization
Leverage and contribute to the evolution of standards for high-quality documentation of data definitions, transformations, and processes to ensure data transparency, governance, and security
Ensure that all solutions meet the business needs and requirements for security, scalability, and reliability
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications:
Bachelor's degree (preferably in information technology, engineering, math, computer science, analytics, or another related field)
5+ years of combined experience in data engineering, ingestion, normalization, transformation, aggregation, structuring, and storage
5+ years of combined experience working with industry-standard relational, dimensional or non-relational data storage systems
5+ years of experience designing ETL/ELT solutions using tools like Informatica, DataStage, SSIS, PL/SQL, T-SQL, etc.
5+ years of experience managing data assets using SQL, Python, Scala, VB.NET or another similar querying/coding language
3+ years of experience working with healthcare data or data to support healthcare organizations
Preferred Qualifications:
5+ years of experience creating Source to Target Mappings and ETL designs for integration of new/modified data streams into the data warehouse/data marts
Experience in Unix, PowerShell or other batch scripting languages
Experience supporting data pipelines that power analytical content within common reporting and business intelligence platforms (e.g. Power BI, Qlik, Tableau, MicroStrategy, etc.)
Experience supporting analytical capabilities inclusive of reporting, dashboards, extracts, BI tools, analytical web applications and other similar products
Experience contributing to cross-functional efforts with proven success in creating healthcare insights
Experience and credibility interacting with analytics and technology leadership teams
Depth of experience and a proven track record creating and maintaining sophisticated data frameworks for healthcare organizations
Exposure to Azure, AWS, or Google Cloud ecosystems
Exposure to Amazon Redshift, Amazon S3, Hadoop HDFS, Azure Blob, or similar big data storage and management components
Demonstrated desire to continuously learn and seek new options and approaches to business challenges
Willingness to leverage best practices, share knowledge, and improve the collective work of the team
Demonstrated ability to effectively communicate concepts verbally and in writing
Demonstrated awareness of when to appropriately escalate issues/risks
Demonstrated excellent communication skills, both written and verbal
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
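As an illustrative (not Optum-specific) example of the data-quality detection described above, the sketch below profiles an incoming extract for duplicates, nulls, and out-of-range dates before load. The file and column names are invented.

```python
# Data-quality audit sketch on an incoming extract. Names are invented.
import pandas as pd

df = pd.read_csv("claims_extract.csv", parse_dates=["service_date"])

issues = {
    "duplicate_claim_ids": int(df["claim_id"].duplicated().sum()),
    "null_member_ids": int(df["member_id"].isna().sum()),
    "future_service_dates": int((df["service_date"] > pd.Timestamp.today()).sum()),
}

for check, count in issues.items():
    print(f"{check}: {count}")

# A production pipeline would fail the load or quarantine bad rows when
# any count exceeds an agreed threshold, then log the audit results.
```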
Posted 2 weeks ago
6.0 years
8 - 10 Lacs
Noida
On-site
Data Sciences Assistant Manager
Full-time
Company Description
About Sopra Steria
Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion.
The world is how we shape it.
Job Description
BI Solutioning & Data Engineering
Design, build, and manage end-to-end Business Intelligence solutions, integrating structured and unstructured data from internal and external sources.
Architect and maintain scalable data pipelines using cloud-native services (e.g. AWS, Azure, GCP).
Implement ETL/ELT processes to ensure data quality, transformation, and availability for analytics and reporting.
Market Intelligence & Analytics Enablement
Support the Market Intelligence team by building dashboards, visualizations, and data models that reflect competitive, market, and customer insights.
Work with research analysts to convert qualitative insights into measurable datasets.
Drive the automation of insight delivery, enabling real-time or near real-time updates.
Visualization & Reporting
Design interactive dashboards and executive-level visual reports using tools such as Power BI or Tableau.
Maintain data storytelling standards to deliver clear, compelling narratives aligned with strategic objectives.
Stakeholder Collaboration
Act as a key liaison between business users, strategy teams, research analysts, and IT/cloud engineering.
Translate analytical and research needs into scalable, sustainable BI solutions.
Educate internal stakeholders on the capabilities of BI platforms and insight delivery pipelines.
Preferred: Cloud Infrastructure & Data Integration
Collaborate with cloud engineering teams to deploy BI tools and data lakes in a cloud environment.
Ensure the data warehousing architecture is aligned with market research and analytics needs.
Optimize data models and storage for scalability, performance, and security.
Total Experience Expected: 06-09 years
Qualifications
Must
Bachelor's/Master's degree in Computer Science, Data Science, Business Analytics, or a related technical field.
6+ years of experience in Business Intelligence, Data Engineering, or Cloud Data Analytics.
Proficiency in SQL, Python, or data wrangling languages.
Deep knowledge of BI tools like Power BI, Tableau, or QlikView.
Strong data modeling, ETL, and data governance capabilities.
Preferred
Solid understanding of cloud platforms (AWS, Azure, GCP), with hands-on experience in cloud-based data warehouses (e.g. Snowflake, Redshift, BigQuery).
Exposure to market intelligence, competitive analysis, or strategic analytics is highly desirable.
Excellent communication, stakeholder management, and visualization/storytelling skills.
Additional Information
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Spectral Consultants is hiring for a US-based management consulting firm.
Location: Gurugram, Pune
Work Mode: Hybrid
Experience: 6-10 years
What You’ll Do:
Lead end-to-end project delivery (design → deployment)
Architect solutions using AWS/Azure (EMR, Glue, Redshift, etc.)
Guide & mentor technical teams
Interface with senior stakeholders
Work in Agile environments
What You Bring:
5+ years in tech consulting or solution delivery
Strong skills in Python/Scala and cloud platforms (AWS/Azure)
Hands-on experience in data engineering & distributed systems
Excellent communication & leadership skills
Posted 2 weeks ago
The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with Redshift expertise can find a wealth of opportunities across industries throughout the country.
The average salary range for Redshift professionals in India varies with experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.
In the field of Redshift, a typical career path may include roles such as:
- Junior Developer
- Data Engineer
- Senior Data Engineer
- Tech Lead
- Data Architect
Apart from expertise in Redshift, proficiency in the following skills can be beneficial:
- SQL
- ETL Tools
- Data Modeling
- Cloud Computing (AWS)
- Python/R Programming
As demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!