6.0 - 10.0 years
0 Lacs
Haryana
On-site
You will be responsible for leading and managing the delivery of projects as well as achieving project and team goals. Your tasks will include building and supporting data ingestion and processing pipelines, designing and maintaining machine learning infrastructure, and leading client engagement on technical projects. You will define project scopes, track progress, and allocate work to the team. It will be essential to stay updated on big data technologies and conduct pilots to design scalable data architecture. Collaboration with software engineering teams to drive multi-functional projects to completion will also be a key aspect of your role.

To excel in this position, we expect you to have a minimum of 6 years of experience in data engineering, with at least 2 years in a leadership role. Experience working with global teams and remote clients is required. Hands-on experience in building data pipelines across various infrastructures, knowledge of statistical and machine learning techniques, and the ability to integrate machine learning into data pipelines are essential. Proficiency in advanced SQL, data warehousing concepts, and DataMart design is necessary. Strong familiarity with modern data platform components such as Spark and Python, as well as experience with data warehouses (e.g., Google BigQuery, Redshift, Snowflake) and data lakes (e.g., GCS, AWS S3), is expected. Experience in setting up and maintaining data pipelines with AWS Glue, Azure Data Factory, and Google Dataflow, along with relational SQL and NoSQL databases, is also required. Excellent problem-solving and communication skills are essential for this role.
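The pipeline-building work described above typically starts with an ingest-and-validate stage. A minimal plain-Python sketch of that kind of record validation; the field names (user_id, event_ts, amount) are invented for illustration, not from any real schema:

```python
from datetime import datetime

def ingest(rows):
    """Validate raw rows, splitting them into clean records and rejects.

    Hypothetical fields: user_id (int), event_ts (ISO timestamp), amount (float).
    """
    clean, rejects = [], []
    for row in rows:
        try:
            clean.append({
                "user_id": int(row["user_id"]),
                "event_ts": datetime.fromisoformat(row["event_ts"]),
                "amount": float(row["amount"]),
            })
        except (KeyError, ValueError):
            # Any missing field or failed type coercion sends the row to rejects.
            rejects.append(row)
    return clean, rejects

raw = [
    {"user_id": "1", "event_ts": "2024-05-01T10:00:00", "amount": "9.99"},
    {"user_id": "x", "event_ts": "2024-05-01T10:05:00", "amount": "1.50"},
]
clean, rejects = ingest(raw)
print(len(clean), len(rejects))  # 1 1
```

In a production setting the same split-and-quarantine pattern would run inside Glue, Data Factory, or Dataflow rather than a plain loop.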
Posted 2 weeks ago
9.0 - 13.0 years
0 Lacs
Haryana
On-site
You will be joining Research Partnership (part of Inizio Advisory) as a Dashboard Developer - Senior Manager based in Gurgaon, India. Research Partnership is a leading pharma market research and consulting agency with a global presence across London, Lyon, New York, Philadelphia, San Francisco, Singapore, and Delhi. The company's work focuses on making a difference to human health, celebrating progress through innovation, and putting people at the core of all activities.

As part of the Data Delivery & Dashboards Team within the Data Management & Delivery division, your primary responsibility will be to lead the design, development, and delivery of impactful dashboards and visualizations. You will lead a team of dashboard developers, ensure alignment with stakeholder expectations, and drive innovation in dashboard solutions. Collaboration with researchers, analysts, and business leaders globally will be essential to ensure that the visual outputs provide clarity, impact, and value.

Your key responsibilities will include developing interactive dashboards using tools such as Power BI or Tableau, managing and mentoring a team of dashboard developers, translating complex project requirements into scalable dashboards, collaborating with internal stakeholders to align outputs with business needs, ensuring data accuracy and security, and staying updated on BI and visualization trends to implement improvements.

To excel in this role, you should have extensive experience in BI/dashboard development and data engineering, along with significant experience in people management and team leadership. Strong engagement with senior stakeholders, a track record of delivering enterprise-grade dashboards, and a background in healthcare or market research are highly desirable qualifications.

The ideal candidate is a visionary thinker who can lead and inspire dashboard teams, possesses excellent communication and stakeholder management skills, has a deep understanding of data storytelling and visual best practices, and is a hands-on leader capable of driving innovation while ensuring delivery excellence.

Research Partnership offers a dynamic and supportive work environment that encourages continuous learning and innovation. The company provides comprehensive training and development opportunities for all employees, including international travel and collaboration, within a relaxed and friendly setting. Research Partnership is part of Inizio Advisory, a strategic advisor to pharmaceutical and life science companies, offering market research, insights, strategy, consulting, and commercial benchmarking services. Inizio Advisory aims to support clients at every stage of the product and patient journey, creating long-term value through sector-specific solutions and intelligence.

If you are passionate about the role but do not meet every job requirement, you are encouraged to apply, as Research Partnership values diversity, inclusion, and authenticity in the workplace. Your unique experience and perspective may be the perfect fit for this role or other opportunities within the company.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
Wipro Limited is a leading technology services and consulting company dedicated to developing innovative solutions that address clients' most complex digital transformation needs. With a holistic portfolio of consulting, design, engineering, and operations capabilities, we assist clients in achieving their boldest ambitions and creating future-ready, sustainable businesses. With a global presence of over 230,000 employees and business partners across 65 countries, we are committed to helping our customers, colleagues, and communities thrive in an ever-evolving world.

The purpose of the role is to interpret data effectively and transform it into actionable information such as reports, dashboards, and interactive visualizations that facilitate decision-making and ultimately improve business outcomes.

Key Responsibilities:
- Manage the technical scope of projects in alignment with requirements at all stages by gathering information from various sources, interpreting patterns and trends, developing record management processes, and building and maintaining client relationships.
- Provide sales data, proposals, data insights, and account reviews to the client base; identify areas for process efficiency and automation; set up and maintain automated data processes; and evaluate external services and tools to support data validation and cleansing.
- Produce and track key performance indicators, analyze data sets, liaise with internal and external clients to understand data content, design and conduct surveys, analyze complex data sets, and prepare reports using business analytics reporting tools.
- Create data dashboards, graphs, and visualizations to showcase business performance, provide sector and competitor benchmarking, mine and analyze large datasets, develop predictive models, and share insights with clients as required.

Performance Parameters:
- Analyze data sets and provide relevant information to clients.
- Number of automations implemented, on-time delivery, Customer Satisfaction Score (CSAT), zero customer escalations, data accuracy.

Mandatory Skills: Google BigQuery
Experience: 5-8 years

Join Wipro to be part of a modern, end-to-end digital transformation partner with ambitious goals. We seek individuals who are inspired by reinvention and are committed to evolving themselves, their careers, and their skills. At Wipro, we embrace change as part of our DNA and aim to empower our workforce to drive continuous innovation and growth. Be a part of a purpose-driven organization that encourages you to craft your own reinvention and pursue your ambitions. We welcome applications from individuals with disabilities.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
At Medtronic, you can embark on a life-long career dedicated to exploration and innovation, all while contributing to the cause of advancing healthcare access and equity for all. Your role will be pivotal in leading with purpose to break down barriers to innovation in a more connected and compassionate world.

As a PySpark Data Engineer at Medtronic's new Minimed India Hub, you will play a crucial part in designing, developing, and maintaining data pipelines using PySpark. Collaborating closely with data scientists, analysts, and other stakeholders, your responsibilities will revolve around ensuring the efficient processing and analysis of large datasets and managing complex transformations and aggregations. This opportunity allows you to make a significant impact within Medtronic's Diabetes business. With the announced intention to separate the Diabetes division to drive future growth and innovation, you will have the chance to operate with increased speed and agility. This move is expected to unlock potential and drive innovation to enhance the impact on patient care.

Key Responsibilities:
- Design, develop, and maintain scalable and efficient ETL pipelines using PySpark.
- Collaborate with data scientists and analysts to understand data requirements and deliver high-quality datasets.
- Implement data quality checks, ensure data integrity, and troubleshoot data pipeline issues.
- Stay updated with the latest trends and technologies in big data and distributed computing.

Required Knowledge and Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 4-5 years of experience in data engineering with a focus on PySpark.
- Proficiency in Python and Spark, with strong coding and debugging skills.
- Strong knowledge of SQL and experience with relational databases.
- Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Experience with data warehousing solutions like Redshift, Snowflake, Databricks, or Google BigQuery.
- Familiarity with data lake architectures, big data technologies, and data storage solutions.
- Excellent problem-solving skills and the ability to troubleshoot complex issues.
- Strong communication and collaboration skills.

Preferred Skills:
- Experience with Databricks and orchestration tools like Apache Airflow or AWS Step Functions.
- Knowledge of machine learning workflows and data security best practices.
- Familiarity with streaming data platforms, real-time data processing, and CI/CD pipelines.

Medtronic offers a competitive salary and flexible benefits package. The company values its employees and provides resources and compensation plans to support their growth at every career stage. This position is eligible for the Medtronic Incentive Plan (MIP).

About Medtronic: Medtronic is a global healthcare technology leader committed to addressing the most challenging health problems facing humanity. With a mission to alleviate pain, restore health, and extend life, the company unites a team of over 95,000 passionate individuals who work tirelessly to generate real solutions for real people through engineering and innovation.
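The data quality checks listed among the responsibilities reduce to simple, testable rules regardless of engine. A plain-Python sketch of a null-rate threshold check; a production pipeline would express the same logic with PySpark DataFrame operations, and the column names here are invented:

```python
def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def check_quality(rows, max_null_rate=0.1, required=("patient_id", "glucose")):
    """Return a list of failed checks; an empty list means the batch passes.

    Column names and the 10% threshold are illustrative assumptions.
    """
    failures = []
    for col in required:
        rate = null_rate(rows, col)
        if rate > max_null_rate:
            failures.append(f"{col}: null rate {rate:.0%} exceeds {max_null_rate:.0%}")
    return failures

batch = [
    {"patient_id": 1, "glucose": 5.4},
    {"patient_id": 2, "glucose": None},
    {"patient_id": 3, "glucose": 6.1},
]
print(check_quality(batch))  # ['glucose: null rate 33% exceeds 10%']
```

Failing batches would typically be quarantined and surfaced through the pipeline's monitoring, rather than silently dropped.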
Posted 2 weeks ago
8.0 - 13.0 years
25 - 40 Lacs
Ahmedabad
Work from Office
We are looking for a passionate and skilled Fullstack Developer to join our growing team. You'll work on building intuitive, responsive web applications and scalable backend services using modern frameworks and cloud technologies.

Responsibilities:

Front End:
- Design and develop responsive UIs using React.js, HTML, CSS, and JavaScript
- Create wireframes and mockups using tools like Figma or Canva
- Implement dynamic components and visualizations using Highcharts, Material UI, and Tailwind CSS
- Ensure seamless REST API integration

Middleware:
- Develop and maintain middleware logic using FastAPI (or similar frameworks)
- Work with Python for API logic and data processing
- Containerize and manage services using Docker

Back End:
- Build and orchestrate data pipelines using Apache Airflow, Databricks, and PySpark
- Write and optimize SQL queries for data analysis and reporting
- Implement basic authentication using JWT or OAuth standards

Requirements:
- 3+ years of experience in fullstack or frontend/backend development
- Strong hands-on experience with React.js, JavaScript, CSS, and HTML
- Experience with Python, FastAPI, and Docker
- Familiarity with cloud data tools like Google BigQuery
- Exposure to authentication protocols (JWT/OAuth)

Preferred:
- Working knowledge of Node.js
- Ability to collaborate in agile and cross-functional teams
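The JWT-based authentication item above rests on an HMAC signature over a header/payload pair. A stripped-down HS256-style sketch using only the standard library; a real FastAPI service would use a vetted library such as PyJWT and include claims like expiry, which are omitted here:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify(token: str, secret: bytes) -> bool:
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sig, b64url(expected))

token = sign({"sub": "user-42"}, b"secret")
print(verify(token, b"secret"), verify(token, b"wrong"))  # True False
```

Any change to the payload or the use of the wrong secret invalidates the signature, which is the whole guarantee a JWT provides.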
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
The Data Analytics Engineer role at Rightpoint involves being a crucial part of client projects to develop and deliver decisioning intelligence solutions. Working collaboratively with other team members and various business and technical entities on the client side is a key aspect of this role. As a member of a modern data team, your primary responsibility will be to bridge the gap between enterprise data engineers and business-focused data and visualization analysts. This involves transforming raw data into clean, organized, and reusable datasets to facilitate effective analysis and decisioning intelligence data products.

Key Responsibilities:
- Design, develop, and maintain clean, scalable data models to support analytics and business intelligence needs. Define rules and requirements for the data to serve business analysis objectives.
- Collaborate with data analysts and business stakeholders to define data requirements, ensure data consistency across platforms, and promote self-service analytics.
- Build, optimize, and document transformation pipelines into visualization and analysis environments to ensure high data quality and integrity.
- Implement data transformation best practices using modern tools like dbt, SQL, and cloud data warehouses (e.g., Azure Synapse, BigQuery, Azure Databricks).
- Monitor and troubleshoot data quality issues, ensuring accuracy, completeness, and reliability.
- Define and maintain data quality metrics and data formats, and adopt automated methods to cleanse and improve data quality.
- Optimize data performance to ensure query efficiency for large datasets.
- Establish and maintain analytics platform best practices for the team, including version control, data unit testing, CI/CD, and documentation.
- Collaborate with other team members, including data engineers, business and visualization analysts, and data scientists, to align data assets with business analysis objectives.
- Work closely with data engineering teams to integrate new data sources into the data lake and optimize performance.
- Act as a consultant within cross-functional teams to understand business needs and develop appropriate data solutions.
- Demonstrate strong communication skills, both written and verbal, and exhibit professionalism, conciseness, and effectiveness.
- Take initiative, be proactive, anticipate needs, and complete projects comprehensively.
- Exhibit a willingness to continuously learn, problem-solve, and assist others.

Desired Qualifications:
- Strong knowledge of SQL and Python.
- Familiarity with cloud platforms like Azure, Azure Databricks, and Google BigQuery.
- Understanding of schema design and data modeling methodologies.
- Hands-on experience with dbt for data transformation and modeling.
- Experience with version control systems like Git and CI/CD workflows.
- Passion for continuous improvement, learning, and applying new technologies to everyday activities.
- Ability to translate technical concepts for non-technical stakeholders.
- Analytical mindset to address business challenges through data design.
- Bachelor's or master's degree in Computer Science, Data Science, Engineering, or a related field.
- Strong problem-solving skills and attention to detail.

By joining Rightpoint, you will have the opportunity to work with cutting-edge business and data technologies in a collaborative and innovative environment. A competitive salary and benefits package, along with career growth opportunities in a data-driven organization, are some of the perks of working at Rightpoint. If you are passionate about data and enjoy creating efficient, scalable data solutions, we would love to hear from you!
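The data unit testing named in the best-practices item is the kind of assertion that dbt's built-in `unique` and `not_null` column tests encode. A plain-Python sketch of the same two checks, over an invented orders table:

```python
def check_unique_not_null(rows, key):
    """Mimic dbt's unique and not_null column tests for one key column.

    Returns a list of problem descriptions; empty means the data passes.
    """
    values = [r.get(key) for r in rows]
    problems = []
    if any(v is None for v in values):
        problems.append(f"{key} contains nulls")
    non_null = [v for v in values if v is not None]
    if len(non_null) != len(set(non_null)):
        problems.append(f"{key} contains duplicates")
    return problems

orders = [{"order_id": 1}, {"order_id": 2}, {"order_id": 2}, {"order_id": None}]
print(check_unique_not_null(orders, "order_id"))
```

In a dbt project the equivalent lives declaratively in a model's YAML (`tests: [unique, not_null]`) and runs in the warehouse as generated SQL; the sketch just makes the underlying rule explicit.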
Benefits and perks at Rightpoint include 30 paid leaves, public holidays, a casual and open office environment, a flexible work schedule, family medical insurance, life insurance, accidental insurance, regular cultural and social events, and continuous training, certifications, and learning opportunities. Rightpoint is committed to bringing together people from diverse backgrounds and experiences to create phenomenal work, making it an inclusive and welcoming workplace for all.

EEO Statement: Rightpoint is an equal opportunity employer and is committed to providing a workplace that is free from any form of discrimination.
Posted 2 weeks ago
9.0 - 23.0 years
0 Lacs
Haryana
On-site
You will be joining Research Partnership, a leading pharma market research and consulting agency, as a Dashboard Developer - Senior Manager based in Gurgaon, India. As part of the Data Delivery & Dashboards Team, your main responsibility will be to oversee the design, development, and delivery of impactful dashboards and visualizations using tools like Power BI and Tableau. You will also lead a team of developers, ensuring alignment with stakeholder expectations and driving innovation in dashboard solutions.

Your role will involve translating complex project requirements into user-friendly dashboards, collaborating with internal stakeholders to meet business needs, and maintaining high standards of data accuracy, security, and responsiveness. You will stay updated on BI and visualization trends to implement improvements proactively.

To excel in this role, you should have at least 10 years of experience in BI/dashboard development and data engineering, with a background in healthcare or market research being advantageous. Strong leadership and team management skills are essential, along with the ability to engage with senior stakeholders globally. You should be a visionary thinker and an excellent communicator, with a deep understanding of data storytelling and user experience.

Your technical expertise should cover backend development using PHP and frameworks like Laravel; frontend technologies including HTML, CSS, and JavaScript; database proficiency in PostgreSQL and MySQL; cloud deployment skills with AWS and Google Cloud; and experience with CI/CD tools like Jenkins and GitHub Actions. Knowledge of caching mechanisms, security protocols, and agile collaboration tools is also required.

At Research Partnership, you will work in a dynamic and innovative environment that encourages continuous learning and offers opportunities for international travel and collaboration. The company values diversity and inclusivity, so even if you don't meet every job requirement, your enthusiasm and potential may make you the right fit for the role. Apply now to be a part of a team that values creativity, growth, and excellence in delivering market research solutions.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
At Medtronic, you can embark on a rewarding career dedicated to exploration and innovation, all while contributing to the advancement of healthcare access and equity for all. As a Digital Engineer at our new Minimed India Hub, you will play a crucial role in leveraging technology to enhance healthcare solutions on a global scale. Specifically, as a PySpark Data Engineer, you will be tasked with designing, developing, and maintaining data pipelines using PySpark. Your collaboration with data scientists, analysts, and stakeholders will be essential in ensuring the efficient processing and analysis of large datasets, as well as handling complex transformations and aggregations.

This role offers an exciting opportunity to work within Medtronic's Diabetes business. As the Diabetes division prepares for separation to foster future growth and innovation, you will have the chance to operate with increased speed and agility. By working as a separate entity, there will be a focus on driving meaningful innovation and enhancing the impact on patient care.

Your responsibilities will include designing, developing, and maintaining scalable and efficient ETL pipelines using PySpark; working with structured and unstructured data from various sources; optimizing PySpark applications for performance and scalability; collaborating with data scientists and analysts to understand data requirements; implementing data quality checks; monitoring and troubleshooting data pipeline issues; documenting technical specifications; and staying updated on the latest trends and technologies in big data and distributed computing.

To excel in this role, you should possess a Bachelor's degree in computer science, engineering, or a related field, along with 4-5 years of experience in data engineering focusing on PySpark. Proficiency in Python and Spark, strong coding and debugging skills, knowledge of SQL and relational databases, hands-on experience with cloud platforms, familiarity with data warehousing solutions, experience with big data technologies, problem-solving abilities, and effective communication and collaboration skills are essential. Preferred skills include experience with Databricks, orchestration tools like Apache Airflow, knowledge of machine learning workflows, understanding of data security and governance best practices, familiarity with streaming data platforms, and knowledge of CI/CD pipelines and version control systems.

Medtronic offers a competitive salary and flexible benefits package, along with a commitment to recognizing and supporting employees at every stage of their career and life. As part of the Medtronic team, you will contribute to the mission of alleviating pain, restoring health, and extending life by tackling the most challenging health problems facing humanity. Join us in engineering solutions that make a real difference in people's lives.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
Location(s): Pune

About Springer Nature Group
Springer Nature opens the doors to discovery for researchers, educators, clinicians and other professionals. Every day, around the globe, our imprints, books, journals, platforms and technology solutions reach millions of people. For over 180 years our brands and imprints have been a trusted source of knowledge to these communities and today, more than ever, we see it as our responsibility to ensure that fundamental knowledge can be found, verified, understood and used by our communities, enabling them to improve outcomes, make progress, and benefit the generations that follow. Visit group.springernature.com and follow @SpringerNature / @SpringerNatureGroup

About the Brand
Springer Nature's Group Publishing Operations team (GPO) performs and oversees manuscript screening operations across Springer Nature's journals portfolio. GPO also manages the various peer-review systems used by authors, editors and reviewers as part of submit-to-accept processes. Our "article-level" activities are largely supported by 300+ BPO staff across multiple locations. Our "journal-level" activities are largely supported by ~40 Springer Nature staff across multiple locations, who also contribute to projects in one of three main areas: technology, transformation and quality.

About the Role
In collaboration with the Manager, Automation & Analysis (as well as the rest of the Automation & Analysis team and the wider GPO team), the Data Analyst, Automation & Analysis will contribute to data and reporting activities which support BAU processes as well as strategic initiatives. This includes building dashboards and delivering advanced analytics and intelligence, as well as ensuring consistent data quality and integrity. The Data Analyst, Automation & Analysis will also manage their own portfolio of journals and partner with Publishing teams to run their journals in a production-ready way, while facilitating submit-to-accept workflows and compliance with policies.

Role Responsibilities

Data Analysis
- Design, build, and maintain dashboards and reporting tools, primarily using Looker and Google BigQuery.
- Analyse large datasets to identify trends, patterns, and actionable insights that improve operational efficiency and/or support strategic decision-making.
- Contribute to the development and documentation of data standards, definitions, and best practices.
- Recommend and implement KPIs, data definitions, and reporting frameworks to measure success.

Journal Care
- Gather system requirements from publishers, editors and service providers.
- Set up, maintain and administer peer review systems.
- Conduct presentations and training sessions for internal and external stakeholders.
- Ensure quality of article-level activities and act as an escalation point for individual journals.
- Develop and maintain relationships with stakeholders (Publishing, Editorial, Production, Technology & Operations staff, as well as external editors, reviewers, authors and service providers).
- Maintain journal-specific records and documentation.

Other Responsibilities
- Be an advocate for continuous improvement, quality and efficiency.
- Foster a culture of openness, transparency and collaboration within the team and with our stakeholders.

Experience, Skills and Qualifications
- Working with large datasets to extract insights and support operational or strategic decisions.
- Working with cross-functional teams and communicating technical findings to non-technical stakeholders.
- Working with Looker, Google BigQuery, and other BI tools.
- Previous experience in a publishing and/or support role.
- Contributing to (sometimes interconnected) business projects.
- A technically minded approach to solutions.
- Excellent numeracy skills with proven attention to detail.
- Strong analytical skills with the ability to identify trends, patterns, and actionable insights from complex data.
- SQL and programming languages such as Python.
- Excellent organisation skills to manage multiple concurrent projects and competing priorities.
- Excellent written and verbal communication skills; able to communicate with individuals at all levels and on a global scale.
- Good problem-solving, logic and analytical skills, with the ability to capture and document requirements and processes.
- Proactive and able to lead projects alone as well as part of a large group.
- Degree or equivalent work experience.
- Additional training or certification in data analysis, business intelligence, or data visualization is desirable.

Eligibility
In accordance with our internal career movement guidance, 12 months in the current role is a requirement before applying to a new role.

At Springer Nature, we value the diversity of our teams and work to build an inclusive culture, where people are treated fairly and can bring their differences to work and thrive. We empower our colleagues and value their diverse perspectives as we strive to attract, nurture and develop the very best talent. Springer Nature was awarded Diversity Team of the Year at the 2022 British Diversity Awards. Find out more about our DEI work here: https://group.springernature.com/gp/group/taking-responsibility/diversity-equity-inclusion

If you have any access needs related to disability, neurodivergence or a chronic condition, please contact us so we can make all necessary accommodations. For more information about career opportunities at Springer Nature please visit https://springernature.wd3.myworkdayjobs.com/SpringerNatureCareers

Job Posting End Date: 25-07-2025
Posted 2 weeks ago
0.0 - 3.0 years
0 Lacs
Karnataka
On-site
You should have 6 months to 3 years of IT experience, with knowledge of BigQuery, SQL, or similar tools. Awareness of ETL and data warehouse concepts is essential, along with good oral and written communication skills. Being a great team player who can work efficiently with minimal supervision is crucial, and you should have good knowledge of Java or Python for data cleansing. Preferred qualifications include good communication and problem-solving skills; experience with Spring Boot would be an added advantage. Experience as an Apache Beam developer with Google Cloud Bigtable and Google BigQuery is desirable, as is experience with Google Cloud Platform (GCP). Skills in writing batch and stream processing jobs using the Apache Beam framework (Dataflow) are a plus, and knowledge of microservices, Pub/Sub, Cloud Run, and Cloud Functions would be beneficial.
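Apache Beam jobs of the kind mentioned above are built by chaining transforms with the `|` operator. A toy stand-in showing that shape in plain Python, with no Beam dependency; this is not the real Beam API, just an illustration of the pipe-composition style:

```python
class Transform:
    """Minimal pipeable step: data | Transform(fn) applies fn to data."""
    def __init__(self, fn):
        self.fn = fn

    def __ror__(self, data):
        # Called when a plain value appears on the left of `|`.
        return self.fn(data)

# Toy transforms echoing Beam's Map/Filter/Combine style.
ParseInts = Transform(lambda xs: [int(x) for x in xs])
DropNeg = Transform(lambda xs: [x for x in xs if x >= 0])
SumAll = Transform(sum)

result = ["3", "-1", "4"] | ParseInts | DropNeg | SumAll
print(result)  # 7
```

In real Beam the same shape reads `p | beam.Map(int) | beam.Filter(...) | beam.CombineGlobally(sum)`, and the runner (e.g. Dataflow) distributes each step over batch or streaming data.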
Posted 2 weeks ago
10.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
about randstad enterprise As the leading global talent solutions provider, Randstad Enterprise enables companies to create sustainable business value and agility by keeping people at the heart of their organizations. Part of Randstad N.V., we combine unmatched talent data and market insights with smart technologies and deep people expertise. Our integrated talent solutions - delivered by Randstad Advisory, Randstad Sourceright and Randstad RiseSmart - help companies build skilled and agile workforces that move their businesses forward. Randstad Enterprise supports some of the world's most renowned brands to build their talent acquisition and management models that not only meet their business needs today but also in the future. We offer solutions in Europe, Middle East and Africa (EMEA) region, Asia Pacific (APAC) region as well as in North America (NAM) region. This results in a digital way of working and requires a proactive mind-set. Our solutions know no limits, we have proven experience delivering market-leading MSP, RPO, Total Talent, and Services Procurement Solutions including technology, talent marketing, talent intelligence, and workforce consulting services. We create the best talent experience, from attraction to onboarding and onto ongoing career development, we understand the human and digital touchpoints that compel talent to join and stay with a company. We know where the talent of tomorrow is, how they behave, what they are looking for, and how to build their loyalty toward a specific company employer brand. We push the boundaries of our industry to be able to see around the corner for our clients, continually investing in innovation to stay ahead in our market. ... About the Job The Director Data Engineering will lead the development and implementation of a comprehensive data strategy that aligns with the organization's business goals and enables data driven decision making. 
Roles and Responsibilities Build and manage a team of talented data managers and engineers with the ability to not only keep up with, but also pioneer, in this space Collaborate with and influence leadership to directly impact company strategy and direction Develop new techniques and data pipelines that will enable various insights for internal and external customers Develop deep partnerships with client implementation teams, engineering and product teams to deliver on major cross-functional measurements and testing Communicate effectively to all levels of the organization, including executives Provide success in partnering teams with dramatically varying backgrounds, from the highly technical to the highly creative Design a data engineering roadmap and execute the vision behind it Hire, lead, and mentor a world-class data team Partner with other business areas to co-author and co-drive strategies on our shared roadmap Oversee the movement of large amounts of data into our data lake Establish a customer-centric approach and synthesize customer needs Own end-to-end pipelines and destinations for the transfer and storage of all data Manage 3rd-party resources and critical data integration vendors Promote a culture that drives autonomy, responsibility, perfection and mastery. Maintain and optimize software and cloud expenses to meet financial goals of the company Provide technical leadership to the team in design and architecture of data products and drive change across process, practices, and technology within the organization Work with engineering managers and functional leads to set directio n and ambitious goals for the Engineering department Ensure data quality, security, and accessibility across the organization Skills You Will Need 10+ years of experience in data engineering 5+ years of experience leading data teams of 30+ resources or more, including selection of talent planning / allocating resources across multiple geographies and functions. 
5+ years of experience with GCP tools and technologies, specifically Google BigQuery, Google Cloud Composer, Dataflow, Dataform, etc. Experience creating large-scale data engineering pipelines, data-based decision-making and quantitative analysis tools and software Hands-on experience with version control systems (Git) Experience with CI/CD, data architectures, pipelines, quality, and code management Experience with complex, high-volume, multi-dimensional data, based on unstructured, structured, and streaming datasets Experience with SQL and NoSQL databases Experience creating, testing, and supporting production software and systems Proven track record of identifying and resolving performance bottlenecks for production systems Experience designing and developing data lake, data warehouse, ETL and task orchestration systems Strong leadership, communication, time management and interpersonal skills Proven architectural skills in data engineering Experience leading teams developing production-grade data pipelines on large datasets Experience designing large data lakes and lakehouses, managing data flows that integrate information from various sources into a common pool, and implementing data pipelines based on the ETL model Experience with common data languages (e.g. Python, Scala) and data warehouses (e.g. Redshift, BigQuery, Snowflake, Databricks) Extensive experience with cloud tools and technologies - GCP preferred Experience managing real-time data pipelines Successful track record and demonstrated thought leadership, cross-functional influence, and partnership within an agile/waterfall development environment. Experience in regulated industries or with compliance frameworks (e.g., SOC 2, ISO 27001). Nice to have: HR services industry experience Experience in data science, including predictive modeling Experience leading teams across multiple geographies
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
Genpact is a global professional services and solutions firm with a workforce of 125,000+ employees spanning 30+ countries. Driven by innate curiosity, entrepreneurial agility, and the desire to create lasting value for clients, we serve and transform leading enterprises, including Fortune Global 500 companies. Our purpose, the relentless pursuit of a world that works better for people, powers our operations. We specialize in deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. We are currently seeking applications for the role of Lead Consultant - ETL Manual Tester. We are looking for an experienced Test Manager to oversee and manage testing activities for our Data/ETL (Extract, Transform, Load) program. The ideal candidate will ensure the quality and reliability of our data processing systems. This role involves developing testing strategies, managing a team of test engineers, and collaborating with other departments to ensure the successful delivery of the program. Responsibilities include: Test Strategy & Planning: - Develop and implement a comprehensive testing strategy for the data transformation/ETL program. - Plan, design, and manage the execution of test cases, scripts, and procedures for data validation and ETL processes. - Oversee the preparation of test data and environments to meet complex data workflow requirements. Team Management & Leadership: - Lead and mentor a team of test engineers, setting clear goals and expectations and providing regular feedback. - Foster a culture of quality and continuous improvement within the team. - Coordinate with project managers, data engineers, and business analysts for effective communication and issue resolution. Testing & Quality Assurance: - Identify defects and issues in data processing and ETL workflows. - Implement and maintain quality assurance policies and procedures for data integrity and reliability.
- Monitor and report on testing activities, including test results, defect tracking, and quality metrics. Stakeholder Engagement: - Act as the primary contact for all testing-related activities within the Data/ETL program. - Communicate testing progress, risks, and outcomes to program stakeholders. - Collaborate with business users to align testing strategies with business objectives. Technology & Tools: - Stay updated on the latest testing methodologies, tools, and technologies related to data and ETL processes. - Recommend and implement tools and technologies to enhance testing efficiency. - Ensure the testing team is trained and proficient in using testing tools and technologies. Qualifications we seek in you: - Bachelor's degree in computer science, Information Technology, or related field. - Experience in a testing role focusing on data and ETL processes. - Proven experience managing a testing team for large-scale data projects. - Strong understanding of data modeling, ETL processes, and data warehousing principles. - Proficiency in SQL and database technologies. - Experience with test automation tools and frameworks. - Excellent analytical, problem-solving, and communication skills. - Ability to work collaboratively in a team environment and manage multiple priorities. Preferred Skills: - Experience with cloud-based data warehousing solutions (AWS Redshift, Google BigQuery, Azure Synapse Analytics). - Knowledge of Agile methodologies and working in an Agile environment. If you meet the qualifications mentioned above and are passionate about testing and quality assurance, we invite you to apply for the Lead Consultant - ETL Manual Tester position based in Hyderabad, India.
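The source-to-target data-validation test cases described above often begin with simple reconciliation checks. A minimal sketch, using an in-memory SQLite database in place of a real warehouse; the table and column names are hypothetical:

```python
# Illustrative ETL reconciliation test: compare row counts and find keys that
# exist in the source but are missing from the target after a load.
import sqlite3

def reconcile(conn, source_table, target_table, key_col):
    """Return (row_counts_match, list_of_keys_missing_from_target)."""
    cur = conn.cursor()
    src_count = cur.execute(f"SELECT COUNT(*) FROM {source_table}").fetchone()[0]
    tgt_count = cur.execute(f"SELECT COUNT(*) FROM {target_table}").fetchone()[0]
    missing = cur.execute(
        f"SELECT {key_col} FROM {source_table} "
        f"EXCEPT SELECT {key_col} FROM {target_table}"
    ).fetchall()
    return src_count == tgt_count, [row[0] for row in missing]

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src(id INTEGER, amount REAL);
    CREATE TABLE tgt(id INTEGER, amount REAL);
    INSERT INTO src VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO tgt VALUES (1, 10.0), (2, 20.0);  -- row 3 failed to load
""")
counts_match, missing_ids = reconcile(conn, "src", "tgt", "id")
print(counts_match, missing_ids)  # False [3]
```

In a real program these checks would run against the production warehouse and feed the defect-tracking metrics mentioned above.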
Posted 3 weeks ago
10.0 - 14.0 years
0 Lacs
hyderabad, telangana
On-site
You should have 9.5 to 13 years of experience in the field. Your primary skills should include Python and Google BigQuery. The job location can be Bangalore, Hyderabad, Kolkata, Pune, or Chennai. As part of your responsibilities, you will be required to design and develop robust database solutions to support business applications. It is mandatory that you have experience in Postgres development. You will also need to optimize SQL queries for high performance and efficiency. Collaborating with cross-functional teams to gather and analyze requirements will be essential. Additionally, implementing and maintaining database security measures and conducting regular database performance tuning are crucial aspects of the role. You will be responsible for developing and maintaining documentation for database systems and processes, providing technical support to junior developers, ensuring data integrity and consistency across all database systems, and performing data migration and transformation tasks as needed. Monitoring database systems for optimal performance and availability, developing and implementing backup and recovery strategies, and staying updated with industry trends and best practices in database management will also be part of your duties. To qualify for this position, you should possess a strong understanding of database architecture and design principles, demonstrate proficiency in SQL and ANSI SQL, have experience with database performance tuning and optimization, show the ability to troubleshoot and resolve complex database issues, exhibit excellent problem-solving and analytical skills, and display strong communication and collaboration abilities. Additionally, participating in code reviews and contributing to continuous improvement initiatives will be expected.
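The SQL query optimization mentioned above typically starts by reading the query plan before and after adding an index. An illustrative sketch using SQLite's EXPLAIN QUERY PLAN as a lightweight stand-in for PostgreSQL's EXPLAIN; the table and index names are hypothetical:

```python
# Show that an index turns a full table scan into an index search.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders(id INTEGER, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    """Concatenate the plan detail rows for a query."""
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)   # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)    # index search on customer_id
print(before)
print(after)
```

In PostgreSQL the same workflow uses `EXPLAIN (ANALYZE)` and the richer plan-node output, but the before/after comparison is the core habit.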
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
About Us 6thstreet.com is one of the largest omnichannel fashion & lifestyle destinations in the GCC, home to 1200+ international brands. The fashion-savvy destination offers collections from over 150 international fashion brands such as Dune London, ALDO, Naturalizer, Nine West, Charles & Keith, New Balance, Crocs, Birkenstock, Skechers, Levi's, Aeropostale, Garage, Nike, Adidas Originals, Rituals, and many more. The online fashion platform also provides free delivery, free returns, cash on delivery, and the option for click and collect. Job Description We are looking for a seasoned Data Engineer to design and manage data solutions. Expertise in SQL, Python, and AWS is essential. The role includes client communication, recommending modern data tools, and ensuring smooth data integration and visualization. Strong problem-solving and collaboration skills are crucial. Responsibilities Understand and analyze client business requirements to support data solutions. Recommend suitable modern data stack tools based on client needs. Develop and maintain data pipelines, ETL processes, and data warehousing. Create and optimize data models for client reporting and analytics. Ensure seamless data integration and visualization with cross-functional teams. Communicate with clients for project updates and issue resolution. Stay updated on industry best practices and emerging technologies. Skills Required 3-5 years in data engineering/analytics with a proven track record. Proficient in SQL and Python for data manipulation and analysis. Knowledge of Pyspark is a plus. Experience with data warehouse platforms like Redshift and Google BigQuery. Experience with AWS services like S3, Glue, Athena. Proficient in Airflow. Familiarity with event tracking platforms like GA or Amplitude is a plus. Strong problem-solving skills and adaptability. Excellent communication skills and proactive client engagement. 
Ability to get things done, unblock yourself, and effectively collaborate with team members and clients. Benefits Full-time role. Competitive salary + bonus. Company employee discounts across all brands. Medical & health insurance. Collaborative work environment. Good vibes work culture.
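The pipeline and ETL responsibilities above can be sketched as three small functions. In practice each step would be an Airflow task reading from S3 and loading a warehouse such as Redshift or BigQuery; those systems are mocked here with plain Python structures, and all names are hypothetical:

```python
# Toy end-to-end extract-transform-load flow for an e-commerce reporting mart.
def extract():
    # stand-in for reading raw order events from object storage
    return [{"sku": "A1", "qty": 2, "price": 9.5},
            {"sku": "B2", "qty": 1, "price": 30.0},
            {"sku": "A1", "qty": 3, "price": 9.5}]

def transform(rows):
    # aggregate revenue per SKU, the shape a reporting data model needs
    revenue = {}
    for r in rows:
        revenue[r["sku"]] = revenue.get(r["sku"], 0.0) + r["qty"] * r["price"]
    return revenue

def load(table, revenue):
    # stand-in for writing to a warehouse table
    table.update(revenue)

warehouse = {}
load(warehouse, transform(extract()))
print(warehouse)  # {'A1': 47.5, 'B2': 30.0}
```

An orchestrator like Airflow simply wires these steps into a dependency graph and schedules them.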
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
The ideal candidate for this position should possess excellent communication skills along with proficiency in Agile methodologies using Scrum and Jira. Strong experience in data asset design, including data modeling and SQL, is required. Familiarity with Google BigQuery and integrations such as file, NAS drive, and APIs is essential. As a key member of the team, you will be responsible for leading the agile development pod, collaborating with functional analysts and the product owner to design and implement a data warehouse on Google Cloud Platform for procurement and corporate real estate HSBC restricted data. Additionally, you will lead and support a team of developers based in Hyderabad. Your role will involve designing data models, developing transformations in Google BigQuery, and implementing solutions on HSBC's Google Cloud Platform. Your contributions will play a crucial role in the successful delivery of projects and meeting the organization's objectives. About the Company: Purview is a prominent Digital Cloud & Data Engineering company headquartered in Edinburgh, United Kingdom, with a global presence in 14 countries, including India (Hyderabad, Bangalore, Chennai, and Pune). We have a strong foothold in the UK, Europe, and APAC regions, providing services to Captive Clients and top-tier IT organizations. Our commitment to delivering innovative solutions and services has established us as a trusted partner in the industry. Company Information: Address (India): 3rd Floor, Sonthalia Mind Space Near Westin Hotel, Gafoor Nagar Hitechcity, Hyderabad Phone: +91 40 48549120 / +91 8790177967 Address (UK): Gyleview House, 3 Redheughs Rigg, South Gyle, Edinburgh, EH12 9DQ. Phone: +44 7590230910 Email: careers@purviewservices.com If you meet the qualifications and are excited about the opportunity to contribute to our dynamic team, we encourage you to apply by logging in to our application portal.
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
As a Data Engineer at Blis, you will be part of a globally recognized and award-winning team that specializes in big data analytics and advertising. We collaborate with iconic brands like McDonald's, Samsung, and Mercedes Benz, providing precise audience insights to help them target their ideal customers effectively. Upholding ethical data practices and privacy rights is at the core of our operations, and we are committed to ensuring outstanding performance and reliability in all our systems. Working at Blis means being part of an international company with a diverse culture, spanning across four continents and comprising over 300 team members. Headquartered in the UK, we are financially successful and poised for continued growth, offering you an exciting opportunity to contribute to our journey. Your primary responsibility as a Data Engineer will involve designing and implementing high-performance data pipelines on Google Cloud Platform (GCP) to handle massive amounts of data efficiently. With a focus on scalability and automation, you will play a crucial role in building secure pipelines that can process over 350GB of data per hour and respond to 400,000 decision requests each second. Your expertise will be instrumental in driving improvements in data architecture, optimizing resource utilization, and delivering fast, accurate insights to stakeholders. Collaboration is key at Blis, and you will work closely with product and engineering teams to ensure that our data infrastructure evolves to support new initiatives seamlessly. Additionally, you will mentor and support team members, fostering a collaborative environment that encourages knowledge sharing, innovation, and professional growth. To excel in this role, you should have at least 5 years of hands-on experience with large-scale data systems, with a strong focus on designing and maintaining efficient data pipelines. 
Proficiency in Apache Druid and Imply platforms, along with expertise in cloud-based services like GCP, is essential. You should also have a solid understanding of Python for building and optimizing data flows, as well as experience with data governance and quality assurance practices. Furthermore, familiarity with event-driven architectures, tools like Apache Airflow, and distributed processing frameworks such as Spark will be beneficial. Your ability to apply complex algorithms and statistical techniques to large datasets, along with experience in working with relational databases and non-interactive reporting solutions, will be a valuable asset in this role. Joining the Blis team means engaging in high-impact work in a data-intensive environment, collaborating with brilliant engineers, and being part of an innovative culture that prioritizes client obsession and agility. With a global reach and a commitment to diversity and inclusion, Blis offers a dynamic work environment where your contributions can make a tangible difference in the world of advertising technology.
Posted 3 weeks ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description: A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to ensure effective design, development, validation, and support activities, so that our clients are satisfied with the high levels of service in the technology domain. You will gather requirements and specifications to understand client needs in detail and translate them into system requirements. You will play a key role in the overall estimation of work requirements, providing the right information on project estimations to Technology Leads and Project Managers. You would be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you. Key Responsibilities: Knowledge of design principles and fundamentals of architecture; understanding of performance engineering; knowledge of quality processes and estimation techniques; basic understanding of the project domain; ability to translate functional and non-functional requirements into system requirements; ability to design and code complex programs; ability to write test cases and scenarios based on the specifications; good understanding of SDLC and agile methodologies; awareness of the latest technologies and trends; logical thinking and problem-solving skills along with an ability to collaborate. Technical Requirements: Technology: Cloud Platform - GCP; Database - Google BigQuery. Preferred Skills: Technology->Cloud Platform->GCP Database, Technology->Cloud Security->GCP - Infrastructure Security
Posted 3 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Bengaluru
Hybrid
Job Description: We are seeking an experienced Data Engineer for a contract/consulting engagement to design, build, and maintain scalable data infrastructure using Google Cloud Platform technologies and advanced analytics visualization. This role requires 5-8 years of hands-on experience in modern data engineering practices with a strong focus on cloud-native solutions and business intelligence. Key Responsibilities: Data Infrastructure & Engineering (70%) Experience in designing tables and working with complex queries in Google BigQuery Build and maintain data transformation workflows using Dataflow and Dataform Design and implement robust data pipelines using Apache Airflow for workflow orchestration and scheduling Architect scalable ETL/ELT processes handling large-scale data ingestion from multiple sources Optimize BigQuery performance through partitioning, clustering, and cost management strategies Collaborate with DevOps teams to implement CI/CD pipelines for data infrastructure Solid technical background with a complete understanding of data warehouse modeling, architectures, and OLAP/OLTP data sets Experience with Java or Python is a plus Analytics & Visualization (30%) Create compelling data visualizations and interactive dashboards using Tableau Experience in designing Tableau models with Live and Extract data sources.
Good to have experience in Tableau Prep Partner with business stakeholders to translate requirements into analytical solutions Design and implement self-service analytics capabilities for end users Optimize Tableau workbooks for performance and user experience Integrate Tableau with BigQuery for real-time analytics and reporting Technical Skills Core Data Engineering (Must Have) 5-8 years of progressive experience in data engineering roles Expert-level proficiency in SQL with complex query optimization experience Hands-on experience with Google BigQuery for data warehousing and analytics Proven experience with Apache Airflow for workflow orchestration and pipeline management Working knowledge of Dataflow and Dataform for data transformation and modeling Experience with GCP services: Cloud Storage, Pub/Sub, Cloud Functions, Cloud Composer Visualization & Analytics Strong proficiency in Tableau for data modeling, data visualization and dashboard development Experience integrating Tableau with cloud data platforms Understanding of data visualization best practices and UX principles Knowledge of Tableau Server/Cloud administration and governance Additional Technical Requirements Experience with version control systems (Git) and collaborative development practices Knowledge of data modeling techniques (dimensional modeling, data vault) Understanding of data governance, security, and compliance frameworks Experience with infrastructure as code (Terraform preferred) Familiarity with scripting languages (Python/Java) for data processing Preferred Qualifications Google Cloud Professional Data Engineer certification Tableau Desktop Certified Professional or equivalent certification Experience with real-time data processing and streaming analytics Knowledge of machine learning workflows and MLOps practices Previous experience in agile development environments
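The BigQuery partitioning called out above is, at heart, about limiting how much data a filtered query scans. A toy, pure-Python sketch of that pruning idea; all names are hypothetical, and a real table would declare the behavior with `PARTITION BY DATE(event_date)` DDL rather than code like this:

```python
# Conceptual model of partition pruning: a query filtered on the partition
# column reads only one shard of the table, not every row.
from collections import defaultdict

# events stored per ingestion-date "partition" (hypothetical schema)
partitions = defaultdict(list)
for day, user in [("2024-01-01", "a"), ("2024-01-01", "b"),
                  ("2024-01-02", "c"), ("2024-01-03", "d")]:
    partitions[day].append({"event_date": day, "user": user})

def query(partitions, event_date):
    """Scan only the partition named in the filter."""
    scanned = partitions[event_date]  # one partition read, others untouched
    return [row["user"] for row in scanned], len(scanned)

users, rows_scanned = query(partitions, "2024-01-01")
print(users, rows_scanned)  # ['a', 'b'] 2
```

Clustering works similarly within a partition, ordering rows so BigQuery can skip blocks; both directly reduce the bytes billed per query.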
Posted 3 weeks ago
8.0 - 10.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Job Summary: We are seeking a highly motivated and experienced SAP Analytics Cloud Consultant to join our growing team. The ideal candidate will possess strong expertise in SAP Analytics Cloud (SAC), SAP DataSphere, and Google BigQuery, with a proven track record of designing, developing, and implementing comprehensive analytics solutions. As a key member of our team, you will be responsible for understanding business requirements, translating them into technical specifications, and delivering high-quality, scalable, and insightful analytics dashboards and reports. Responsibilities: Collaborate with business stakeholders to gather and document business requirements for analytics solutions. Design and develop interactive dashboards, reports, and visualizations using SAP Analytics Cloud (SAC). Model and transform data within SAP DataSphere to create reusable data models for SAC. Integrate data from various sources, including SAP systems and Google BigQuery, into SAC and DataSphere. Optimize SAC performance for large datasets and complex calculations. Develop and maintain technical documentation, including design specifications, data flow diagrams, and user guides. Provide training and support to end-users on SAC functionality and best practices. Stay up-to-date with the latest SAP Analytics Cloud features and functionalities. Troubleshoot and resolve issues related to SAC, DataSphere, and BigQuery integrations. Participate in project planning, estimation, and status reporting. Adhere to established development standards and best practices. Qualifications: Minimum of 8 years of experience in SAP Analytics Cloud (SAC) development and implementation. Solid understanding of SAP DataSphere concepts and data modeling techniques. Experience with Google BigQuery, including data loading, querying, and performance optimization. Proficiency in data visualization principles and best practices. Strong analytical and problem-solving skills. 
Excellent communication and interpersonal skills. Ability to work independently and as part of a team. Preferred Qualifications: SAP Analytics Cloud certification. Experience with SAP BW/4HANA. Knowledge of other data visualization tools such as Tableau or Power BI. Experience with agile development methodologies. Technical Skills: SAP Analytics Cloud (SAC) SAP DataSphere Google BigQuery SQL Data Modeling Data Visualization
Posted 1 month ago
10.0 - 14.0 years
30 - 45 Lacs
Hyderabad
Work from Office
Bachelor's degree in computer science, Information Systems, or a related field. Minimum of 10+ years of experience in data architecture, with a minimum of 1-3 years of experience in the healthcare domain. Strong hands-on experience with cloud databases such as Snowflake, Aurora, Google BigQuery, etc. Experience in designing OLAP and OLTP systems for efficient data analysis and processing. Strong hands-on experience with enterprise BI/reporting tools (Looker, AWS QuickSight, Power BI, Tableau, and Cognos). A strong understanding of HIPAA regulations and healthcare data privacy laws is a must-have for this role, as the healthcare domain requires strict adherence to data privacy and security regulations. Experience with data privacy and tokenization tools like Immuta, Privacera, Privitar, OpenText, and Protegrity. Experience with multiple full life-cycle data warehouse/transformation implementations in the public cloud (AWS, Azure, and GCP), with deep technical knowledge in one. Proven experience working as an Enterprise Data Architect or in a similar role, preferably in large-scale organizations. Proficient in data modelling, including Star Schema (de-normalized data model) and Transactional Model (normalized data model), using tools like Erwin. Experience with ETL/ELT architecture and integration (Matillion, AWS Glue, Google PLEX, Azure Data Factory, etc.). Deep understanding of data architectures that utilize Data Fabric, Data Mesh, and Data Products. Business and financial acumen to advise on product planning, conduct research and analysis, and identify the business value of new and emerging technologies. Strong SQL and database skills working with large structured and unstructured data. Experienced in implementing data virtualization and semantic-model-driven architecture.
Experience with the system development lifecycle (SDLC), Agile development, DevSecOps, and standard software development tools such as Git and Jira. Excellent written and oral communication skills to convey key choices, recommendations, and technology concepts to technical and non-technical audiences. Familiarity with AI/MLOps concepts and Generative AI technology.
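The Star Schema modeling listed above can be illustrated with a minimal fact/dimension join. This sketch uses SQLite and hypothetical healthcare-flavored table names, not any actual client schema:

```python
# Minimal star schema: one fact table keyed to one dimension, then the
# typical aggregate-by-dimension-attribute query a BI tool would issue.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_patient(patient_key INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE fact_claims(patient_key INTEGER, amount REAL);
    INSERT INTO dim_patient VALUES (1, 'east'), (2, 'west');
    INSERT INTO fact_claims VALUES (1, 100.0), (1, 50.0), (2, 75.0);
""")
rows = conn.execute("""
    SELECT d.region, SUM(f.amount)
    FROM fact_claims f JOIN dim_patient d USING(patient_key)
    GROUP BY d.region ORDER BY d.region
""").fetchall()
print(rows)  # [('east', 150.0), ('west', 75.0)]
```

The de-normalized dimension keeps descriptive attributes one join away from the facts, which is what makes this shape fast for reporting tools like Looker or Tableau.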
Posted 1 month ago
5.0 - 8.0 years
8 - 16 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Work from Office
Must have skills: Apache Spark. Good to have skills: NA. Educational Qualification: minimum 15 years of full-time education. Share CV on: neha.mandal@mounttalent.com Project Role: Application Developer. Project Role Description: Design, build, and configure applications to meet business process and application requirements. Job Requirements: Key Responsibilities: As a Software Development Engineer, you will be responsible for analyzing, designing, coding, and testing multiple components of application code using Apache Spark across one or more clients. Your typical day will involve performing maintenance, enhancements, and/or development work using Google BigQuery, Python, and PySpark. Technical Experience: Design, develop, and maintain Apache Spark applications using Google BigQuery, Python, and PySpark; analyze, design, code, and test multiple components of application code across one or more clients; perform maintenance, enhancements, and/or development work using Apache Spark; collaborate with cross-functional teams to identify and resolve technical issues and ensure timely delivery of high-quality software solutions. Professional Attributes: Proficiency in Apache Spark; experience with Google BigQuery, Python, and PySpark; strong understanding of software engineering principles and best practices; experience with software development methodologies such as Agile and Scrum.
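Since PySpark may not be available in every environment, here is the flatMap/map/reduceByKey aggregation pattern this role works with, expressed with Python builtins as a runnable stand-in (the input lines are made up for illustration):

```python
# Word count, the canonical Spark aggregation, mirroring
# rdd.flatMap(split).map(word -> 1).reduceByKey(add) semantics.
from collections import Counter
from itertools import chain

lines = ["spark makes big data simple", "big data big insights"]
words = chain.from_iterable(line.split() for line in lines)  # flatMap
counts = Counter(words)                                      # map + reduceByKey
print(counts["big"], counts["data"])  # 3 2
```

In PySpark the same logic distributes across executors; the mental model of mapping then reducing by key carries over unchanged.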
Posted 1 month ago
4.0 - 9.0 years
9 - 19 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Hybrid
Project Role: AI/ML Engineer. Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Must be able to apply GenAI models as part of the solution; this could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing. Must have skills: Google Cloud Machine Learning Services. Summary: As an AI/ML Engineer, you will be responsible for developing applications and systems that utilize AI tools and Cloud AI services. Your typical day will involve applying GenAI models, developing cloud or on-prem application pipelines, and ensuring production-ready quality. You will also work with deep learning, neural networks, chatbots, and image processing. Key Responsibilities: A: Demonstrate various Google-specific designs using effective POCs, as per client requirements. B: Identify the applicability of Google Cloud AI services to use cases, with the ability to project both the business and tech benefits. C: Design, build, and productionize ML models to solve business challenges using Google Cloud technologies. D: Lead and guide a team of data scientists. E: Coordinate and collaborate with cross-functional teams. Technical Experience: A: Minimum 3+ years of experience with GCP/AWS ML. B: Exposure to Google GenAI services. C: Exposure to GCP and its compute, storage, data, network, and security services. D: Expert programming skills in any one of Java, Python, or Spark. E: Knowledge of GKE and Kubeflow on GCP would be good to have. F: Experience with Vertex AI for building and managing ML models. G: Experience implementing MLOps. Location: Pan India
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
Job Title Security Delivery Senior Analyst Management Level: 10 - Senior Analyst Location: Bengaluru Must have skills: Node.js, PostgreSQL, AWS, Azure DevOps, Agile, CI/CD, Strong Communication, Estimation (for level 8/9) Good to have skills: Application Security, AWS Fargate, Google BigQuery Job Summary: The ISD backend developer will be responsible for writing code for upcoming changes and operational tasks. The application is built on an AWS cloud-native architecture and is written in AngularJS and Node.js, both in TypeScript. The developer must be skilled in Node.js, PostgreSQL, and AWS, and be familiar with agile concepts and automated CI/CD, including unit testing. Roles & Responsibilities: The backend developer will be responsible for supporting a custom-built dashboard in AngularJS and Node.js. Level 9/8 developers must prioritize and estimate work, and assist other developers. The developer will also take part in future project planning and estimation. Professional & Technical Skills: Technical Experience: The backend developer must be skilled in Node.js and PostgreSQL and have working knowledge of TypeScript. The developer must also be experienced in Continuous Integration/Continuous Deployment (CI/CD) to automate builds and deployments. Professional Experience: The backend developer must be self-motivated with excellent communication skills. The developer should be able to work with the lead(s) to solve complex development challenges, perform peer/quality reviews, and maintain the team's code repository and deployment activities. Additional Information: Qualification Experience: Minimum 4+ years of experience is required Educational Qualification: Any Degree
Posted 1 month ago
6.0 - 10.0 years
10 - 15 Lacs
Mumbai
Work from Office
Data Scientist (Cloud Management, SQL, Building cloud data pipelines, Python, Power BI, GCP)

Job Summary
The UPS Marketing team is looking for a talented and driven Data Scientist to drive its strategic objectives in the areas of pricing, revenue management, market analysis, and evidence/data-based decision making. This role will work across multiple channels and teams to drive tangible results in the organization. You will focus on developing metrics for multiple channels and markets, applying advanced statistical modeling where appropriate, and pioneering new analytical methods in a variety of fast-paced and rapidly evolving consumer channels. This high-visibility position will work with multiple levels of the organization, including senior leadership, to bring analytical capabilities to the forefront of pricing, rate setting, and optimization of our go-to-market offers. You will contribute to rapidly evolving UPS Marketing analytical capabilities by working amongst a collaborative team of Data Scientists, Analysts, and multiple business stakeholders.

Responsibilities:
- Become a subject matter expert on UPS business processes, data, and analytical capabilities to help define and solve business needs using data and advanced statistical methods
- Analyze and extract insights from large-scale structured and unstructured data utilizing multiple platforms and tools
- Understand and apply appropriate methods for cleaning and transforming data
- Work across multiple stakeholders to develop, maintain, and improve models in production
- Take the initiative to create and execute analyses in a proactive manner
- Deliver complex analyses and visualizations to broader audiences, including upper management and executives
- Deliver analytics and insights to support strategic decision making
- Understand the application of AI/ML, when appropriate, to solve complex business problems

Qualifications
- Expertise in R, SQL, and Python
- Strong analytical skills and attention to detail
- Able to engage key business and executive-level stakeholders to translate business problems into a high-level analytics solution approach
- Expertise with statistical techniques, machine learning, or operations research and their application to business problems
- Deep understanding of data management pipelines and experience launching moderate-scale advanced analytics projects in production at scale
- Proficient in Azure and Google Cloud environments
- Experience implementing open-source technologies and cloud services, with or without the use of enterprise data science platforms
- Solid oral and written communication skills, especially around analytical concepts and methods
- Ability to communicate data through a story framework to convey data-driven results to technical and non-technical audiences
- Master's degree in a quantitative field such as mathematics, computer science, physics, economics, engineering, or statistics (operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience

Bonus Qualifications
- Experience with pricing methodologies and revenue management
- Experience using PySpark, Azure Databricks, Google BigQuery, and Vertex AI
- Experience creating and implementing NLP/LLM projects
- Experience utilizing and applying neural networks and other AI methodologies
- Familiarity with data architecture and engineering
Posted 1 month ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad, Secunderabad
Work from Office
- Proficiency in SQL, Python, and data pipeline frameworks such as Apache Spark, Databricks, or Airflow
- Hands-on experience with cloud data platforms (e.g., Azure Synapse, AWS Redshift, Google BigQuery)
- Strong understanding of data modeling, ETL/ELT, and data lake/warehouse/data mart architectures
- Knowledge of Azure Data Factory or AWS Glue
- Experience developing reports and dashboards using tools like Power BI, Tableau, or Looker
Posted 1 month ago