
3678 Redshift Jobs - Page 17

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 years

7 - 8 Lacs

Hyderābād

On-site

Full-time | Employee Status: Regular | Role Type: Hybrid | Department: Analytics | Schedule: Full Time

Company Description
Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.

Job Description
The Senior Data Engineer is responsible for designing, developing, and supporting ETL data pipeline solutions, primarily in an AWS environment:
- Design, develop, and maintain scalable ETL processes to deliver meaningful insights from large and complicated data sets.
- Work as part of a team to build out and support the data warehouse; implement solutions using PySpark to process structured and unstructured data.
- Play a key role in building out a semantic layer through development of ETLs and virtualized views.
- Collaborate with engineering teams to discover and leverage new data being introduced into the environment.
- Support existing ETL processes written in SQL or leveraging third-party APIs with Python; troubleshoot and resolve production issues.
- Use strong SQL and data skills to understand and troubleshoot existing complex SQL.
- Orchestrate data pipelines hands-on with Apache Airflow or equivalent tools such as AWS MWAA (a minimal DAG sketch follows this listing).
- Create and maintain report specifications and process documentation as part of the required data deliverables.
- Serve as liaison with business and technical teams to achieve project objectives, delivering cross-functional reporting solutions.
- Troubleshoot and resolve data, system, and performance issues.
- Communicate with business partners, other technical teams and management to collect requirements, articulate data deliverables, and provide technical designs.

Qualifications
- Completed graduation (BE/BTech).
- 6 to 9 years of experience in data engineering development.
- 5 years of experience in Python scripting.
- 8 years of experience in SQL, 5+ years in data warehousing, 5 years in Agile, and 3 years with cloud.
- 3 years of experience with the AWS ecosystem (Redshift, EMR, S3, MWAA).
- 5 years of experience in Agile development methodology.
- Willingness to work with the team to create solutions.
- Proficiency in CI/CD tools (Jenkins, GitLab, etc.).

Additional Information
Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning: World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer.
Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. #LI-Onsite

Benefits
Experian cares for employees' work-life balance, health, safety and wellbeing. In support of this endeavour, we offer the best family well-being benefits, enhanced medical benefits and paid time off.

Experian Careers - Creating a better tomorrow together
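As an illustration of the Airflow/MWAA orchestration this role describes, here is a minimal DAG sketch. The DAG ID, task names, and callables are hypothetical placeholders, not Experian's actual pipeline.

```python
# Minimal Airflow DAG sketch; dag_id, task names, and callables are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_to_s3():
    """Placeholder: pull data from a source API or database and stage it in S3."""


def load_to_redshift():
    """Placeholder: COPY the staged files from S3 into Redshift."""


with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)
    extract >> load  # load runs only after the extract task succeeds
```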

Posted 1 week ago

Apply

7.0 years

10 - 24 Lacs

Hyderābād

On-site

Data Engineer
Location: Hyderabad
Skills: Azure cloud services, Azure Cosmos DB, Python, Redshift, T-SQL, Azure Synapse, Databricks, Data Factory
Immediate joiner; 7+ years of experience
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹2,400,000.00 per year
Schedule: Day shift / Morning shift
Application Question(s): Notice period? Current salary? Expected salary? Current location? How much experience do you have with Azure cloud services, Azure Cosmos DB, Python, Redshift, T-SQL, Azure Synapse, Databricks, and Data Factory?
Work Location: In person

Posted 1 week ago

Apply

2.0 years

6 - 8 Lacs

Hyderābād

Remote

- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
- Experience with one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g., t-test, chi-squared)
- Experience with a scripting language (e.g., Python, Java, or R)
- Experience applying basic statistical methods (e.g., regression) to difficult business problems

Want to join Earth's most customer-centric company? Do you like to dive deep to understand problems? Are you someone who likes to challenge the status quo? Do you strive to excel at the goals assigned to you? If yes, we have opportunities for you. Global Operations – Artificial Intelligence (GO-AI) at Amazon is looking to hire candidates who can excel in a fast-paced, dynamic environment. Are you somebody who likes to use and analyze big data to drive business decisions? Do you enjoy converting data into insights that will be used to enhance customer decisions worldwide for business leaders? Do you want to be part of the data team which measures the pulse of innovative machine-vision-based projects? If your answer is yes, join our team. GO-AI is looking for a motivated individual with strong skills and experience in resource utilization planning, process optimization and execution of scalable and robust operational mechanisms to join the GO-AI Ops DnA team. In this position you will be responsible for supporting our sites to build solutions for the rapidly expanding GO-AI team. The role requires the ability to work with a variety of key stakeholders across job functions and multiple sites. We are looking for an entrepreneurial and analytical program manager who is passionate about their work, understands how to manage service levels across multiple skills/programs, and is willing to move fast and experiment often.

Key job responsibilities
• Maintain and refine straightforward ETL; write secure, stable, testable, maintainable code with minimal defects; and automate manual processes.
• Use one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, Power BI) and, as needed, statistical methods such as the t-test and chi-squared test to deliver actionable insights to stakeholders (a short SciPy illustration follows this listing).
• Build and own small to mid-size BI solutions with high accuracy and on-time delivery, using data sets, queries, reports, dashboards, analyses or components of larger solutions to answer straightforward business questions with data, incorporating business intelligence best practices, data management fundamentals and analysis principles.
• Develop a good understanding of the relevant data lineage: sources of data; how metrics are aggregated; and how the resulting business intelligence is consumed, interpreted and acted upon by the business, so that the end product enables effective, data-driven business decisions.
• Take responsibility for the code, queries, reports and analyses that are inherited or produced, and have analyses and code reviewed periodically.
• Partner effectively with peer BIEs and others on your team to troubleshoot, research root causes and propose solutions, either taking ownership of their resolution or ensuring a clear hand-off to the right owner.
About the team
The Global Operations – Artificial Intelligence (GO-AI) team is an initiative which remotely handles exceptions in Amazon Robotic Fulfillment Centers globally. GO-AI seeks to complement automated vision-based decision-making technologies by providing remote human support for the subset of tasks which require higher cognitive ability and cannot be processed through automated decision making with high confidence. This team provides end-to-end solutions through inbuilt competencies of Operations and strong central specialized teams to deliver programs at Amazon scale. It is operating multiple programs including Nike IDS, Proteus, Sparrow and other new initiatives in partnership with global technology and operations teams.

Preferred qualifications: Master's degree or advanced technical degree; experience with statistical analysis and correlation analysis; knowledge of how to improve code quality and optimize BI processes (e.g., speed, cost, reliability); excellence in technical communication with peers, partners and non-technical cohorts.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
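Since the posting names specific statistical methods (t-test, chi-squared), here is a hedged SciPy illustration; the samples and the contingency table are made-up numbers, not Amazon data.

```python
# Illustrative only: made-up samples and counts, not real operational data.
from scipy import stats

# Hypothetical handle times (seconds) for two operator cohorts.
cohort_a = [12.1, 11.8, 13.0, 12.4, 11.5, 12.9]
cohort_b = [13.2, 13.8, 12.9, 14.1, 13.5, 13.9]
t_stat, p_value = stats.ttest_ind(cohort_a, cohort_b)
print(f"t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

# Chi-squared test on a hypothetical pass/fail-by-site contingency table.
chi2, p, dof, expected = stats.chi2_contingency([[120, 30], [95, 55]])
print(f"chi-squared: chi2 = {chi2:.2f}, p = {p:.4f}")
```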

Posted 1 week ago

Apply

6.0 years

8 - 23 Lacs

Hyderābād

On-site

Position: Data Engineer
Experience: 6-8 years
Location: Hyderabad, INDIA
Budget: open, based on interview
Stable work history preferred (not many job switches); candidates from reputed colleges (e.g., IIM or IIT) preferred
SaaS product experience is a must
MongoDB is mandatory: good understanding of database systems (SQL and NoSQL); must have comprehensive experience in MongoDB or any other document DB (a short illustration follows this listing).

Responsibilities:
- Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform
- Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing
- Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing
- Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions
- Design and implement data warehouse solutions that support analytical needs and machine learning applications
- Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features
- Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability
- Optimize query performance across various database systems through indexing, partitioning, and query refactoring
- Develop and maintain documentation for data models, pipelines, and processes
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs
- Stay current with emerging technologies and best practices in data engineering

Requirements:
- 6+ years of experience in data engineering or related roles with a proven track record of building data pipelines and infrastructure
- Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL
- Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB
- Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies
- Experience with data warehousing concepts and technologies
- Solid understanding of data modeling principles and best practices for both operational and analytical systems
- Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning
- Experience with AWS data services such as RDS, Redshift, S3, Glue, Kinesis, and the ELK stack
- Proficiency in at least one programming language (Python, Node.js, Java)
- Experience with version control systems (Git) and CI/CD pipelines
- Bachelor's degree in Computer Science, Engineering, or a related field

Preferred Qualifications:
- Experience with graph databases (Neo4j, Amazon Neptune)
- Knowledge of big data technologies such as Hadoop, Spark, Hive, and data lake architectures
- Experience working with streaming data technologies and real-time data processing
- Familiarity with data governance and data security best practices
- Experience with containerization technologies (Docker, Kubernetes)
- Understanding of financial back-office operations and the FinTech domain
- Experience working in a high-growth startup environment

Job Type: Permanent
Pay: ₹862,603.66 - ₹2,376,731.02 per year
Benefits: Health insurance, Provident Fund
Supplemental Pay: Performance bonus, Yearly bonus
Experience: ETL: 7 years (Preferred); Hadoop: 1 year (Preferred)
Work Location: In person
Application Deadline: 27/07/2025
Expected Start Date: 25/07/2025
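Because the listing treats MongoDB as mandatory, here is a short PyMongo sketch of the indexing and aggregation work it describes; the connection URI, database, collection, and field names are assumptions for illustration.

```python
# Hypothetical URI, database, collection, and fields; illustration only.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["app_db"]["orders"]

# Indexing strategy: support frequent lookups by customer and recency.
orders.create_index([("customer_id", ASCENDING), ("created_at", ASCENDING)])

# Pre-aggregate daily totals, the kind of shaping an ETL into a warehouse does.
daily_totals = orders.aggregate([
    {"$match": {"status": "complete"}},
    {"$group": {
        "_id": {"$dateToString": {"format": "%Y-%m-%d", "date": "$created_at"}},
        "revenue": {"$sum": "$amount"},
        "orders": {"$sum": 1},
    }},
    {"$sort": {"_id": 1}},
])
for row in daily_totals:
    print(row)
```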

Posted 1 week ago

Apply

2.0 years

6 - 8 Lacs

Hyderābād

Remote

DESCRIPTION
Want to join Earth's most customer-centric company? Do you like to dive deep to understand problems? Are you someone who likes to challenge the status quo? Do you strive to excel at the goals assigned to you? If yes, we have opportunities for you. Global Operations – Artificial Intelligence (GO-AI) at Amazon is looking to hire candidates who can excel in a fast-paced, dynamic environment. Are you somebody who likes to use and analyze big data to drive business decisions? Do you enjoy converting data into insights that will be used to enhance customer decisions worldwide for business leaders? Do you want to be part of the data team which measures the pulse of innovative machine-vision-based projects? If your answer is yes, join our team. GO-AI is looking for a motivated individual with strong skills and experience in resource utilization planning, process optimization and execution of scalable and robust operational mechanisms to join the GO-AI Ops DnA team. In this position you will be responsible for supporting our sites to build solutions for the rapidly expanding GO-AI team. The role requires the ability to work with a variety of key stakeholders across job functions and multiple sites. We are looking for an entrepreneurial and analytical program manager who is passionate about their work, understands how to manage service levels across multiple skills/programs, and is willing to move fast and experiment often.

Key job responsibilities
- Maintain and refine straightforward ETL; write secure, stable, testable, maintainable code with minimal defects; and automate manual processes.
- Use one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, Power BI) and, as needed, statistical methods (e.g., t-test, chi-squared) to deliver actionable insights to stakeholders.
- Build and own small to mid-size BI solutions with high accuracy and on-time delivery, using data sets, queries, reports, dashboards, analyses or components of larger solutions to answer straightforward business questions with data, incorporating business intelligence best practices, data management fundamentals and analysis principles.
- Develop a good understanding of the relevant data lineage: sources of data; how metrics are aggregated; and how the resulting business intelligence is consumed, interpreted and acted upon by the business, so that the end product enables effective, data-driven business decisions.
- Take responsibility for the code, queries, reports and analyses that are inherited or produced, and have analyses and code reviewed periodically.
- Partner effectively with peer BIEs and others on your team to troubleshoot, research root causes and propose solutions, either taking ownership of their resolution or ensuring a clear hand-off to the right owner.

About the team
The Global Operations – Artificial Intelligence (GO-AI) team is an initiative which remotely handles exceptions in Amazon Robotic Fulfillment Centers globally. GO-AI seeks to complement automated vision-based decision-making technologies by providing remote human support for the subset of tasks which require higher cognitive ability and cannot be processed through automated decision making with high confidence. This team provides end-to-end solutions through inbuilt competencies of Operations and strong central specialized teams to deliver programs at Amazon scale.
It is operating multiple programs including Nike IDS, Proteus, Sparrow and other new initiatives in partnership with global technology and operations teams.

BASIC QUALIFICATIONS
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. (a brief query sketch follows this listing)
- Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
- Experience with one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g., t-test, chi-squared)
- Experience with a scripting language (e.g., Python, Java, or R)
- Experience applying basic statistical methods (e.g., regression) to difficult business problems

PREFERRED QUALIFICATIONS
- Master's degree or advanced technical degree
- Experience with statistical analysis and correlation analysis
- Knowledge of how to improve code quality and optimize BI processes (e.g., speed, cost, reliability)
- Excellence in technical communication with peers, partners and non-technical cohorts

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Job details: IND, TS, Hyderabad | Corporate Operations
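For the "analyzing and interpreting data with Redshift" qualification, here is a hedged sketch of a typical analysis query run from Python. Redshift is queryable over the PostgreSQL wire protocol; the cluster endpoint, schema, table, and columns are invented.

```python
# Hypothetical cluster endpoint, schema, table, and columns; sketch only.
import os

import psycopg2  # Redshift accepts PostgreSQL-protocol connections

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="analyst",
    password=os.environ["REDSHIFT_PASSWORD"],
)
query = """
    SELECT site_id,
           DATE_TRUNC('week', processed_at) AS week,
           COUNT(*)                         AS exceptions_handled,
           AVG(handle_seconds * 1.0)        AS avg_handle_seconds
    FROM   go_ai.exception_events
    GROUP  BY 1, 2
    ORDER  BY 1, 2
"""
with conn, conn.cursor() as cur:
    cur.execute(query)
    for site_id, week, n, avg_s in cur.fetchall():
        print(site_id, week, n, round(avg_s, 1))
```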

Posted 1 week ago

Apply

3.0 years

6 - 8 Lacs

Hyderābād

On-site

DESCRIPTION
As part of the AWS Solutions organization, we have a vision to provide business applications, leveraging Amazon's unique experience and expertise, that are used by millions of companies worldwide to manage day-to-day operations. We will accomplish this by accelerating our customers' businesses through delivery of intuitive and differentiated technology solutions that solve enduring business challenges. We blend vision with curiosity and Amazon's real-world experience to build opinionated, turnkey solutions. Where customers prefer to buy over build, we become their trusted partner with solutions that are no-brainers to buy and easy to use. We're trying to optimize the shopping experience for Amazon's customers in the physical retail space. This role will be a key member of the core Analytics team located in Hyderabad. The successful candidate will be a self-starter comfortable with ambiguity, with strong attention to detail, and able to work in a fast-paced, high-energy and ever-changing environment. The drive and capability to shape the business group strategy is a must.

Key job responsibilities
- Analyze and visualize transaction data to determine customer behaviors, and produce solid analysis reports with recommendations
- Design and drive experiments to form actionable recommendations; present recommendations to business leaders, drive decisions, and manage implementation of those recommendations
- Develop metrics that support product category growth and expansion plans
- Serve as liaison between the business and technical teams to provide actionable insights into current business performance, plus ad hoc investigations into future improvements or innovations; this will require data gathering and manipulation, synthesis and modeling, problem solving, and communication of insights and recommendations

About the team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture: Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth: We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
BASIC QUALIFICATIONS
- Bachelor's degree in BI, finance, engineering, statistics, computer science, mathematics or an equivalent quantitative field
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Advanced skills in Excel as well as data visualization tools like Tableau or similar BI tools (familiarity with Tableau preferred)

PREFERRED QUALIFICATIONS
- Master's degree in BI, finance, engineering, statistics, computer science, mathematics or an equivalent quantitative field
- Knowledge of SQL and data warehousing concepts

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Job details: IND, TS, Hyderabad | Business Intelligence

Posted 1 week ago

Apply

3.0 - 7.0 years

2 - 10 Lacs

India

Remote

Job Title: ETL Automation Tester (SQL, Python, Cloud)
Location: [On-site / Remote / Hybrid – City, State or “Anywhere, USA”]
Employment Type: [Full-time / Contract / C2C / Part-time]
NOTE: Candidate has to work US night shifts.

Job Summary:
We are seeking a highly skilled ETL Automation Tester with expertise in SQL and Python scripting, and experience working with cloud technologies such as Azure, AWS, or GCP. The ideal candidate will be responsible for designing and implementing automated testing solutions to ensure the accuracy, performance, and reliability of ETL pipelines and data integration processes.

Key Responsibilities:
- Design and implement test strategies for ETL processes and data pipelines.
- Develop automated test scripts using Python and integrate them into CI/CD pipelines (a short sketch follows this listing).
- Validate data transformations and data integrity across source, staging, and target systems.
- Write complex SQL queries for test data creation, validation, and result comparison.
- Perform cloud-based testing on platforms such as Azure Data Factory, AWS Glue, or GCP Dataflow/BigQuery.
- Collaborate with data engineers, analysts, and DevOps teams to ensure seamless data flow and test coverage.
- Log, track, and manage defects through tools like JIRA, Azure DevOps, or similar.
- Participate in performance and volume testing for large-scale datasets.

Required Skills and Qualifications:
- 3–7 years of experience in ETL/data warehouse testing.
- Strong hands-on experience in SQL (joins, CTEs, window functions, aggregation).
- Proficient in Python for automation scripting and data manipulation.
- Solid understanding of ETL tools such as Informatica, Talend, SSIS, or custom Python-based ETL.
- Experience with at least one cloud platform: Azure (Data Factory, Synapse, Blob Storage), AWS (Glue, Redshift, S3), or GCP (Dataflow, BigQuery, Cloud Storage).
- Familiarity with data validation, data quality, and data profiling techniques.
- Experience with CI/CD tools such as Jenkins, GitHub Actions, or Azure DevOps.
- Excellent problem-solving, communication, and documentation skills.

Preferred Qualifications:
- Knowledge of Apache Airflow, PySpark, or Databricks.
- Experience with containerization (Docker) and orchestration tools (Kubernetes).
- ISTQB or similar testing certification.
- Familiarity with Agile methodologies and Scrum ceremonies.

Job Types: Part-time, Contractual / Temporary, Freelance
Contract length: 6 months
Pay: ₹18,074.09 - ₹86,457.20 per month
Expected hours: 40 per week
Benefits: Work from home
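As a hedged sketch of the automated reconciliation testing these responsibilities describe, here is a pytest example comparing row counts between staging and target; the connection strings and table names are placeholders, not a specific client environment.

```python
# Placeholders throughout: DSNs and table names are not a real environment.
import psycopg2
import pytest


def row_count(conn, table: str) -> int:
    with conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")  # tables are trusted constants here
        return cur.fetchone()[0]


@pytest.fixture
def connections():
    src = psycopg2.connect("host=source-db dbname=stage user=qa")  # hypothetical DSN
    tgt = psycopg2.connect("host=target-dw dbname=dw user=qa")     # hypothetical DSN
    yield src, tgt
    src.close()
    tgt.close()


def test_orders_row_counts_match(connections):
    """After an ETL run, staging and target should hold the same number of rows."""
    src, tgt = connections
    assert row_count(src, "staging.orders") == row_count(tgt, "dw.fact_orders")
```

In practice the same pattern extends beyond counts to column-level checksums or sampled value comparisons, which catch transformation defects that row counts alone miss.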

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Hello everyone, we are #hiring!
Position: Data Engineer
Experience: 6-8 years
Location: Hyderabad, INDIA

🧠 MongoDB – Mandatory. Good understanding of database systems (SQL and NoSQL); must have comprehensive experience in MongoDB or any other document DB.

🔷 Responsibilities:
▪️ Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform
▪️ Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing
▪️ Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing
▪️ Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions
▪️ Design and implement data warehouse solutions that support analytical needs and machine learning applications
▪️ Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features
▪️ Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability
▪️ Optimize query performance across various database systems through indexing, partitioning, and query refactoring
▪️ Develop and maintain documentation for data models, pipelines, and processes
▪️ Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs
▪️ Stay current with emerging technologies and best practices in data engineering

🔷 Requirements:
▫️ 6+ years of experience in data engineering or related roles with a proven track record of building data pipelines and infrastructure
▫️ Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL
▫️ Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB
▫️ Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies
▫️ Experience with data warehousing concepts and technologies
▫️ Solid understanding of data modeling principles and best practices for both operational and analytical systems
▫️ Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning
▫️ Experience with AWS data services such as RDS, Redshift, S3, Glue, Kinesis, and the ELK stack

📬 How to Apply:
📩 Email your updated resume to: sandhyarani.p@nybinfotech.com

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Experience: 5 to 7 years
Location: Bengaluru, Gurgaon, Pune

About Us:
AceNet Consulting is a fast-growing global business and technology consulting firm specializing in business strategy, digital transformation, technology consulting, product development, start-up advisory and fund-raising services to our global clients across banking & financial services, healthcare, supply chain & logistics, consumer retail, manufacturing, eGovernance and other industry sectors. We are looking for hungry, highly skilled and motivated individuals to join our dynamic team. If you’re passionate about technology and thrive in a fast-paced environment, we want to hear from you.

Job Summary:
We are seeking an experienced and motivated Data Engineer with a strong background in Python, PySpark, and SQL to join our growing data engineering team. The ideal candidate will have hands-on experience with cloud data platforms, data modelling, and a proven track record of building and optimising large-scale data pipelines in agile environments.

Key Responsibilities:
*Design, develop, and maintain robust data pipelines using Python, PySpark, and SQL (a brief sketch follows this listing).
*Apply a strong understanding of data modelling.
*Use code management tools such as Git and GitHub.
*Apply query performance tuning and optimisation techniques.

Role Requirements and Qualifications:
*5+ years' experience as a data engineer in complex data ecosystems.
*Extensive experience working in an agile environment.
*Experience with cloud data platforms like AWS Redshift and Databricks.
*Excellent problem-solving and communication skills.

Why Join Us:
*Opportunities to work on transformative projects, cutting-edge technology and innovative solutions with leading global firms across industry sectors.
*Continuous investment in employee growth and professional development with a strong focus on up- and re-skilling.
*Competitive compensation & benefits, ESOPs and international assignments.
*Supportive environment with healthy work-life balance and a focus on employee well-being.
*Open culture that values diverse perspectives, encourages transparent communication and rewards contributions.
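Here is a brief, hedged PySpark sketch of the pipeline work the responsibilities above describe; the S3 paths, columns, and aggregation are illustrative assumptions, not AceNet's actual pipeline.

```python
# Illustrative paths and columns; not a specific client pipeline.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Ingest raw files and enforce types early.
orders = (
    spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("status") == "complete")
)

# Aggregate to the grain downstream consumers need.
daily = (
    orders.groupBy(F.to_date("created_at").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

# Write a curated layer that BI tools or Redshift Spectrum could read.
daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_orders/")
```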

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 20 Lacs

Bengaluru

Hybrid

Role & responsibilities
- 7 years of experience in modeling and business system design.
- 5 years of hands-on experience in SQL and Informatica ETL development is a must.
- 3 years of Redshift or Oracle (or comparable database) experience with BI/DW deployments.
- Proven experience with STAR and SNOWFLAKE schema techniques is a must (a brief illustration follows this listing).
- A minimum of 1 year of development experience in Python scripting is mandatory; Unix scripting is an added advantage.
- Proven track record as an ETL developer delivering successful business intelligence developments with complex data sources.
- Strong analytical skills and enjoyment of solving complex technical problems.
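To illustrate the STAR schema technique the listing asks for, here is a minimal dimensional model as SQL DDL held in a Python string; the table and column names are invented, generic examples.

```python
# Invented tables and columns; a generic dimensional-model illustration.
STAR_SCHEMA_DDL = """
CREATE TABLE dim_date (
    date_key     INTEGER PRIMARY KEY,   -- e.g. 20250131
    full_date    DATE,
    fiscal_month VARCHAR(10)
);

CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name VARCHAR(100),
    category     VARCHAR(50)            -- a SNOWFLAKE design would move this
                                        -- into its own dim_category table
);

CREATE TABLE fact_sales (
    date_key     INTEGER REFERENCES dim_date (date_key),
    product_key  INTEGER REFERENCES dim_product (product_key),
    units_sold   INTEGER,
    revenue      DECIMAL(12, 2)
);
"""
```

The star form keeps dimensions denormalized for simple joins and fast BI queries; snowflaking trades that simplicity for reduced redundancy in the dimension tables.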

Posted 1 week ago

Apply

4.0 years

15 - 25 Lacs

India

Remote

This role is for one of our clients.
Industry: Business Analyst | Seniority level: Mid-Senior | Min Experience: 4 years | Location: Remote (India) | Job Type: Full-time

About The Role
We are on the lookout for a forward-thinking analytics leader who thrives at the intersection of data, strategy, and impact. As our Lead Data & Analytics Strategist, you’ll head a high-performing team to design and deliver transformative insights that shape client decisions, marketing outcomes, and growth trajectories. This is a hands-on, high-ownership role focused on creating data-driven frameworks, building intelligent systems, and driving adoption of advanced analytics across clients in a digital-first ecosystem.

Key Responsibilities
Strategic Leadership in Analytics: Translate business objectives into scalable, insight-rich analytics strategies. Partner with senior stakeholders to co-create long-term data roadmaps. Lead analytics workshops and discovery sessions with clients.
Team Management & Capability Building: Mentor and upskill a team of analysts and data scientists. Promote best practices in modeling, data integrity, storytelling, and tooling. Set the benchmark for technical rigor, project accountability, and innovation.
Advanced Analytics and Modeling: Design and deploy predictive models, customer segmentation, and optimization engines (a segmentation sketch follows this listing). Lead initiatives like market mix modeling, incrementality testing, uplift modeling, and LTV forecasting. Integrate machine learning into business workflows with measurable impact.
Data Infrastructure & Governance: Establish standards for data validation, transformation, and governance. Drive adoption of cloud-based architecture (BigQuery, Redshift, Snowflake). Collaborate with engineering to enhance ETL and data pipeline reliability.
Client Advisory & Delivery Excellence: Serve as the analytics point of contact for key accounts. Build tailored dashboards, visual narratives, and data products for decision-makers. Ensure on-time, high-quality delivery of projects with tangible ROI.
Continuous Innovation & Research: Stay ahead of analytics, martech, and data science trends. Experiment with emerging tools, from GenAI to real-time data engines. Advocate for a test-and-learn mindset across teams.

What We’re Looking For
- 4–6+ years of experience in analytics, data science, or marketing analytics roles
- Prior experience leading teams or owning end-to-end analytics functions
- Proficiency in Python, SQL, and data modeling techniques
- Deep familiarity with ML frameworks (e.g., scikit-learn, TensorFlow)
- Visualization fluency in Tableau, Power BI, or similar tools
- Cloud-native mindset (experience with GCP/BigQuery, AWS/Redshift, etc.)
- Bonus: exposure to Airflow, Snowflake, or data engineering pipelines
- Strong communicator, capable of presenting technical work to non-technical stakeholders
- Strategic thinker with hands-on ability to execute
- Deep understanding of marketing and media analytics is a big plus
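As a hedged illustration of the customer-segmentation work named in the responsibilities, here is a small scikit-learn sketch; the RFM-style features and the cluster count are assumptions for demonstration.

```python
# Made-up RFM-style features; cluster count chosen arbitrarily for illustration.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

customers = pd.DataFrame({
    "recency_days": [5, 40, 3, 120, 60, 8],
    "frequency":    [12, 2, 20, 1, 3, 15],
    "monetary":     [800, 90, 1500, 40, 120, 950],
})

# Standardize so no single feature dominates the distance metric.
X = StandardScaler().fit_transform(customers)
customers["segment"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(customers)
```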

Posted 1 week ago

Apply

5.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Job Title: Data Engineer
Website: https://www.issc.co.in
Location: Udyog Vihar, Phase-V, Gurugram
Job Type: Full-Time
Employment Type: 5 days working (no hybrid/work from home)
Compensation: As per industry standards

Company Overview:
With the world constantly and rapidly changing, the future will be full of realigned priorities. You are keen to strengthen your firm's profitability and reputation by retaining existing clients and winning more in the market. We at ISSC have the right resources to ensure your team has access to the right skills to deliver effective assurance and IT advisory whilst you build and scale your team onshore to meet clients' broader assurance needs. By offshoring part of the routine and less complex auditing work to ISSC, you will free up capacity in your own organization which can be utilized in areas requiring more face time with your clients, including your quest to win new clients. Having the right team on your side at ISSC will be vital as you follow your exciting growth plans, and it is in this role your ISSC team stands apart. We offer a compelling case for becoming your key partner for the future.

Position Summary:
We are seeking a skilled and detail-oriented Data Engineer to join our team. As a Data Engineer, you will be responsible for developing and optimizing data pipelines, managing data architecture, and ensuring the data is easily accessible, reliable, and secure. You will work closely with data scientists, analysts, and other stakeholders to gather requirements and deliver data solutions that support business intelligence and analytics initiatives. The ideal candidate should possess strong data manipulation skills, a keen eye for detail, and the ability to work with diverse datasets. This role plays a crucial part in ensuring the quality and integrity of our data, enabling informed decision-making across the organization.

Responsibilities:
Data Pipeline Development: Design, develop, and maintain scalable data pipelines to process, transform, and move large datasets across multiple platforms. Ensure data integrity, reliability, and quality across all pipelines.
Data Architecture and Infrastructure: Architect and manage the data infrastructure, including databases, warehouses, and data lakes. Implement solutions to optimize storage and retrieval of both structured and unstructured data.
Data Integration and Management: Integrate data from various sources (e.g., APIs, databases, third-party providers) into a unified system. Manage ETL (Extract, Transform, Load) processes to clean, enrich, and make data ready for analysis (a compact illustration follows this listing).
Data Security and Compliance: Ensure data governance, privacy, and compliance with security standards (e.g., GDPR, HIPAA). Implement robust access controls and encryption protocols.
Collaboration: Work closely with data scientists, analysts, and business stakeholders to gather requirements and deliver high-performance data solutions. Collaborate with DevOps and software engineering teams to deploy and maintain the data infrastructure in a cloud or on-premises environment.
Performance Tuning: Monitor and improve the performance of databases and data pipelines to ensure low-latency data availability. Troubleshoot and resolve issues in the data infrastructure.
Documentation and Best Practices: Maintain detailed documentation of data pipelines, architecture, and processes. Follow industry best practices for data engineering, including version control and continuous integration.
Skills/Requirements:
Technical Skills: Proficiency in programming languages such as Python and SQL. Good experience with big data technologies like Apache Spark, Hadoop, Kafka, Flink, etc. Experience with cloud data platforms (AWS, Azure). Familiarity with databases (SQL and NoSQL), data warehousing solutions (e.g., Snowflake, Redshift), and ETL tools (e.g., Airflow, Talend).
Data Modeling and Database Design: Expertise in designing data models and relational database schemas.
Problem-Solving: Strong analytical and problem-solving skills, with the ability to handle complex data issues.
Version Control and Automation: Experience with CI/CD pipelines and version control tools like Git.

Professional Qualifications:
• 5-6 years of relevant experience.
• BTech in Statistics, Information Technology, or a related field.

Other Benefits:
• Free meal
• 1 happy hour every week
• 3 offsites a year
• 1 spa session every week
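A compact, hedged sketch of the extract-transform-load flow the responsibilities describe, using pandas and SQLAlchemy; the API URL, connection string, and staging table are placeholders.

```python
# Placeholders: API URL, connection string, and table name are not real.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@warehouse-host/dw")

# Extract: pull raw records from a hypothetical JSON API.
raw = pd.read_json("https://api.example.com/v1/transactions")

# Transform: enforce types, drop incomplete and duplicate records.
clean = (
    raw.dropna(subset=["transaction_id"])
       .assign(amount=lambda d: pd.to_numeric(d["amount"], errors="coerce"))
       .drop_duplicates(subset="transaction_id")
)

# Load: land the cleaned batch in a staging table for downstream modeling.
clean.to_sql("stg_transactions", engine, if_exists="replace", index=False)
```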

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Hello everyone, greetings from Creditsafe Technology. Please find the JD below for your reference.

About us
Creditsafe is the most used business data provider in the world, reducing risk and maximizing opportunities for our 110,000 business customers. Our journey began in Oslo, Norway in 1997, where we had a dream of using the then-revolutionary internet to deliver instant-access company credit reports to small and medium-sized businesses. Creditsafe realized this dream and changed the market for the better for businesses of all sizes. From there, we opened 15 more offices throughout Europe, the USA and Asia. We provide data on more than 300 million companies and provide customer notifications for billions of changes annually. We are a high-growth company offering the freedom and flexibility of a start-up type culture due to the continuous innovation and new product development performed, coupled with the stability of being a profitable and growing company! With such a large customer base and breadth of data and analytics technology, you will have real opportunities to help companies survive and thrive in challenging times by reducing business risk and choosing trustworthy customers and suppliers.

About the Opportunity:
We are looking for a Test Engineer who will become part of our team building and testing Creditsafe data. You will work closely with the database and data engineering teams to build specific systems facilitating the extraction and transformation of Creditsafe data. Based on the test strategy and approach, you will develop, enhance and execute tests that add value to Creditsafe data. You will act as a primary source of guidance to Junior Test Engineers and Test Engineers in all areas of data quality. You will contribute to the team using data quality best practices and techniques. You can confidently communicate test results with your team members and stakeholders using evidence and reports. You act as a mentor and coach to the less experienced members of the test team. You will promote and coach leading practices in data test management, design, and implementation. You will be part of an Agile team and will effectively contribute to the ceremonies, acting as the quality specialist within that team. You are an influencer and will provide leadership in defining and implementing agreed standards, and will actively promote this within your team and the wider development community. The ideal candidate has extensive experience in mentorship and leading by example, and is able to communicate values consistent with the Creditsafe philosophy of engagement. You have critical thinking skills and can diplomatically communicate within, and outside, your areas of responsibility, challenging assumptions where required.

Required Skills:
- Proven working experience as a data test engineer, business data analyst or ETL tester
- Technical expertise regarding data models, database design and development, data mining and segmentation techniques
- Strong knowledge of and experience with SQL databases
- Hands-on experience of best engineering practices (handling and logging errors, system monitoring and building human-fault-tolerant applications)
- Knowledge of statistics and experience using statistical packages for analysing datasets (Excel, SPSS, SAS, etc.) is an advantage
- Comfortable working with relational databases such as Redshift, Oracle, PostgreSQL, MySQL, and MariaDB (PostgreSQL preferred)
- Strong analytical skills with the ability to collect, organise, analyse, and disseminate significant amounts of information with attention to detail and accuracy
- Adept at queries, report writing and presenting findings
- BS in Mathematics, Economics, Computer Science, Information Management or Statistics is desirable but not essential
- A good understanding of cloud technology, preferably AWS and/or Azure DevOps
- A practical understanding of programming: JavaScript, Python
- Excellent communication skills
- Practical experience of testing in an Agile approach

Primary Responsibilities:
- Reports to the Engineering Lead
- Work as part of the engineering team in data acquisition
- Design and implement processes and tools to monitor and improve the quality of Creditsafe's data (a small quality-report sketch follows this listing)
- Develop and execute test plans to verify the accuracy and reliability of data
- Work with data analysts and other stakeholders to establish and maintain data governance policies
- Identify and resolve issues with the data, such as errors, inconsistencies, or duplication
- Collaborate with other teams, such as data analysts and data scientists, to ensure the quality of data used for various projects and initiatives
- Provide training and guidance to other team members on data quality best practices and techniques
- Monitor and report on key data quality metrics, such as data completeness and accuracy
- Continuously improve data quality processes and tools based on feedback and analysis
- Work closely with your Agile team to promote a whole-team approach to quality
- Document approaches and processes that improve the quality effort for use by team members and the wider test function
- Apply strong practical knowledge of software testing techniques; advise on, and select, the correct technique depending on the problem at hand
- Conduct analysis of the team's test approach, taking a proactive role in the formulation of the relevant quality criteria in line with team goals
- Work with team members to define standards and processes applicable to their area of responsibility
- Monitor progress of team deliverables, injecting quality concerns in a timely, effective manner
- Gain a sufficient understanding of the system architecture to inform your test approach and that of the test engineers
- Create and maintain concise and accurate defect reports in line with the established defect process

Why Creditsafe?
Career Growth: Clear progression paths with opportunities to take on technical leadership roles or explore adjacent areas of interest.
Continuous Learning: Access to extensive learning resources, with dedicated time each week for skill development.
Work-Life Balance: Flexible working hours and hybrid work options to support a balanced lifestyle.
Innovation: Be part of a dynamic environment where cutting-edge technologies in cloud computing and data management are at the forefront of our projects.

Company Benefits:
- Competitive salary
- Hybrid mode
- Pension
- Medical insurance
- Cab facility for women
- Dedicated gaming area
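Since the responsibilities call out monitoring completeness and accuracy metrics, here is a small hedged pandas sketch of a data-quality report; the frame and business key are illustrative, not Creditsafe data.

```python
# Illustrative frame and business key; not real company data.
import pandas as pd


def quality_report(df: pd.DataFrame, key: str) -> dict:
    """Per-column completeness plus the duplicate rate on the business key."""
    return {
        "rows": len(df),
        "completeness": (1 - df.isna().mean()).round(3).to_dict(),
        "duplicate_key_rate": round(df[key].duplicated().mean(), 3),
    }


companies = pd.DataFrame({
    "company_id": ["C1", "C2", "C2", "C4"],
    "name": ["Acme", "Beta", "Beta", None],
})
print(quality_report(companies, key="company_id"))
```

Metrics like these, tracked per batch, turn data quality from a one-off check into a monitored trend.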

Posted 1 week ago

Apply

10.0 years

0 Lacs

Chandigarh, India

On-site

We are seeking a highly experienced and hands-on Fullstack Architect to lead the design and architecture of scalable, enterprise-grade software solutions. This role requires a deep understanding of both frontend and backend technologies, cloud infrastructure, and microservices, with the ability to guide teams through technical challenges and solution delivery.

Key Responsibilities
- Architect, design, and oversee the development of full-stack applications using modern JS frameworks and cloud-native tools.
- Lead microservice architecture design, ensuring system scalability, reliability, and performance.
- Evaluate and implement AWS services (Lambda, ECS, Glue, Aurora, API Gateway, etc.) for backend solutions.
- Provide technical leadership to engineering teams across all layers (frontend, backend, database).
- Guide and review code, perform performance optimization, and define coding standards.
- Collaborate with DevOps and Data teams to integrate services (Redshift, OpenSearch, Batch).
- Translate business needs into technical solutions and communicate with cross-functional stakeholders.

Required Skills
- Deep expertise in Node.js, TypeScript, React.js, Python, Redux, and Jest.
- Proven experience designing and deploying systems using microservices architecture.
- Strong understanding of AWS services: API Gateway, ECS, Lambda, Aurora, Glue, SQS, OpenSearch, Batch.
- Hands-on with MySQL and Redshift, and writing optimized queries.
- Advanced knowledge of HTML, CSS, Bootstrap, and JavaScript.
- Familiarity with tools: VS Code, DataGrip, Jira, GitHub, Postman.
- Strong knowledge of architectural design patterns and security best practices.

Preferred
- Experience working in fast-paced product development or startup environments.
- Strong communication and mentoring skills.

Education & Experience:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 6–10 years of experience in full-stack development, with at least 4 years in an architectural or senior technical leadership role.

How to Apply:
Please send your updated CV to hiring@acmeminds.com, clearly mentioning the job code FSARCH-25 in the subject line of the email (e.g., Subject: Application for Job Code: FSARCH-25).

Location: IT Park, Chandigarh

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description: Business Analyst, Business Intelligence

Bloom Energy faces an unprecedented opportunity to change the world and how energy is generated and delivered. Our mission is to make clean, reliable energy affordable globally. Bloom’s Energy Server delivers highly reliable, resilient, always-on electric power that is clean, cost-effective, and ideal for microgrid applications. We are helping our customers power their operations without disruption and combustion. We seek a Business Analyst to join our team in one of today’s most exciting technologies. This role reports to the Business Intelligence Senior Manager in Mumbai, India.

Responsibilities:
- Develop automated tools and dashboards for various P&L line items to improve visibility and accuracy of the data
- Work closely with the leadership team to improve forecasting tools and provide accurate P&L forecasts
- Work closely with the finance team to monitor actuals versus forecast during the quarter (a small sketch follows this listing)
- Support ad hoc requests for data analysis and scenario planning from the operations team
- Deep dive into our costs and provide insights to the leadership team for increasing profitability
- Work closely with the IT team to support development of production-ready tools for automating the Services P&L

Requirements:
- Strong analytical and problem-solving skills
- Proficiency with Python, Excel and PowerPoint is a must; experience in financial planning & forecasting is a plus
- Proficiency with dashboarding tools like Tableau
- Familiarity with databases/data lakes (e.g., PostgreSQL, Cassandra, AWS RDS, Redshift, S3)
- Experience with Git or other version control software

Education:
Bachelor’s degree in Business Management, Data Analytics, Computer Science, Industrial Engineering or related fields

About Bloom Energy:
At Bloom Energy, we support a 100% renewable future. Our fuel-flexible technology offers one of the most resilient electricity solutions for a world facing unacceptable power disruptions. Our resilient platform has proven itself by powering through hurricanes, earthquakes, forest fires, extreme heat, and utility failures. Unlike backup generators, our fuel cells create no harmful local air pollutants. At the same time, Bloom is at the forefront of the transition to renewable fuels like hydrogen and biogas with new hydrogen power generation and electrolyzer solutions. Our customers include but are not limited to: manufacturing, data centers, healthcare, retail, low-income housing, colleges, and more! For more information, visit www.bloomenergy.com.
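For the actuals-versus-forecast monitoring named in the responsibilities, here is a small hedged pandas sketch; the P&L line items and figures are made up for illustration.

```python
# Made-up P&L line items and figures; illustration only.
import pandas as pd

pnl = pd.DataFrame({
    "line_item": ["Revenue", "COGS", "Opex"],
    "forecast":  [1_200_000, 700_000, 300_000],
    "actual":    [1_150_000, 720_000, 290_000],
})

# Variance in absolute terms and as a percentage of forecast.
pnl["variance"] = pnl["actual"] - pnl["forecast"]
pnl["variance_pct"] = (pnl["variance"] / pnl["forecast"] * 100).round(1)
print(pnl)
```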

Posted 1 week ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Organization: Leading global management consulting organization (one of the Big 3 consulting organizations)
Role: Sr Data Architect
Experience: 10+ years

WHAT YOU'LL DO
- Define and design future-state data architecture for HR reporting, forecasting and analysis products.
- Partner with Technology, Data Stewards and various Product teams in an Agile work stream while meeting program goals and deadlines.
- Engage with line of business, operations, and project partners to gather process improvements.
- Lead the design and build of new models to efficiently deliver financial results to senior management.
- Evaluate data-related tools and technologies and recommend appropriate implementation patterns and standard methodologies to ensure our data ecosystem is always modern.
- Collaborate with Enterprise Data Architects in establishing and adhering to enterprise standards while also performing POCs to ensure those standards are implemented.
- Provide technical expertise and mentorship to Data Engineers and Data Analysts in data architecture.
- Develop and maintain processes, standards, policies, guidelines, and governance to ensure that a consistent framework and set of standards is applied across the company.
- Create and maintain conceptual/logical data models to identify key business entities and visualize relationships.
- Work with business and IT teams to understand data requirements.
- Maintain a data dictionary consisting of table and column definitions.
- Review data models with both technical and business audiences.

YOU'RE GOOD AT
- Designing, documenting and training the team on the overall processes and process flows for the data architecture.
- Resolving technical challenges in critical situations that require immediate resolution.
- Developing relationships with external stakeholders to maintain awareness of data and security issues and trends.
- Reviewing work from other tech team members and providing feedback for growth.
- Implementing data security policies that align with governance objectives and regulatory requirements.

YOU BRING (EXPERIENCE & QUALIFICATIONS)
Essential Education
- Minimum of a Bachelor's degree in Computer Science, Engineering or a similar field
- Additional certification in Data Management or cloud data platforms like Snowflake preferred

Essential Experience & Job Requirements
- 12+ years of IT experience with a major focus on data warehouse/database related projects
- Expertise in cloud databases like Snowflake, Redshift etc.
- Expertise in data warehousing architecture, BI/analytical systems, data cataloguing, MDM etc.
- Proficient in conceptual, logical, and physical data modelling
- Proficient in documenting all the architecture-related work performed
- Proficient in data storage, ETL/ELT and data analytics tools like AWS Glue, DBT/Talend, FiveTran, APIs, Tableau, Power BI, Alteryx etc.
- Experience in building data solutions to support comp benchmarking, pay transparency/pay equity and Total Rewards use cases preferred
- Experience with cloud big data technologies such as AWS, Azure, GCP and Snowflake a plus
- Experience working with agile methodologies (Scrum, Kanban) and Meta Scrum with cross-functional teams (Product Owners, Scrum Masters, Architects, and data SMEs) a plus
- Excellent written and oral communication and presentation skills to present architecture, features, and solution recommendations is a must

Posted 1 week ago

Apply

45.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Who we are:
GMG is a global well-being company retailing, distributing and manufacturing a portfolio of leading international and home-grown brands across sport, everyday goods, health and beauty, properties and logistics sectors. Under the ownership and management of the Baker family for over 45 years, GMG is a valued partner of choice for the world's most successful and respected brands in the well-being sector. Working across the Middle East, North Africa, and Asia, GMG has introduced more than 120 brands across 12 countries. These include notable home-grown brands such as Sun & Sand Sports, Dropkick, Supercare Pharmacy, Farm Fresh, Klassic, and international brands like Nike, Columbia, Converse, Timberland, Vans, Mama Sita's, and McCain.

What you'll be doing:
We are seeking a highly motivated and experienced Senior BI Developer to design, develop, and maintain enterprise-grade Business Intelligence solutions that empower GMG's diverse retail portfolio spanning sports, healthcare, everyday goods, and outdoor businesses. This role is pivotal in transforming complex, multi-source retail data into intelligent, actionable insights that drive business performance and customer-centric decision-making. You will work closely with cross-functional stakeholders across the group to translate business requirements into scalable, efficient BI solutions. The role extends beyond traditional reporting, leveraging AI-powered analytics to be used across all business verticals. The ideal candidate will be hands-on with industry-leading BI tools such as Power BI, Tableau, and Looker, with expertise in using AI capabilities for developing dashboards and reports. Data will be sourced from a robust ecosystem including Microsoft SQL Server, Amazon Redshift, Databricks, Customer Data Platforms (CDPs), Google Analytics 4, etc. This role requires a deep understanding of retail KPIs and AI-driven insight delivery to fuel strategic initiatives across GMG's multi-billion-dollar operations. You will play a key role in enabling a data-driven culture and enhancing business impact through intelligent, next-generation dashboards and automation.

Job Description:
1. Dashboards, Reports, and Data Visualization: Design and implement interactive BI dashboards in tools such as Power BI, Tableau, or Looker, leveraging AI to supercharge execution. Collaborate with data scientists to embed machine learning models into dashboards (e.g., demand forecasting, price elasticity, inventory optimization). Customize visualizations to align with business KPIs, usability standards, and business analysis goals. Drive implementation of UI/UX best practices in dashboard design, with a strong focus on mobile-responsive layouts for optimal user experience.
2. Query and Analysis: Write, review, and optimize complex SQL queries across multiple data sources. Implement Natural Language Processing (NLP) features such as NLG-based summaries and chatbot-style interfaces that answer business questions.
3. BI Platform Administration: Lead innovation by introducing new BI/AI technologies, tools, and frameworks. Oversee administration, configuration, and upgrades of BI tools. Monitor system health, manage user access, and ensure performance tuning. Establish governance policies for BI environments and metadata management.
Advanced Data Modeling and ETL: Review the design and implementation of data models and ETL processes in collaboration with data engineering team to get scalable and optimized data set for BI platforms to be used for enterprise level data analysis. Ensure data quality, consistency, and optimized data architecture and performance. 5. Stakeholder Collaboration and Requirements Gathering: Engage with business leaders to understand strategic needs and translate them into scalable BI solutions. Prioritize BI projects, ensuring timely delivery and alignment with business goals. Train business users and advocate for data-driven decision-making. 6. Documentation: Document data models and business rules in Business Requirement Documents (BRDs) and relevant organizational process assets. Conduct end users training and knowledge-sharing sessions with team. 7. Data Governance, Security and Compliance: Implement role-based access control, data masking, and other data protection mechanisms in compliance with internal and external regulations. Collaborate with data governance teams to uphold data privacy and integrity standards. Experience: Master or bachelor’s degree in STEM or equivalent. Hands-on expertise of implementing BI dashboards through AI. Skilled in designing user-friendly dashboards with AI-enhanced insights and natural language summaries. 8 to 10 Years of overall work experience and 7 to 9 years relevant experience of Power BI/Tableau/Looker implementation preferably in retail domain. Job-specific skills: Advanced expertise in Power BI, Tableau, or Looker. Hands-on experience with AWS, Google Cloud or Azure for BI/AI workloads. Expertise in NLP/NLG integration, embedding ML Models and BI/AI concepts with best practices. Strong SQL skills for data retrieval and manipulation. Hands on experience with ETL processes and tools. Problem-solving and critical-thinking abilities. Excellent communication and presentation skills. Any AI or BI certification would be preferred. Why Join GMG? At GMG, we're dedicated to nurturing a vibrant, inclusive, and engaging work environment that promotes growth, innovation, and well-being. Join us in our mission to inspire victories that make the world better – for our team, our consumers, and our communities. If you're seeking a challenging role where you can make a significant impact, we'd love to hear from you. Apply today to become a part of our journey. What we offer An opportunity to become part of diverse teams with international exposure Comprehensive family medical insurance Family residency sponsorship and flight allowances Up to 30% discount in our premium retail sports brand stores Up to 20% discount in our pharmacy chain
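As an illustration of the SQL work behind such dashboards, the snippet below sketches a common retail KPI, month-over-month same-store revenue growth, in Redshift-flavored SQL held in a Python constant; the fct_sales table and its columns are hypothetical, not from the posting.

# Minimal sketch: month-over-month same-store revenue growth.
# fct_sales and its columns are invented for the example.
KPI_SQL = """
WITH monthly AS (
    SELECT store_id,
           DATE_TRUNC('month', sold_at) AS sales_month,
           SUM(net_amount) AS revenue
    FROM fct_sales
    GROUP BY 1, 2
)
SELECT store_id,
       sales_month,
       revenue,
       revenue / NULLIF(LAG(revenue) OVER (PARTITION BY store_id
                                           ORDER BY sales_month), 0) - 1
           AS mom_growth
FROM monthly
ORDER BY store_id, sales_month;
"""

A query like this would typically be materialized upstream and surfaced as a Power BI or Tableau visual rather than run ad hoc from the dashboard.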

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Requirements
Minimum of 5-7 years of experience in a data science role, with a focus on building and deploying models in production
Advanced skills in feature engineering and selection
Proficiency in Python and SQL for data analysis and modelling
Strong understanding of machine learning algorithms and statistical modelling techniques
Experience with AWS cloud services, particularly Redshift for data storage and analysis, and SageMaker for model deployment
Excellent communication and leadership skills
Strong problem-solving and critical thinking abilities with a keen attention to detail
Proven track record of successfully leading and delivering data science projects within a fast-paced environment is a must
Proactive mindset and strong coding skills

Nice-to-Have
Experience with big data technologies and frameworks (e.g., Hadoop, Spark)
Familiarity with deep learning frameworks (e.g., TensorFlow, PyTorch)
Knowledge of data visualization tools and techniques for communicating insights to non-technical stakeholders

Primary Skills: Data Science, Python, AWS Cloud Services, SQL
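To make the Redshift-plus-modelling loop concrete, here is a minimal, self-contained sketch under stated assumptions: the churn_features table and connection URL are invented, and Redshift is reached over its Postgres-compatible protocol using SQLAlchemy, pandas, and scikit-learn.

import pandas as pd
from sqlalchemy import create_engine
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical connection; Redshift accepts the Postgres wire protocol.
engine = create_engine(
    "postgresql+psycopg2://etl_user:change-me@example-cluster:5439/analytics")

# churn_features is an invented table standing in for engineered features.
df = pd.read_sql("SELECT tenure_days, orders_90d, churned FROM churn_features",
                 engine)

X_train, X_test, y_train, y_test = train_test_split(
    df[["tenure_days", "orders_90d"]], df["churned"],
    test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

In a production setting of the kind the role describes, a model like this would then be packaged and deployed behind a SageMaker endpoint rather than scored locally.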

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Saras Analytics: We are an ecommerce-focused, end-to-end data analytics firm assisting enterprises and brands in data-driven decision making to maximize business value. Our suite of work spans extraction, transformation, visualization, and analysis of data delivered via industry-leading products, solutions, and services. Our flagship product is Daton, an ETL tool. We have now ventured into building easy-to-use data visualization solutions on top of Daton. And lastly, we have a world-class data team which understands the story the numbers are telling and articulates the same to CXOs, thereby creating value.

What Are We Today: We are a bootstrapped, profitable, and fast-growing (2x y-o-y) startup with old-school value systems. We play in a very exciting space at the intersection of data analytics and ecommerce, both of which are game changers. Today, the global economy faces headwinds forcing companies to downsize, outsource, and offshore, creating strong tailwinds for us. We are an employee-first company, valuing and encouraging talent, and we live by those values at all stages of our work without compromising on the value we create for our customers. We strive to make Saras a career, and not just a job, for the talented folks who have chosen to work with us.

The Role: A Business Analyst (BA) at Saras Analytics plays a pivotal role in understanding and translating business needs into technical requirements. The BA collaborates with stakeholders to gather and analyse data, identify trends, and contribute to the development and improvement of solutions. This role requires a combination of business acumen and technical proficiency to enhance the efficiency and effectiveness of the ecommerce platform.

Technical Requirements (Mandatory)
• 5+ years of experience in relevant job roles
• Excellent communication and analytical skills
• Excellent data interpretation skills, with a proven track record of deriving actionable business insights from large data sets
• Proficiency in client communication and stakeholder management
• Advanced general aptitude, problem solving, and critical thinking ability
• Experience creating executive presentations in PowerPoint
• Experience performing cost-benefit analyses and creating proposals
• Working knowledge of the SDLC and collaborating with product teams
• Understanding of data manipulation tools – SQL and Excel
• Visualization tools – at least one of Power BI, Tableau, Data Studio/Looker
• Cloud databases – BigQuery, Snowflake, Redshift, or Azure
• Working knowledge of Agile practices and sprint management in tools like JIRA or Asana

Behavioural Expectations (Mandatory) Individual traits essential to be successful in this role:
• Self-motivation to work with abstract guidance
• Proactiveness to add business value
• Empathy to deal with multiple external and internal stakeholders
• Adaptability to work in cross-functional and high-performing teams
• Prioritization to visualize the big picture and maximize the impact of outputs
• Perform RCAs and identify new data-driven opportunities for adding business value or improving process productivity
• Communicate effectively about delivery: requirements, timelines, dependencies, priorities, etc., ensuring timely delivery without compromising on quality standards
• Act as an SME by leading trainings for business stakeholders and mentoring juniors as needed

Other Requirements (Preferred)
• Bachelor's or master's degree in relevant fields like MBA, Computer Science Engineering, Finance, or Statistics from reputed academic institutions
• 2+ years of experience working in eCommerce or digitally native organizations
• Conceptual understanding of eCommerce/omnichannel tools and platforms
• Sales channels – Shopify, Amazon, Walmart
• Marketing – customer purchase cycle
• Analytics – Google Analytics

Benefits
• Competitive salary and performance-based bonuses
• Comprehensive health insurance, relocation benefits, and other allowances
• Professional development opportunities and continued learning
• A collaborative and innovative work environment
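For illustration of the data-interpretation work described above, the query below sketches repeat-purchase rate by acquisition cohort, a staple ecommerce insight, in Redshift-flavored SQL; the orders table and its columns are hypothetical, not from the posting.

# Minimal sketch: repeat-purchase rate by acquisition month.
# The orders table and its columns are invented for the example.
REPEAT_RATE_SQL = """
WITH firsts AS (
    SELECT customer_id, MIN(order_date) AS first_order
    FROM orders
    GROUP BY customer_id
)
SELECT DATE_TRUNC('month', f.first_order) AS cohort_month,
       COUNT(DISTINCT o.customer_id) AS customers,
       COUNT(DISTINCT CASE WHEN o.order_date > f.first_order
                           THEN o.customer_id END) AS repeat_customers
FROM orders o
JOIN firsts f ON o.customer_id = f.customer_id
GROUP BY 1
ORDER BY 1;
"""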

Posted 1 week ago

Apply

0.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Designation: Assistant Manager
Experience: 5 to 8 years
Location: Chennai, Tamil Nadu, India (CHN)

Job Description:
5+ years of experience working in web, product, marketing, or other related analytics fields to solve marketing/product business problems
4+ years of experience in designing and executing experiments (A/B and multivariate) with a deep understanding of the statistics behind hypothesis testing
Proficient in alternative A/B testing methods like DiD (difference-in-differences), synthetic control, and other causal inference techniques
5+ years of technical proficiency in SQL, Python or R, and data visualization tools like Tableau
5+ years of experience in manipulating and analyzing large, complex datasets (e.g. clickstream data), constructing data pipelines (ETL), and working with big data technologies (e.g., Redshift, Spark, Hive, BigQuery), cloud-platform solutions, and visualization tools like Tableau
3+ years of experience in web analytics, analyzing website traffic patterns and conversion funnels
5+ years of experience in building ML models (e.g., regression, clustering, trees) for personalization applications
Demonstrated ability to drive strategy, execution, and insights for AI-native experiences across the development lifecycle (ideation, discovery, experimentation, scaling)
Outstanding communication skills with both technical and non-technical audiences
Ability to tell stories with data, influence business decisions at a leadership level, and provide solutions to business problems
Ability to manage multiple projects simultaneously to meet objectives and key deadlines

Responsibilities:
Drive measurement strategy and lead the end-to-end A/B testing process for areas of web optimization such as landing pages, user funnel, navigation, checkout, product lineup, pricing, search, and monetization opportunities
Analyze web user behavior at both visitor and session level using clickstream data, anchoring to key web metrics, and identify user behavior through engagement and pathing analysis
Leverage AI/GenAI tools for automating tasks and building custom implementations
Use data, strategic thinking, and advanced scientific methods, including predictive modeling, to enable data-backed decision making for Intuit at scale
Measure performance and impact of various product releases
Demonstrate strategic and systems thinking to solve business problems and influence strategic decisions using data storytelling
Partner with GTM, Product, Engineering, and Design teams to drive analytics projects end to end
Build models to identify patterns in traffic and user behavior to inform acquisition strategies and optimize for business outcomes

Skills: 5 to 8 years in the DA domain: web, product, and marketing analytics; A/B testing methods like DiD and synthetic control; constructing data pipelines (ETL); big data technologies (e.g., Redshift, Spark, Hive, BigQuery); SQL, Python or R, and Tableau; web analytics, analyzing website traffic patterns and conversion funnels; ML models (e.g., regression, clustering, trees); managerial skills

Job Snapshot
Updated Date: 24-07-2025
Job ID: J_3934
Location: Chennai, Tamil Nadu, India
Experience: 5 - 8 Years
Employee Type: Permanent
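Since the role centers on the statistics behind hypothesis testing, here is a minimal worked example of a two-proportion z-test for an A/B conversion experiment in Python with statsmodels; the visitor and conversion counts are made up for illustration.

from statsmodels.stats.proportion import proportions_ztest

# Illustrative counts only: control vs. treatment on a landing-page test.
conversions = [420, 480]        # converted visitors per variant
visitors = [10_000, 10_000]     # total visitors per variant

z_stat, p_value = proportions_ztest(conversions, visitors,
                                    alternative="two-sided")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # reject H0 if p < alpha (e.g., 0.05)

With these made-up counts the treatment lifts conversion from 4.2% to 4.8%, and the test tells you whether that gap is distinguishable from noise at the chosen significance level.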

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka

On-site

At Takeda, we are guided by our purpose of creating better health for people and a brighter future for the world. Every corporate function plays a role in making sure we — as a Takeda team — can discover and deliver life-transforming treatments, guided by our commitment to patients, our people and the planet. People join Takeda because they share in our purpose. And they stay because we're committed to an inclusive, safe and empowering work environment that offers exceptional experiences and opportunities for everyone to pursue their own ambitions.

Job ID: R0158759
Date posted: 07/24/2025
Location: Bengaluru, Karnataka

I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda's Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description
The Future Begins Here. At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, the city which is India's epicenter of innovation, has been selected to be home to Takeda's recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda's ICC We Unite in Diversity. Takeda is committed to creating an inclusive and collaborative workplace where individuals are recognized for their backgrounds and the abilities they bring to our company. We are continuously improving our collaborators' journey in Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

The Opportunity: The Data Engineer will work directly with architects and product owners on the delivery of data pipelines and platforms for structured and unstructured data as part of a transformational data program. This data program will include an integrated data flow with end-to-end control of data, internalization of numerous systems and processes, broad enablement of automation and near-real-time data access, efficient data review and query, and enablement of disruptive technologies for next-generation trial designs and insight derivation. We are primarily looking for people who love taking complex data and making it easy to use. As a Data Engineer you will provide leadership to develop and execute highly complex and large-scale data structures and pipelines to organize, collect, and standardize data that generates insights and addresses reporting needs, and you will interpret and integrate advanced techniques to ingest structured and unstructured data across a complex ecosystem.

Delivery & Business Accountabilities:
Build and maintain the technical solutions required for optimal ingestion, transformation, and loading of data from a wide variety of data sources and large, complex data sets, with a focus on clinical and operational data.
Develop data profiling and data quality methodologies and embed them into the processes that transform data across the systems.
Manage and influence the data pipeline and analysis approaches, using different technologies, big data preparation, programming, and loading, as well as initial exploration in the process of searching for and finding data patterns.
Use data science input and requests, translating them from data exploration over large (billions of records) and unstructured data sets into mathematical algorithms, and use various tooling, from programming languages to newer artificial intelligence and machine learning tools, to find data patterns and to build and optimize models.
Lead and implement ongoing tests in the search for solutions in data modelling; collect and prepare training data, tune the data, and optimize algorithm implementations to test, scale, and deploy future models.
Conduct and facilitate analytical assessments, conceptualizing business needs and translating them into analytical opportunities.
Lead the development of technical roadmaps and approaches for data analyses to find patterns, design data models, and scale models to a managed production environment within the current or an evolving technical landscape.
Influence and manage data exploration from analysis to scalable models, work independently, and make quick decisions in complex data analysis and modelling.

Skills and Qualifications:
Bachelor's degree or higher in a quantitative discipline such as Statistics, Mathematics, Engineering, Computer Science, Econometrics, or information sciences such as business analytics or informatics
5+ years of experience in a data engineering role in an enterprise environment
Strong experience with ETL/ELT design and implementation in the context of large, disparate, and complex datasets
Demonstrated experience with a variety of relational database and data warehousing technologies such as AWS Redshift, Athena, RDS, and BigQuery
Demonstrated experience with big data processing systems and distributed computing technologies such as Databricks, Spark, SageMaker, Kafka, and Tidal/Airflow
Demonstrated experience with DevOps tools such as GitLab, Terraform, Ansible, and Chef
Experience developing solutions on cloud computing services and infrastructure in the data and analytics space
Solution-oriented, enabler mindset
Prior experience with data engineering projects and teams at an enterprise level

Preferred:
Understanding or application of machine learning and/or deep learning
Significant experience in an analytical role in the healthcare industry

WHAT TAKEDA ICC INDIA CAN OFFER YOU: Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead in building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training, and a diverse and inclusive network of colleagues who will support your career growth.

BENEFITS: It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career. Amongst our benefits are:
Competitive salary + performance annual bonus
Flexible work environment, including hybrid working
Comprehensive healthcare insurance plans for self, spouse, and children
Group term life insurance and group accident insurance programs
Employee Assistance Program
Broad variety of learning platforms
Diversity, equity, and inclusion programs
Reimbursements – home internet & mobile phone
Employee referral program
Leaves – paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 days)

ABOUT ICC IN TAKEDA: Takeda is leading a digital revolution. We're not just transforming our company; we're improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization. #Li-Hybrid

Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
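As a rough illustration of the ingestion-and-quality pattern described above, here is a minimal PySpark sketch; the S3 paths, field names, and the 1% null-rate threshold are all assumptions for the example, not Takeda's actual pipeline.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clinical-ingest-sketch").getOrCreate()

# Hypothetical raw clinical-operations feed landed as JSON on S3.
raw = spark.read.json("s3://example-bucket/raw/visits/")
clean = (raw
         .withColumn("visit_date", F.to_date("visit_date"))
         .dropDuplicates(["subject_id", "visit_date"]))

# Simple embedded data-quality gate: abort if subject_id is too often missing.
null_rate = clean.filter(F.col("subject_id").isNull()).count() / max(clean.count(), 1)
assert null_rate < 0.01, f"subject_id null rate too high: {null_rate:.2%}"

clean.write.mode("overwrite").parquet("s3://example-bucket/curated/visits/")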

Posted 1 week ago

Apply

0.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka

On-site

About Us: Observe.AI is transforming customer service with AI agents that speak, think, and act like your best human agents—helping enterprises automate routine customer calls and workflows, support agents in real time, and uncover powerful insights from every interaction. With Observe.AI, businesses boost automation, deliver faster, more consistent 24/7 service, and build stronger customer loyalty. Trusted by brands like Accolade, Prudential, Concentrix, Cox Automotive, and Included Health, Observe.AI is redefining how businesses connect with customers—driving better experiences and lasting relationships at every touchpoint.

The Opportunity: We are looking for a Senior Data Engineer with strong hands-on experience in building scalable data pipelines and real-time processing systems. You will be part of a high-impact team focused on modernizing our data architecture, enabling self-serve analytics, and delivering high-quality data products. This role is ideal for engineers who love solving complex data challenges, have a growth mindset, and are excited to work on both batch and streaming systems.

What you'll be doing:
Build and maintain real-time and batch data pipelines using tools like Kafka, Spark, and Airflow.
Contribute to the development of a scalable LakeHouse architecture using modern data formats such as Delta Lake, Hudi, or Iceberg.
Optimize data ingestion and transformation workflows across cloud platforms (AWS, GCP, or Azure).
Collaborate with Analytics and Product teams to deliver data models, marts, and dashboards that drive business insights.
Support data quality, lineage, and observability using modern practices and tools.
Participate in Agile processes (sprint planning, reviews) and contribute to team knowledge sharing and documentation.
Contribute to building data products for inbound (ingestion) and outbound (consumption) use cases across the organization.

Who you are:
5-8 years of experience in data engineering or backend systems with a focus on large-scale data pipelines.
Hands-on experience with streaming platforms (e.g., Kafka) and distributed processing tools (e.g., Spark or Flink).
Working knowledge of LakeHouse formats (Delta/Hudi/Iceberg) and columnar storage like Parquet.
Proficient in building pipelines on AWS, GCP, or Azure using managed services and cloud-native tools.
Experience with Airflow or similar orchestration platforms.
Strong in data modeling and in optimizing data warehouses like Redshift, BigQuery, or Snowflake.
Exposure to real-time OLAP tools like ClickHouse, Druid, or Pinot.
Familiarity with observability tools such as Grafana, Prometheus, or Loki.
Some experience integrating data with MLOps tools like MLflow, SageMaker, or Kubeflow.
Ability to work with Agile practices using JIRA and Confluence and to participate in engineering ceremonies.

Compensation, Benefits and Perks:
Excellent medical insurance options and free online doctor consultations
Yearly privilege and sick leaves as per the Karnataka S&E Act
Generous holidays (national and festive), recognition, and parental leave policies
Learning & development fund to support your continuous learning journey and professional development
Fun events to build culture across the organization
Flexible benefit plans for tax exemptions (i.e. meal card, PF, etc.)

Our Commitment to Inclusion and Belonging: Observe.AI is an Equal Employment Opportunity employer that proudly pursues and hires a diverse workforce. Observe.AI does not make hiring or employment decisions on the basis of race, color, religion or religious belief, ethnic or national origin, nationality, sex, gender, gender identity, sexual orientation, disability, age, military or veteran status, or any other basis protected by applicable local, state, or federal laws or prohibited by Company policy. Observe.AI also strives for a healthy and safe workplace and strictly prohibits harassment of any kind. We welcome all people. We celebrate diversity of all kinds and are committed to creating an inclusive culture built on a foundation of respect for all individuals. We seek to hire, develop, and retain talented people from all backgrounds. Individuals from non-traditional backgrounds and historically marginalized or underrepresented groups are strongly encouraged to apply. If you are ambitious, make an impact wherever you go, and are ready to shape the future of Observe.AI, we encourage you to apply.
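To ground the streaming-LakeHouse stack named above, here is a minimal sketch of a Kafka-to-Delta pipeline with Spark Structured Streaming; the broker address, topic, and S3 paths are hypothetical, and it assumes Spark is launched with the delta-spark and spark-sql-kafka packages.

from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("events-to-delta")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

# Hypothetical broker and topic; raw message bytes are kept as strings here.
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "interaction-events")
          .load()
          .selectExpr("CAST(value AS STRING) AS payload", "timestamp"))

# Append to a Delta table on S3; the checkpoint makes the stream restartable.
query = (events.writeStream.format("delta")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
         .outputMode("append")
         .start("s3://example-bucket/lake/events/"))
query.awaitTermination()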

Posted 1 week ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job description: Schneider Electric is looking for an AWS data cloud engineer with a minimum of 5 years of experience in AWS data lake implementation. You will be responsible for creating and managing data ingestion and transformation, making data ready for consumption in the analytical layer of the data lake; for managing and monitoring the data quality of the data lake using Informatica PowerCenter; and for creating dashboards from the analytical layer of the data lake using Tableau or Power BI.

Your Role: We are looking for strong AWS Data Engineers who are passionate about cloud technology. Your responsibilities are:
Design and Develop Data Pipelines: Create robust pipelines to ingest, process, and transform data, ensuring it is ready for analytics and reporting.
Implement ETL/ELT Processes: Develop Extract, Transform, Load (ETL) or Extract, Load, Transform (ELT) workflows to seamlessly move data from source systems to data warehouses, data lakes, and lakehouses using open-source and AWS tools.
Implement Data Quality Rules: Perform data profiling to assess source data quality, identify data anomalies, and create data quality scorecards using Informatica PowerCenter.
Design Data Solutions: Leverage your analytical skills to design innovative data solutions that address complex business requirements and drive decision-making. Interact with product owners to understand the needs of data ingestion and data quality rules.
Adopt DevOps Practices (optional skill): Utilize DevOps methodologies and tools for continuous integration and deployment (CI/CD), infrastructure as code (IaC), and automation to streamline and enhance our data engineering processes.

Qualifications (Your Skills and Experience):
Minimum of 3 to 5 years of experience in AWS data lake implementation.
Minimum of 2 to 3 years of knowledge of Informatica PowerCenter.
Proficiency with AWS tools: demonstrable experience using AWS Glue, AWS Lambda, Amazon Kinesis, Amazon EMR, Amazon Athena, Amazon DynamoDB, Amazon CloudWatch, Amazon SNS, and AWS Step Functions.
Understanding of Spark, Hive, Kafka, Kinesis, Spark Streaming, and Airflow.
Understanding of relational databases like Oracle, SQL Server, and MySQL.
Programming skills: strong experience with modern programming languages such as Python and Java.
Expertise in data storage technologies: in-depth knowledge of data warehouse and database technologies and big-data ecosystem technologies such as AWS Redshift, AWS RDS, and Hadoop.
Experience with AWS data lakes: proven experience working with AWS data lakes on AWS S3 to store and process both structured and unstructured data sets.
Expertise in developing Business Intelligence dashboards in Tableau or Power BI is a plus.
Good knowledge of a project and portfolio management suite of tools is a plus.
Should be well versed in Agile principles of implementation; familiarity with SAFe (Scaled Agile) principles is a plus.

About Us: Schneider Electric™ creates connected technologies that reshape industries, transform cities, and enrich lives. Our 144,000 employees thrive in more than 100 countries. From the simplest of switches to complex operational systems, our technology, software, and services improve the way our customers manage and automate their operations. Help us deliver solutions that ensure Life Is On everywhere, for everyone, and at every moment: https://youtu.be/NlLJMv1Y7Hk. Great people make Schneider Electric a great company. We seek out and reward people for putting the customer first, being disruptive to the status quo, embracing different perspectives, continuously learning, and acting like owners. We want our employees to reflect the diversity of the communities in which we operate. We welcome people as they are, creating an inclusive culture where all forms of diversity are seen as a real value for the company. We're looking for people with a passion for success, on the job and beyond. See what our people have to say about working for Schneider Electric: https://youtu.be/6D2Av1uUrzY

Our EEO statement: Schneider Electric aspires to be the most inclusive and caring company in the world, by providing equitable opportunities to everyone, everywhere, and ensuring all employees feel uniquely valued and safe to contribute their best. We mirror the diversity of the communities in which we operate, and we 'embrace different' as one of our core values. We believe our differences make us stronger as a company and as individuals, and we are committed to championing inclusivity in everything we do. This extends to our candidates and is embedded in our hiring practices. You can find out more about our commitment to Diversity, Equity and Inclusion here and our DEI Policy here. Schneider Electric is an Equal Opportunity Employer. It is our policy to provide equal employment and advancement opportunities in the areas of recruiting, hiring, training, transferring, and promoting all qualified individuals regardless of race, religion, color, gender, disability, national origin, ancestry, age, military status, sexual orientation, marital status, or any other legally protected characteristic or conduct.

Primary Location: IN-Karnataka-Bangalore
Schedule: Full-time
Unposting Date: Ongoing
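For a feel of the AWS Glue work listed above, below is a minimal sketch of a Glue job script that reads raw CSV from S3, applies a simple data-quality rule, and writes Parquet to the analytical layer; the bucket names and columns are invented, and it assumes the script runs inside a Glue job where the awsglue libraries are available.

import sys
from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])   # passed in by the Glue runtime
glue = GlueContext(SparkContext.getOrCreate())
spark = glue.spark_session

# Hypothetical raw zone: CSV meter readings landed on S3.
raw = spark.read.option("header", "true").csv("s3://example-raw/meters/")

# One illustrative data-quality rule: keep only valid, non-negative readings.
curated = (raw.withColumn("reading_kwh", F.col("reading_kwh").cast("double"))
              .filter(F.col("reading_kwh") >= 0))

curated.write.mode("append").parquet("s3://example-analytics/meters/")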

Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

BASIC QUALIFICATIONS:
3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience with data modeling, warehousing, and building ETL pipelines
Experience with statistical analysis packages such as R, SAS, and Matlab
Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling

DESCRIPTION: The SCOT AIM team is seeking an exceptional Business Intelligence Engineer to join our innovative inventory automation analytics team. This pioneering role will be instrumental in building and scaling analytics solutions that drive critical business decisions across inventory management, supply chain optimization, and channel performance. You will work closely with Scientists, Product Managers, other Business Intelligence Engineers, and Supply Chain Managers to build scalable, high-insight, high-impact products and own improvements to business outcomes within your area, enabling worldwide and local solutions for retail.

Key job responsibilities:
Work with Product Managers to understand customer behaviors, spot system defects, and benchmark our ability to serve our customers, improving a wide range of internal products that impact selection decisions both nationally and regionally.
Design and develop end-to-end analytics solutions to monitor and optimize supply chain metrics, including but not limited to availability, placement, inventory efficiency, and capacity planning and management at various business hierarchies.
Create interactive dashboards and automated reporting systems to enable deep-dive analysis of inventory performance across multiple dimensions (ASIN/GL/Sub-category/LOB/Brand level).
Build predictive models for seasonal demand forecasting and inventory planning, supporting critical business events and promotions.
Create scalable solutions for tracking deal inventory readiness for small events and channel share management.
Partner with category and business stakeholders to identify opportunities for process automation and innovation.

A day in the life:
Pioneering new analytical approaches and establishing best practices.
Building solutions from the ground up with significant autonomy.
Driving innovation in supply chain analytics through automation and advanced analytics.
Making a direct impact on business performance through data-driven decision making.

About the team: Have you ever ordered a product on Amazon and, when that box with the smile arrived, wondered how it got to you so fast? Wondered where it came from and how much it cost Amazon? If so, Amazon's Supply Chain Optimization Technology (SCOT) organization is for you. At SCOT, we solve deep technical problems and build innovative solutions in a fast-paced environment working with smart, passionate team members. (Learn more about SCOT: http://bit.ly/amazon-scot)

PREFERRED QUALIFICATIONS:
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets
Experience working directly with business stakeholders to translate between data and business needs

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
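As one concrete example of the inventory-efficiency metrics this role monitors, the query below sketches weeks-of-cover in Redshift-flavored SQL; the inventory_snapshot table and its columns are hypothetical, not Amazon's schema.

# Minimal sketch: weeks-of-cover, a standard inventory-efficiency metric.
# inventory_snapshot and its columns are invented for the example.
WEEKS_OF_COVER_SQL = """
SELECT asin,
       on_hand_units,
       forecast_units_4w / 4.0 AS avg_weekly_demand,
       on_hand_units / NULLIF(forecast_units_4w / 4.0, 0) AS weeks_of_cover
FROM inventory_snapshot
WHERE snapshot_date = CURRENT_DATE
ORDER BY weeks_of_cover;   -- lowest cover first: likeliest out-of-stock risks
"""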

Posted 1 week ago

Apply
