We can provide custom software development solutions tailored to your specific needs: applications built from scratch, designed specifically to address your unique requirements and business processes.
Hyderabad
INR 15.0 - 20.0 Lacs P.A.
Remote
Full Time
Role & responsibilities

Role Overview:
We are looking for an experienced Tableau-to-Power BI Migration Developer to lead and execute the migration of enterprise Tableau dashboards and reports to Power BI. The ideal candidate has strong expertise in both Tableau and Power BI, with a proven track record of delivering large-scale BI migration projects, including semantic layer alignment, visual redesign, performance optimization, and user training.

Responsibilities:
- Analyze existing Tableau dashboards/reports and source systems to define the migration strategy to Power BI.
- Perform detailed mapping of Tableau workbook components (data sources, dimensions, measures, calculations, LODs) to the Power BI semantic model.
- Redesign and rebuild reports and dashboards in Power BI, ensuring parity with or improvement upon the Tableau implementations.
- Optimize Power BI reports for performance, usability, and scalability.
- Collaborate with Data Engineering teams to ensure proper data integration and modeling in Power BI.
- Develop and maintain documentation on migration processes, transformations, and Power BI best practices.
- Conduct user acceptance testing (UAT) with business users and incorporate feedback.
- Support training and enablement sessions for end users transitioning from Tableau to Power BI.
- Identify and automate repeatable patterns to improve migration efficiency.
- Troubleshoot and resolve issues during the migration lifecycle.

Required Skills & Qualifications:
- 5 to 7 years of overall experience in Business Intelligence and data visualization.
- 3+ years of hands-on experience with Power BI: developing reports, dashboards, DAX calculations, Power Query transformations, and data models.
- 3+ years of hands-on experience with Tableau: developing dashboards, data extracts, calculated fields, LOD expressions, and advanced visualizations.
- Strong experience in Tableau-to-Power BI migrations or similar BI platform migrations.
- Solid understanding of data modeling principles (star schema, snowflake schema) and data visualization best practices.
- Proficiency in DAX, M language (Power Query), and SQL.
- Experience working with enterprise data sources: data warehouses, data lakes, RDBMS, APIs.
- Familiarity with version control (Git) and CI/CD pipelines for Power BI artifacts is a plus.
- Strong analytical and problem-solving skills.
- Excellent communication and documentation skills.

Preferred Skills:
- Experience with BI migration automation tools.
- Experience working in Agile delivery environments.
- Experience with the Azure Data Platform or similar cloud BI architectures.
- Familiarity with Power BI service administration and workspace governance.
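The "automate repeatable patterns" responsibility above often starts with inventorying Tableau workbook components before rebuilding them in Power BI. A .twb file is plain XML, so calculated fields can be extracted with the standard library. The element layout below is a simplified assumption (real workbooks nest these deeper and vary by Tableau version), and the sample workbook content is invented for illustration:

```python
# Sketch: inventory calculated fields in a Tableau workbook (.twb) to plan
# their DAX equivalents in Power BI. The XML layout here is a simplified
# assumption; real workbooks vary by Tableau version.
import xml.etree.ElementTree as ET

SAMPLE_TWB = """
<workbook>
  <datasources>
    <datasource name='Sales'>
      <column caption='Profit Ratio' name='[Calculation_1]'>
        <calculation class='tableau' formula='SUM([Profit]) / SUM([Sales])'/>
      </column>
      <column caption='Regional Sales' name='[Calculation_2]'>
        <calculation class='tableau'
                     formula='{FIXED [Region] : SUM([Sales])}'/>
      </column>
    </datasource>
  </datasources>
</workbook>
"""

def inventory_calculations(twb_xml: str) -> list[dict]:
    """Return one record per calculated field, flagging LOD expressions,
    which usually need a redesign (e.g. CALCULATE/ALLEXCEPT) in DAX."""
    root = ET.fromstring(twb_xml)
    records = []
    for ds in root.iter("datasource"):
        for col in ds.iter("column"):
            calc = col.find("calculation")
            if calc is None:
                continue
            formula = calc.get("formula", "")
            records.append({
                "datasource": ds.get("name"),
                "field": col.get("caption"),
                "formula": formula,
                # LOD expressions ({FIXED ...}, {INCLUDE ...}) start with "{"
                "is_lod": formula.lstrip().startswith("{"),
            })
    return records
```

Flagging LOD expressions up front matters because they have no one-to-one DAX translation and typically drive the semantic-model redesign effort in a migration.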
Hyderabad
INR 15.0 - 22.5 Lacs P.A.
Remote
Full Time
Role & responsibilities

We are looking for a highly skilled and self-driven Senior Python Developer with strong hands-on experience in Django and FastAPI to join our growing development team. This is a remote, full-time role based in India, working on scalable, high-performance backend systems. The ideal candidate is well-versed in building RESTful APIs, has experience with Dockerized deployments, and can work in a fast-paced, agile environment.

Responsibilities:
- Design, develop, and maintain robust, scalable backend systems using Python, Django, and FastAPI.
- Develop and expose secure, well-documented REST APIs and/or GraphQL endpoints.
- Collaborate closely with frontend developers (ReactJS preferred) to integrate APIs and deliver cohesive user experiences.
- Design and optimize relational databases (PostgreSQL/MySQL) and NoSQL solutions as needed.
- Implement containerization and deployment pipelines using Docker and CI/CD tools.
- Ensure application security, performance, and responsiveness at scale.
- Participate in code reviews, testing, and system monitoring with a focus on best practices and maintainability.
- Work in Agile/Scrum teams and contribute to sprint planning, estimation, and delivery.

Preferred candidate profile:
- 6-10 years of experience in backend development using Python.
- Proven expertise in the Django and FastAPI frameworks.
- Experience building and consuming RESTful APIs.
- Solid knowledge of Docker and deploying containerized applications.
- Experience working with relational databases (e.g., PostgreSQL, MySQL).
- Strong debugging, performance tuning, and problem-solving skills.
- Good understanding of software design principles, modular architecture, and clean code practices.
- Familiarity with version control systems such as Git.
- Frontend integration experience, especially with ReactJS or similar JavaScript frameworks.
- Familiarity with CI/CD pipelines and cloud platforms (AWS/GCP/Azure).
- Experience with asynchronous programming in Python.
- Exposure to unit testing, integration testing, and test-driven development (TDD).
- Experience with logging/monitoring tools such as Prometheus, Grafana, or the ELK stack.
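The asynchronous-programming requirement above underpins FastAPI's performance story: async endpoints let one worker overlap many I/O-bound calls. A minimal stdlib-only sketch of the pattern (the `fetch_user` coroutine and its names are illustrative, standing in for a database or HTTP call):

```python
# Minimal sketch of Python's async/await model, the foundation FastAPI
# builds on for high-throughput endpoints. fetch_user simulates an
# I/O-bound call; all names here are illustrative.
import asyncio

async def fetch_user(user_id: int) -> dict:
    await asyncio.sleep(0.1)  # stands in for a DB/HTTP round trip
    return {"id": user_id, "name": f"user-{user_id}"}

async def fetch_all(n: int) -> list[dict]:
    # gather() runs the coroutines concurrently on one event loop,
    # so n x 0.1s of simulated I/O completes in ~0.1s, not n * 0.1s.
    return await asyncio.gather(*(fetch_user(i) for i in range(n)))

if __name__ == "__main__":
    print(asyncio.run(fetch_all(5)))
```

Inside a FastAPI app the same idea appears as `async def` path operations that `await` database or HTTP clients, with the framework driving the event loop.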
Hyderabad
INR 15.0 - 25.0 Lacs P.A.
Remote
Full Time
Role & responsibilities

We are seeking a talented and motivated Big Data Developer to design, develop, and maintain large-scale data processing applications. You will work with modern Big Data technologies, leveraging PySpark and Java/Scala, to deliver scalable, high-performance data solutions on AWS. The ideal candidate is skilled in big data frameworks, cloud services, and modern CI/CD practices.

Responsibilities:
- Design and develop scalable data processing pipelines using PySpark and Java/Scala.
- Build and optimize data workflows for batch and real-time data processing.
- Integrate and manage data solutions on AWS services such as EMR, S3, Glue, Airflow, RDS, and DynamoDB.
- Implement containerized applications using Docker, Kubernetes, or similar technologies.
- Develop and maintain APIs and microservices/domain services as part of the data ecosystem.
- Participate in continuous integration and continuous deployment (CI/CD) processes using Jenkins or similar tools.
- Optimize and tune the performance of Big Data applications and databases (both relational and NoSQL).
- Collaborate with data architects, data engineers, and business stakeholders to deliver end-to-end data solutions.
- Ensure best practices in data security, quality, and governance are followed.

Must-Have Skills:
- Proficiency with Big Data frameworks and programming using PySpark and Java/Scala.
- Experience designing and building data pipelines for large-scale data processing.
- Solid knowledge of distributed data systems and best practices in performance optimization.

Preferred Skills:
- Experience with AWS services (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar).
- Familiarity with container orchestration tools (Docker, Kubernetes, or similar).
- Knowledge of CI/CD pipelines (e.g., Jenkins or similar tools).
- Hands-on experience with relational databases and SQL.
- Experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
- Exposure to microservices or API gateway frameworks.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in Big Data development.
- Strong analytical, problem-solving, and communication skills.
- Experience working in an Agile environment is a plus.
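The pipeline style described above (extract, transform, aggregate) maps naturally onto PySpark's lazy transformations. Since Spark needs a cluster, here is the same pattern sketched with pure-Python generators: each stage is lazy and composable, and nothing executes until a terminal step consumes the stream. Field names and the toy CSV format are invented for the example:

```python
# Lazy, composable batch-pipeline sketch. Each stage yields records on
# demand, analogous to PySpark transformations; the aggregate is the
# terminal "action" that drives execution.
from typing import Iterable, Iterator

def read_events(raw: Iterable[str]) -> Iterator[dict]:
    """Extract: parse 'user,amount' lines, skipping blanks."""
    for line in raw:
        line = line.strip()
        if not line:
            continue
        user, amount = line.split(",")
        yield {"user": user, "amount": float(amount)}

def filter_large(events: Iterator[dict], threshold: float) -> Iterator[dict]:
    """Transform: keep only events at or above a spend threshold."""
    return (e for e in events if e["amount"] >= threshold)

def total_by_user(events: Iterator[dict]) -> dict:
    """Aggregate: terminal step that finally pulls data through the
    pipeline (analogous to a Spark groupBy().sum() plus an action)."""
    totals: dict = {}
    for e in events:
        totals[e["user"]] = totals.get(e["user"], 0.0) + e["amount"]
    return totals

raw = ["a,10.0", "b,3.0", "", "a,7.5", "b,40.0"]
result = total_by_user(filter_large(read_events(raw), 5.0))
```

In PySpark the same shape would be a `DataFrame` read, a `filter`, and a `groupBy().agg()`, with Spark distributing each stage across the cluster.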
Hyderabad
INR 10.0 - 20.0 Lacs P.A.
Remote
Full Time
We are seeking a skilled Azure Data Engineer with strong Power BI capabilities to design, build, and maintain enterprise data lakes on Azure, ingest data from diverse sources, and develop insightful reports and dashboards. This role requires hands-on experience in Azure data services, ETL processes, and BI visualization to support data-driven decision-making.

Key Responsibilities:
- Design and implement end-to-end data pipelines using Azure Data Factory (ADF) for batch ingestion from various enterprise sources.
- Build and maintain a multi-zone Medallion Architecture data lake in Azure Data Lake Storage Gen2 (ADLS Gen2), including raw staging with metadata tracking, silver-layer transformations (cleansing, enrichment, schema standardization), and gold-layer curation (joins, aggregations).
- Perform data processing and transformations using Azure Databricks (PySpark/SQL) and ADF, ensuring data lineage, traceability, and compliance.
- Integrate data governance and security using Databricks Unity Catalog, Azure Active Directory (Azure AD), role-based access control (RBAC), and access control lists (ACLs) for fine-grained access.
- Develop and optimize analytical reports and dashboards in Power BI, including KPI identification, custom visuals, responsive designs, and export functionality to Excel/Word.
- Conduct data modeling, mapping, and extraction during discovery phases, aligning with functional requirements for enterprise analytics.
- Collaborate with cross-functional teams to define schemas, handle API-based ingestion (REST/OData), and implement audit trails, logging, and compliance with data protection policies.
- Participate in testing (unit, integration, performance), UAT support, and production deployment, ensuring high availability and scalability.
- Create training content and provide knowledge transfer on the data lake implementation and Power BI usage.
- Monitor and troubleshoot pipelines, optimizing for batch processing efficiency and data quality.
Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 5+ years of experience in data engineering, with at least 3 years focused on Azure cloud services.
- Proven expertise in Azure Data Factory (ADF) for ETL/orchestration, Azure Data Lake Storage Gen2 (ADLS Gen2) for data lake management, and Azure Databricks for Spark-based transformations.
- Strong proficiency in Power BI for report and dashboard development, including DAX, custom visuals, data modeling, and integration with Azure data sources (e.g., DirectQuery or Import modes).
- Hands-on experience with the Medallion Architecture (raw/silver/gold layers), data wrangling, and multi-source joins.
- Familiarity with API ingestion (REST, OData) from enterprise systems.
- Solid understanding of data governance tools such as Databricks Unity Catalog, Azure AD for authentication, and RBAC/ACLs for security.
- Proficiency in SQL, PySpark, and data modeling techniques for dimensional and analytical schemas.
- Experience with agile methodologies and the ability to deliver phased outcomes.

Preferred Skills:
- Certifications such as Microsoft Certified: Azure Data Engineer Associate (DP-203) or Power BI Data Analyst Associate (PL-300).
- Knowledge of Azure Synapse Analytics, Azure Monitor for logging, and integration with hybrid/on-premises sources.
- Experience in domains such as energy, mobility, or enterprise analytics, with exposure to moderate data volumes.
- Strong problem-solving skills, including handling rate limits, pagination, and dynamic data in APIs.
- Familiarity with tools like Azure DevOps for CI/CD and version control of pipelines/notebooks.

What We Offer:
- Opportunity to work on cutting-edge data transformation projects.
- Competitive salary and benefits package.
- Collaborative environment with access to advanced Azure tools and training.
- Flexible work arrangements and professional growth opportunities.
If you are a proactive engineer passionate about building scalable data solutions and delivering actionable insights, apply now.
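The raw -> silver -> gold flow from the Medallion Architecture described above can be sketched in plain Python for illustration. In the role itself these steps would be PySpark jobs on Databricks over ADLS Gen2; the column names and sample records below are invented for the example:

```python
# Medallion-style layering sketch: raw rows are cleansed into a silver
# layer, then curated into a gold aggregate ready for a Power BI dataset.
# Schema and data are hypothetical.
from datetime import date

def to_silver(raw_rows: list[dict]) -> list[dict]:
    """Silver layer: cleanse and standardize the schema
    (drop malformed rows, normalize types, trim strings)."""
    silver = []
    for row in raw_rows:
        if row.get("reading") is None:  # drop unusable records
            continue
        silver.append({
            "site": str(row["site"]).strip().upper(),
            "day": date.fromisoformat(row["day"]),
            "reading": float(row["reading"]),
        })
    return silver

def to_gold(silver_rows: list[dict]) -> dict:
    """Gold layer: curated aggregate (total reading per site)."""
    gold: dict = {}
    for row in silver_rows:
        gold[row["site"]] = gold.get(row["site"], 0.0) + row["reading"]
    return gold

raw = [
    {"site": " hyd ", "day": "2024-01-01", "reading": "12.5"},
    {"site": "HYD", "day": "2024-01-02", "reading": "7.5"},
    {"site": "blr", "day": "2024-01-01", "reading": None},  # malformed
]
gold = to_gold(to_silver(raw))
```

Keeping each layer's contract explicit (what the silver schema guarantees, what the gold table aggregates) is what makes the downstream Power BI model stable as new sources are ingested.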